text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Draft PathArray Description The PathArray tool places copies of a selected shape along a selected path, which can be a Draft Wire, a Draft BSpline, and similar edges. The PathArray tool can be used on 2D shapes created with the Draft Workbench, but can also be used on many types of 3D objects such as those created with the Part, PartDesign, or Arch Workbenches. To position copies in an orthogonal array use Draft Array; to position copies at specified points use Draft PointArray; to create copies or clones, and manually place them use Draft Move, Draft Rotate, and Draft Clone. Object arranged along a path How to use - Select an object that you wish to distribute. - Select a path object or some edges along which the object will be distributed. - Press the Draft PathArray button. -. The base object should be centred around the origin, even if the path starts somewhere else. Options There are no options for this tool. Either it works with the selected objects or not. Properties - DATABase: specifies the object to duplicate in the path. - DATAPathObj: specifies the path object. - DATAPathSubs: specifies the sub-elements (edges) of the path object. This property does not yet appear in the property editor. - DATACount: specifies the number of copies of the base object. - DATAAlign: if it is True the copies are aligned to the path; otherwise they are left in their default orientation. - Note: in certain cases the shape will appear flat, in reality it may have moved in the 3D space, so instead of using a flat view, change the view to axonometric. - DATAXlate: specifies a translation vector (x, y, z) to displace each copy along the path. - Note: when DATAAlign is True, the vector is relative to the local tangent, normal or binormal coordinates; otherwise the vector is relative to the global coordinates. Scripting See also: Draft API and FreeCAD Scripting Basics. The PathArray tool can be used in macros and from the Python console by using the following function: PathArray = makePathArray(baseobject, pathobject, count, xlate=None, align=False, pathobjsubs=[]) - Creates a PathArrayobject from the baseobject, by placing as many as countcopies along pathobject. - If pathobjsubsis given, it is a list of sub-objects of pathobject, and the copies are created along this shorter path. - If xlateis given, it is a FreeCAD.Vectorthat indicates an additional displacement to move the base point of the copies. - If alignis Truethe copies are aligned to the tangent, normal or binormal of the pathobjectat the point where the copy is placed. Example: import FreeCAD,Draft p1 = FreeCAD.Vector(500, -1000, 0) p2 = FreeCAD.Vector(1500, 1000, 0) p3 = FreeCAD.Vector(3000, 500, 0) p4 = FreeCAD.Vector(4500, 100, 0) spline = Draft.makeBSpline([p1, p2, p3, p4]) object = Draft.makePolygon(3, 500) PathArray = Draft.makePathArray(object, spline, 6) Technical explanation for the Align property When DATAAlign is False, the placement of the copied shapes is easy to understand; they are just moved to a different position in their original orientation. Object arranged along a closed path in the original orientation When DATAAlign is True, the positioning of the shapes becomes a bit more complex: - First, Frenet coordinate systems are built on the path: X is tangent, Z is normal, Y is binormal. - Then the original object is copied to every on-path coordinate system, so that the global origin is matched with the on-path coordinate system origin. Object arranged along a closed path; description of components and path The following images show how the array is produced, depending on which plane the path is. Path on XY Plane: Object arranged along a closed path which is aligned to the XY plane Path on XZ Plane: Object arranged along a closed path which is aligned to the XZ plane Path on YZ Plane: Object arranged along a closed path which is aligned to the YZ plane As you reorient the path but not the object, the result is consistent: the object remains aligned to the path the way it was before reorienting the path. Editor: thank you to user DeepSOIC for this explanation. -
https://www.freecadweb.org/wiki/Draft_PathArray/en
CC-MAIN-2019-43
refinedweb
680
61.87
right click context menu does not appear only for Java 1.7, AWT TextField I have been debugging some issues an application has been having with Java 1.7 versus older versions. A new problem I have encountered is that the right click context menu does not function in any TextField. It works fine when running it with any previous version of Java. I have tried coding a simple test with a Frame, Panel and TextField to see if it might be something else in the more complex application that was causing it, but the simple test class has the same problem. I have searched for other people having the same issue, but I have not found anything comparable. This seems like a huge change from one version to the next and I am surprised that I am not finding this mentioned anywhere else. Can someone point me to anything that discusses this issue that I am having? My simple test: import java.awt.*; import java.awt.event.*; import java.util.*; class testF3 extends Panel { public static void main(String args[]) { Frame f = new Frame(); Panel p = new Panel(); f.setLayout(new BorderLayout()); f.add("North", p); TextField tf1 = new TextField("", 20); p.add(tf1); Dimension medm = f.getSize(); medm.height = 100; medm.width = 200; f.setSize(medm); f.setVisible(true); } }
https://www.java.net/forum/topic/general-programming-help/right-click-context-menu-does-not-appear-only-java-17-awt-textfield
CC-MAIN-2015-32
refinedweb
221
75.61
Created on 2015-10-21 08:56 by None Becoming, last changed 2019-04-05 10:29 by serhiy.storchaka. This issue is now closed. The transparency methods of tkinter.PhotoImage seem to be missing. Presumably, they would go something like: def transparency_get(self, x, y): """Returns a boolean indicating if the pixel at (x,y) is transparent. """ return self.tk.call(self.name, 'transparency', 'get', x, y) def transparency_set(self, x, y, boolean=True): """Make pixel at (x,y) transparent if boolean is true, opaque otherwise. """ self.tk.call(self.name, 'transparency', 'set', x, y, boolean) I've created a PR for this issue. New changeset 50866e9ed3e4e0ebb60c20c3483a8df424c02722 by Serhiy Storchaka (Zackery Spytz) in branch 'master': bpo-25451: Add transparency methods to tkinter.PhotoImage. (GH-10406)
https://bugs.python.org/issue25451
CC-MAIN-2019-18
refinedweb
126
50.23
In this chapter, we discuss the parsing of XML documents using an XML processor. Parsing is the process of reading a document and dissecting it into its elements and attributes, which can then be analyzed. In XML, parsing is done by an XML processor, the most fundamental building block of a Web application. In this book, Apache Xerces is used as an implementation of the XML processor, and we show how to design and develop Web applications. As the first step of this chapter, we begin to set up your programming environment in Xerces and Java. Next, we discuss how to read and parse a simple XML document. We use various examples, including well-formed and valid documents with Document Type Definitions (DTDs) or XML Schema, a document that contains namespaces. We finish by explaining how to do basic programming using common APIs: DOM and SAX. As explained in Chapter 1, an XML processor is a software module that reads XML documents and provides application programs with access to their content and structure. The XML 1.0 specification from W3C precisely defines the functions of an XML processor. The behavior of a conforming XML processor is highly predictable, so using other conforming XML processors should not be difficult. Figure shows the role of an XML processor. An XML processor is a bridge between an XML document and an application. The XML processor parses and generates an XML document. The application uses an API to access objects that represent part of the XML document. DOM and SAX are well-known APIs for accessing the structure of an XML document. Throughout this book, you will learn the details of these APIs. XML processors are categorized as validating and non-validating processors (see Section 1.4.2 for an explanation of validity and well-formedness). When reading an XML document, a non-validating processor checks the well-formedness constraints as defined in the XML 1.0 specification and reports any violations. A validating processor must check the validity constraints and the well-formedness constraints. In this book, we use the Java version of Apache Xerces, a validating (and non-validating) XML processor. Xerces was developed by the Apache Xerces team (one of the authors is a main member of the development team) and is one of the most robust and faithful implementations of an XML processor. In the first edition of this book, the XML for Java Parser (aka XML4J), developed by another one of the authors, was used. XML for Java was donated to Apache, an open source community in 1999, and now it is called Xerces. If you want to use Xerces commercially, please read the license document on the Apache Xerces Web site (). The complete current release of Xerces is included on the accompanying CD-ROM. You can also download the latest version of Xerces from the Apache Xerces Web site. Before installing Xerces, you need to set up your Java programming environment. All the programs used in this book have been tested against the Java 2 SDK (versions 1.2 and 1.3). The setup steps are as follows: Install the Java 2 SDK (version 1.2 or 1.3). Install Xerces version 1.4.3. Add Xerces's jar files to the CLASSPATH environment variable. Xerces is written in Java, so you first need to have Java 2 installed on your system. If needed, you can download the latest release from the Sun Microsystems Web site at. In this book, we assume you have installed the Java 2 SDK in C:\jdk. The second step in setting up your programming environment is to install Xerces. In developing our sample programs, we used Xerces version 1.4.3. The CD-ROM that accompanies this book contains that version. To install Xerces: Install Xerces on your system. On the CD-ROM, move to the directory containing Xerces. Unzip Xerces-J-bin.1.4.3.zip. We assume you have installed Xerces in C:\xerces-1_4_3. Note that because Xerces is written in Java, theoretically it can run on any operating system platform on any hardware that supports Java. However, platforms might differ, for example, in how to set the environment variable. We use Windows (95/98/Me/NT/2000) in our command-line input/output examples in this book. If your platform is other than these, you should replace the command prompts and certain shell commands with those appropriate for your platform. The third step in setting up your programming environment is to set the CLASSPATH environment variable to tell the Java interpreter where to find the Java libraries. To execute the sample programs in this book, you must have in your CLASSPATH the jar files c:\xerces-1_4_3\xerces.jar and c:\xerces-1_4_3\xercesSamples.jar. You might also want to include the current directory (.) and the sample directory of the CD-ROM (R:\samples) in your CLASSPATH. You can set both of these in Windows 95/98/Me by using the following command: c:\xerces-1_4_3>set CLASSPATH=".;c:\xerces-1_4_3\xerces.jar;c:\ xerces-1_4_3\xercesSamples.jar" You might also want to add this command line to your profile to avoid having to type it every time you bring up a new command prompt. In Windows 95/98/Me, you add it to the autoexec.bat file. In Windows NT, you add it by right-clicking My Computer and then left-clicking System Properties and the Environment tab; then add the new variable CLASSPATH (similar operations are needed in Windows 2000). NOTE When you are working with Xerces, you might want to know what the version is. The easiest way to find out is to type the following commands: R:\samples>java org.apache.xerces.framework.Version Xerces 1.4.3 If you are using the Java 2 SDK provided by IBM, you should be careful which version of Xerces you add in your CLASSPATH. Because in IBM's Java 2 SDK 1.3, Xerces is located in the directory jdk\jre\lib\ext, all the jar files in this directory are recognized by the Java interpreter before reading CLASSPATH. If the version of Xerces is old, you would face some errors. To avoid this, you can simply delete xerces.jar or replace it with the latest version. Another way to use an appropriate version of Xerces is to specify -Djava.ext.dirs=nulldir when you execute the Java command. This option tells the interpreter not to load the jar files in the ext directory. To see whether the installation was successful, move to the installation directory (c:\xerces-1_4_3) and enter the following commands: c:\xerces-1_X_X>java sax.Counter data/personal.xml data/personal.xml: 2.160 ms (37 elems, 18 attrs, 140 spaces, 128 chars) This program parses an XML document and reports the number of elements, attributes, and so on. An alternative way to tell the Java interpreter where to find the jar files is to enter the following command: c:\xerces-1_4_3>java -classpath "c:\xerces-1_4_3\xerces.jar;c:\ xerces-1_4_3\xercesSamples.jar" sax.SAXCount data/personal.xml data/ personal.xml: 260 ms (37 elems, 18 attrs, 140 spaces, 128 chars) Now you are ready to try the sample programs on the CD-ROM. Go to the samples directory, which contains all the samples in this chapter. Note that in our samples we use "R" for the CD-ROM drive; you should substitute the correct letter for your own CD-ROM drive. The samples directory contains sample programs for each chapter, and package names are assigned to the classes. For example, the SimpleParse class used in this chapter has the package name chap02. Enter the following command to launch the program SimpleParse to read the document department.xml: R:\samples>java chap02.SimpleParse chap02/department.xml You will see nothing. However, this is expected because this sample program produces no output if successful. All the sample programs in this book are included on the CD-ROM. Installation instructions for the tools used in the chapters are described in the readme.html file stored in directories for each chapter. Take a few moments to explore the CD-ROM before moving on
http://codeidol.com/community/java/parsing-xml-documents/12582/
CC-MAIN-2017-17
refinedweb
1,369
57.06
CacheKey class¶ (Shortest import: from brian2.utils.caching import CacheKey) - class brian2.utils.caching.CacheKey[source]¶ Mixin class for objects that will be used as keys for caching (e.g. Variableobjects) and have to define a certain “identity” with respect to caching. This “identity” is different from standard Python hashing and equality checking: a Variablefor example would be considered “identical” for caching purposes regardless which object (e.g. NeuronGroup) it belongs to (because this does not matter for parsing, creating abstract code, etc.) but this of course matters for the values it refers to and therefore for comparison of equality to other variables. Classes that mix in the CacheKeyclass should re-define the _cache_irrelevant_attributesattribute to note all the attributes that should be ignored. The property _state_tuplewill refer to a tuple of all attributes that were not excluded in such a way; this tuple will be used as the key for caching purposes. Attributes Details
https://brian2.readthedocs.io/en/latest/reference/brian2.utils.caching.CacheKey.html
CC-MAIN-2022-40
refinedweb
154
55.13
This set of Python Scripting Questions & Answers focuses on “Files”. 1. Which is/are the basic I/O connections in file? a) Standard Input b) Standard Output c) Standard Errors d) All of the mentioned View Answer Explanation: Standard input, standard output and standard error. Standard input is the data that goes to the program. The standard input comes from a keyboard. Standard output is where we print our data with the print keyword. Unless redirected, it is the terminal console. The standard error is a stream where programs write their error messages. It is usually the text terminal. 2. What is the output of this program? import sys print 'Enter your name: ', name = '' while True: c = sys.stdin.read(1) if c == '\n': break name = name + c print 'Your name is:', name If entered name is sanfoundry a) sanfoundry b) sanfoundry, sanfoundry c) San d) None of the mentioned View Answer Explanation: In order to work with standard I/O streams, we must import the sys module. The read() method reads one character from the standard input. In our example we get a prompt saying “Enter your name”. We enter our name and press enter. The enter key generates the new line character: \n. Output: Enter your name: sanfoundry Your name is: sanfoundry 3. What is the output of this program? import sys sys.stdout.write(' Hello\n') sys.stdout.write('Python\n') a) Compilation Error b) Runtime Error c) Hello Python d) Hello Python View Answer Explanation: None Output: Hello Python 4. Which of the following mode will refer to binary data? a) r b) w c) + d) b View Answer Explanation: Mode Meaning is as explained below: r Reading w Writing a Appending b Binary data + Updating. 5. What is the pickling? a) It is used for object serialization b) It is used for object deserialization c) None of the mentioned d) All of the mentioned View Answer Explanation: Pickle is the standard mechanism for object. 6. What is unpickling? a) It is used for object serialization b) It is used for object deserialization c) None of the mentioned d) All of the mentioned View Answer Explanation: We have been working with simple textual data. What if we are working with objects rather than simple text? For such situations, we can use the pickle module. This module serializes Python objects. The Python objects are converted into byte streams and written to text files. This process is called pickling. The inverse operation, reading from a file and reconstructing objects is called deserializing or unpickling. 7. What is the correct syntax of open() function? a) file = open(file_name [, access_mode][, buffering]) b) file object = open(file_name [, access_mode][, buffering]) c) file object = open(file_name) d) none of the mentioned View Answer Explanation: Open() function correct syntax with the parameter details as shown below: file object = open(file_name [, access_mode][, buffering]) Here is parameters’ detail: file_name: The file_name argument is a string value that contains the name of the file that you want to access. access_mode: The access_mode determines the mode in which the file has to be opened, i.e., read, write, append, etc. A complete list of possible values is given below in the table. This is optional parameter and the default file access mode is read (r). buffering: If the buffering value is set to 0, no buffering will take place. If the buffering value is 1, line buffering will be performed while accessing a file. If you specify the buffering value as an integer greater than 1, then buffering action will be performed with the indicated buffer size. If negative, the buffer size is the system default(default behavior). 8. What is the output of this program? fo = open("foo.txt", "wb") print "Name of the file: ", fo.name fo.flush() fo.close() a) Compilation Error b) Runtime Error c) No Output d) Flushes the file when closing them View Answer Explanation: The method flush() flushes the internal buffer. Python automatically flushes the files when closing them. But you may want to flush the data before closing any file. 9. Correct syntax of file.writelines() is? a) file.writelines(sequence) b) fileObject.writelines() c) fileObject.writelines(sequence) d) none of the mentioned View Answer Explanation: The method writelines() writes a sequence of strings to the file. The sequence can be any iterable object producing strings, typically a list of strings. There is no return value. Syntax Following is the syntax for writelines() method: fileObject.writelines( sequence ). 10. Correct syntax of file.readlines() is? a) fileObject.readlines( sizehint ); b) fileObject.readlines(); c) fileObject.readlines(sequence) d) none of the mentioned View Answer Explanation: The. Syntax Following is the syntax for readlines() method: fileObject.readlines( sizehint ); Parameters sizehint — This is the number of bytes to be read from the file. Sanfoundry Global Education & Learning Series – Python. To practice all scripting questions on Python, here is complete set of 1000+ Multiple Choice Questions and Answers.
https://www.sanfoundry.com/python-scripting-questions-answers/
CC-MAIN-2018-30
refinedweb
823
58.79
#include <mqueue.h> #include <time.h> #include <mqueue.h> m: If the message queue is full, and the timeout has already expired by the time of the call, mq_timedsend() returns immediately. On success, mq_send() and mq_timedsend() return zero; on error, −1 is returned, with errno set to indicate the error. The queue was full, and the O_NONBLOCK flag was set for the message queue description referred to by mqdes. The descriptor specified in mqdes was invalid or not opened for writing. The call was interrupted by a signal handler; see signal(7). The call would have blocked, and abs_timeout was invalid, either because tv_sec was less than zero, or because tv_nsec was less than zero or greater than 1000 million. msg_len was greater than the mq_msgsize attribute of the message queue. The call timed out before a message could be transferred. For an explanation of the terms used in this section, see attributes(7). On Linux, mq_timedsend() is a system call, and mq_send() is a library function layered on top of that system call. mq_close(3), mq_getattr(3), mq_notify(3), mq_open(3), mq_receive(3), mq_unlink(3), mq_overview(7), time(7)
http://manpages.courier-mta.org/htmlman3/mq_send.3.html
CC-MAIN-2019-18
refinedweb
190
66.13
Author(s): Tanveer Hur. Statistics Ljung-Box or Durbin Watson — Which test is more powerful Durbin Watson is more powerful but there is a catch. Read on to know more. When it comes to statistical testing, one of the most important factors that we look for is the power of the test, which may be briefly defined as follows: Power of a test: The probability that the test will reject the null hypothesis when the alternate hypothesis is true. In simple words higher the probability of a test to detect the True Positive, the higher its power is. This will become more lucid throughout this article. We will check two statistical tests: Ljung-Box and Durbin Watson for their power and draw a conclusion of which one to use and when. Both Ljung-Box and Durbin Watson are used roughly for the same purpose i.e. to check the autocorrelation in a data series. While Ljung-Box can be used for any lag value, Durbin Watson can be used just for the lag of 1. The Null and Alternate hypotheses for both the tests are the same: H0: There is no autocorrelation in the data. H1: There exists a significant autocorrelation. We will use python libraries to carry out the experiment and the procedure of the experiment will be as follows: - Create a random data-set (no correlation case) - Carry out Ljung-Box and Durbin Watson test on it and record the output. - Repeat step 2 multiple times (1000 times) to check the probability of the test to reject the null hypothesis. Probability to give out False Positive. - Calculate the power of the test: 1 — value obtained in step 3. We first need to load all the required libraries: from statsmodels.stats.api import acorr_ljungbox from statsmodels.stats.stattools import durbin_watson import numpy as np import matplotlib.pyplot as plt We will create a random dataset first using random.normal() function from Numpy, this will create a random number picked from a standard normal distribution. sample_size = 150 random_data = [np.random.normal() for i in range(sample_size)] The two tests imported from statsmodels library can be used directly to calculate the test statistic and p-value. Here it becomes prudent to make it clear that in the case of the Durbin Watson test, we fail to reject the null hypothesis if the test-statistic is around 2 and reject the null hypothesis otherwise. In the case of the Ljung-Box test, the decision can be taken by using the p-value that the test throws. The whole logic can be given the shape of a function as shown below: def run_test(sample_size): random_data = [np.random.normal() for i in range(sample_size)] #create random data with given sample size dw = durbin_watson(random_data) if(dw > 1.8 and dw < 2.2): #A tolerance of 0.2 is kept to decide in case of DW dw = 0 else: dw = 1 ljung = float(acorr_ljungbox(random_data, lags = 1)[1]) #The acorr_ljung_box() returns both test-statistic and p-value, index of 1 is used to access the p-val. if(ljung > 0.05): #Significance level of 5% is considered ljung = 0 else: ljung = 1 return dw, ljung Both of these tests return 0 if the null hypothesis is not rejected and 1 otherwise. Ideally, the function defined above should always return 0 as we are testing the data series of random nature. A value of 1 returned by the function will be a False Positive and will be used to judge the power of these two tests. Now that we have run_test() function with us, we can call it again and again to calculate the power of these tests, but we will do it for not just a single sample size but multiple sample sizes to understand the relation of power with the size of the data. sample_sizes = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000] The sample sizes defined in the above python lists will be used to carry out this experiment and we will run the run_test() function for each sample size 1000 times. The below lines of code will do the job for us: sample_sizes = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950, 1000] number_of_runs = 1000 #Creating empty lists to contain the results later for durbin watson(dw) and ljung-box (lb) dw_data = [] lb_data = [] for sample_size in sample_sizes: x = [run_test(sample_size) for i in range(number_of_runs)] #runtest() is called 1000 times for each sample size dw = [i[0] for i in x] lb = [i[1] for i in x] dw_per = np.sum(dw)/number_of_runs #calculatng fraction of times the null hypothesis was rejected lb_per = np.sum(lb)/number_of_runs #Populating the empty lists to contain the results dw_data.extend([dw_per]) lb_data.extend([lb_per]) We now have results with us and we are now at the stage to check how the power of these two tests relate to the sample size of the data. We will use the matplotlib library to plot the results to get the insights and inferences: plt.plot(sample_sizes, lb_data, label = ‘Ljung-Box’) plt.plot(sample_sizes, dw_data, label = ‘Durbin-Watson’) plt.xlabel(‘Sample Size’); plt.ylabel(‘1-Power’) plt.legend() plt.show() The above graph shows it clearly that for a small sample size, using a Durbin-Watson test is a bad idea as it has low power but for larger sample sizes it performs better than ljung-box. In the case of Ljung-Box, the power is consistent irrespective of the sample size. So which one to use depends on the sample size you have at your hand. This article is also published on Tea Statistic Ljung-Box or Durbin Watson — Which test is more powerful
https://towardsai.net/p/l/ljung-box-or-durbin-watson%E2%80%8A-%E2%80%8Awhich-test-is-more-powerful
CC-MAIN-2022-21
refinedweb
968
60.55
Getting started tutorial part 3: displaying exposures and source tables output by processCcd.py¶ In the previous tutorial in the series you used processCcd.py to calibrate a set of raw Hyper Suprime-Cam images. Now you’ll learn how to use the LSST Science Pipelines to inspect processCcd.py’s outputs by displaying images and source catalogs in the DS9 image viewer. In doing so, you’ll be introduced to some of the LSST Science Pipelines’ Python APIs, including: - Accessing datasets with the Butler. - Displaying images in DS9 with lsst.afw.display. - Working with source catalogs using lsst.afw.table. Figure 1 In this tutorial, you’ll create an image display like this one that includes mask planes and source markers.. You’ll also need to download and install the DS9 image viewer. Launch DS9 and start a Python interpreter¶ In this tutorial, you will use an interactive Python session to control DS9. If you haven’t already, launch the DS9 application. Next, start up a Python interpreter. You can use the default Python shell (python), the IPython shell, or even run from a Jupyter Notebook. Ensure that this Python session is running from the shell where you ran setup lsst_distrib. Creating a Butler client¶ All data in the Pipelines flows through the Butler. As you saw in the previous tutorial, processCcd.py read exposures from the Butler repository and persisted outputs back to the repository. Although this Butler data repository is a directory on the filesystem ( DATA), we don’t recommend directly accessing its files. Instead, you use the Butler client from the lsst.daf.persistence module. In the Python interpreter, run: import lsst.daf.persistence as dafPersist butler = dafPersist.Butler(inputs='DATA/rerun/processCcdOutputs') The Butler client reads from the data repository specified with the inputs argument. In the previous tutorial, you created the processCcdOutputs rerun to isolate the outputs of the processCcd.py command-line task. Reruns act like repositories, so to work with the processCcd.py outputs you specifically set inputs as the path to that rerun. Tip Reruns are sub-directories of the rerun directory of a root Butler data repository. Listing available data IDs in the Butler¶ To get data from the Butler you need to know two things: the dataset type and the data ID. Every dataset stored by the Butler has a well-defined type. Tasks read specific dataset types and output other specific dataset types. The processCcd.py command reads in raw datasets and outputs calexp, or calibrated exposure, datasets (among others). It’s calexp datasets that you’ll display in this tutorial. Data IDs let you reference specific instances of a dataset. On the command line you select data IDs with --id arguments, filtering by keys like visit, ccd, and filter. Now, use the Butler client to find what data IDs are available for the calexp dataset type: butler.queryMetadata('calexp', ['visit', 'ccd'], dataId={'filter': 'HSC-R'}) The printed output is a list of (visit, ccd) key tuples for all data IDs where the filter key is the HSC-R band: [(903334, 16), (903334, 22), (903334, 23), (903334, 100), (903336, 17), (903336, 24), (903338, 18), (903338, 25), (903342, 4), (903342, 10), (903342, 100), (903344, 0), (903344, 5), (903344, 11), (903346, 1), (903346, 6), (903346, 12)] Note That example butler.queryMetadata call is equivalent to this shell command that you used in the previous tutorial: processCcd.py DATA --rerun processCcdOutputs --id filter=HSC-R --show data Get an exposure through the Butler¶ Knowing a specific data ID, let’s get the dataset with the Butler client’s get method: calexp = butler.get('calexp', dataId={'filter': 'HSC-R', 'visit': 903334, 'ccd': 23}) This calexp is an ExposureF Python object. Exposures are powerful representations of image data because they contain not only the image data, but also a variance image for uncertainty propagation, a bit mask image plane, and key-value metadata. In the next steps you’ll learn how to display an Exposure’s image and mask. Create a display¶ To display the calexp you will use the display framework, which is imported as: import lsst.afw.display as afwDisplay The display framework provides a uniform API for multiple display backends, including DS9 and LSST’s Firefly viewer. The default backend is ds9, so you can create a display like this: display = afwDisplay.getDisplay() Note You can choose a different backend by setting the backend parameter. For example: display = afwDisplay.getDisplay(backend='firefly') Display the calexp (calibrated exposure)¶ Then use the display’s mtv method to view the calexp in DS9: display.mtv(calexp) As soon as you execute the command a single Hyper Suprime-Cam calibrated exposure, the {'filter': 'HSC-R', 'visit': 903334, 'ccd': 23} data ID, should appear in the DS9 application. Notice that the DS9 display is filled with colorful regions. These are mask regions. Each color reflects a different mask bit that correspond to detections and different types of detector artifacts. You’ll learn how to interpret these colors later, but first you’ll likely want to adjust the image display. Improving the image display¶ The display framework gives you control over the image display to help bring out image details. To make masked regions semi-transparent, so that underlying image features are visible, try: display.setMaskTransparency(60) The setMaskTransparency method’s argument can range from 0 (fully opaque) to 100 (fully transparent). You can also control the colorbar scaling algorithm with the display’s scale method. Try an asinh stretch with the zscale algorithm for automatically selecting the white and black thresholds: display.scale("asinh", "zscale") Instead of an automatic algorithm like zscale (or minmax) you can explicitly provide both a minimum (black) and maximum (white) value: display.scale("asinh", -1, 30) Interpreting displayed mask colors¶ The display framework renders each plane of the mask in a different color (plane being a different bit in the mask). To interpret these colors you can get a dictionary of mask planes from the calexp and query the display for the colors it rendered each mask plane with. Run: mask = calexp.getMask() for maskName, maskBit in mask.getMaskPlaneDict().items(): print('{}: {}'.format(maskName, display.getMaskPlaneColor(maskName))) As an example, this result is: DETECTED_NEGATIVE: cyan CROSSTALK: None INTRP: green DETECTED: blue UNMASKEDNAN: None NO_DATA: orange BAD: red EDGE: yellow SUSPECT: yellow NOT_DEBLENDED: None CR: magenta SAT: green Footprints of detected sources are rendered in blue and the saturated cores of bright stars are drawn in green. Getting the source catalog generated by processCcd.py¶ Besides the calibrated exposure ( calexp), processCcd.py also creates a table of the sources it used for PSF estimation as well as astrometric and photometric calibration. The dataset type of this table is src, which you can get from the Butler: src = butler.get('src', dataId={'filter': 'HSC-R', 'visit': 903334, 'ccd': 23}) This src dataset is a SourceTable, which is a table object from the lsst.afw.table module. You’ll explore SourceTables more in a later tutorial, but you can check its length with Python’s len function: print(len(src)) The columns of a table are defined in its schema. You can print out the schema to see each column’s name, data type, and description: print(src.getSchema()) To get just the names of columns, run: print(src.getSchema().getNames()) To get metadata about a specific column, like calib_psf_used: print(src.schema.find("calib_psf_used")) Given a name, you can get a column’s values as a familiar Numpy array like this: print(src['base_PsfFlux_instFlux']) Tip If you are working in a Jupyter notebook you can see an HTML table rendering of any lsst.afw.table table object by getting an astropy.table.Table version of it: src.asAstropy() The returned Astropy Table is a view, not a copy, so it doesn’t consume much additional memory. Plotting sources on the display¶ Now you’ll overplot sources from the src table onto the image display using the Display’s dot method for plotting markers. Display.dot plots markers individually, so you’ll need to iterate over rows in the SourceTable. It’s more efficient to send a batch of updates to the display, though, so enclose the loop in a display.Buffering context, like this: with display.Buffering(): for s in src: display.dot("o", s.getX(), s.getY(), size=10, ctype='orange') Now orange circles should appear in the DS9 window over every detected source. Note Notice the getX and getY methods for getting the (x,y) centroid of each source. These methods are shortcuts, using the table’s slot system. Because the the src catalog contains measurements from several measurement plugins, slots are a way of easily using the pre-configured best measurements of a source. Clearing markers¶ Display.dot always adds new markers to the display. To clear the display of all markers, use the erase method: display.erase() Selecting PSF-fitting sources to plot on the display¶ Next, use the display to understand what sources were used for PSF measurement. The src table’s calib_psf_used column describes whether the source was used for PSF measurement. Since columns are Numpy arrays we can iterate over rows where src['calib_psf_used'] is True with Numpy’s boolean array indexing: with display.Buffering(): for s in src[src['calib_psf_used']]: display.dot("x", s.getX(), s.getY(), size=10, ctype='red') Red x symbols on the display mark all stars used by PSF measurement. Some sources might be considered as PSF candidates, but later rejected. In this statement, you can use a logical & (and) operator to combine boolean index arrays where both src['calib_psf_candidate'] is True and src['calib_psf_used'] == False as well: rejectedPsfSources = src[src['calib_psf_candidate'] & (src['calib_psf_used'] == False)] with display.Buffering(): for s in rejectedPsfSources: display.dot("+", s.getX(), s.getY(), size=10, ctype='green') Now all green plus (+) symbols on the display mark rejected PSF measurement sources. The display framework, as you’ve seen, is a useful facility for inspecting images and tables. This tutorial only covered the framework’s basic functionality. Explore the display framework documentation to learn how to display multiple images at once, and to work with different display backends. Wrap up¶ In this tutorial you’ve worked with the LSST Science Pipelines Python API to display images and tables. Here are some key takeaways: - Use the lsst.daf.persistence.Butlerclass to read and write data from repositories. - The lsst.afw.displaymodule provides a flexible framework for sending data from LSST Science Pipelines code to image displays. You used the DS9 backend in this tutorial, but other backends are available. - Exposure objects have image data, mask data, and metadata. When you display an exposure, the display framework automatically overlays mask planes. - Tables have well-defined schemas. Use methods like getSchemato understand the contents of a table. You can also use the asAstropytable method to view the table as an astropy.table.Table. Continue this tutorial series in part 4, where you’ll coadd these processed images into deeper mosaics.
https://pipelines.lsst.io/getting-started/display.html
CC-MAIN-2020-16
refinedweb
1,829
56.35
Subject: Re: [boost] [Review] Formal Review: Boost.Move From: Steven Watanabe (watanabesj_at_[hidden]) Date: 2010-05-22 19:17:58 AMDG Terry Golubiewski wrote: > I agree that boost::rv<> should not be in the move_detail namespace. > I found using the macros to be annoying and did not use them. > I did add... > > typedef boost::rv<T>& rv_ref; > > ... to my movable classes for convenience. > I never felt a need for const_rv_ref though. The main reason for using the macros is to get an automatic upgrade to real rvalue references when they are available. In Christ, Steven Watanabe Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2010/05/166940.php
CC-MAIN-2020-50
refinedweb
118
71.21
A nice and customizable cronfield Project description A nice and customizable cron field with great, easy to use UI. Features - Cron widget providing nice gentle select UI - Cron format validation - Custom django field - Ability to specify a daily run limit Requirements Fancy cron field requires Django version 1.11 up to 2.2, Python 3.6, 3.7, 3.8, and python-crontab 1.9.5. Installation python -m pip install django-fancy-cronfield-alt Basic usage Add ‘fancy_cronfield’ to your INSTALLED_APPS, then use CronField like any regular model field: from django.db import models from fancy_cronfield.fields import CronField class MyModel(models.Model): timing = CronField() Credits - django-fancy-crontab was created by @saeedsq. - Crontab API features borrowed from python-crontab. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-fancy-cronfield-alt/
CC-MAIN-2022-33
refinedweb
150
51.95
marble #include <FileStorageWatcher.h> Detailed Description Definition at line 22 of file FileStorageWatcher.h. Constructor & Destructor Documentation Definition at line 37 of file FileStorageWatcher.cpp. Definition at line 52 of file FileStorageWatcher.cpp. Member Function Documentation Add bytes to the current cache size. So FileStorageWatcher is aware of the current cache size. Definition at line 70 of file FileStorageWatcher.cpp. Definition at line 56 of file FileStorageWatcher.cpp. Getting the current size of the data stored on the disc. Definition at line 101 of file FileStorageWatcher.cpp. Stop doing things that take a long time to quit. Definition at line 96 of file FileStorageWatcher.cpp. Setting current cache size to 0. Definition at line 81 of file FileStorageWatcher.cpp. Sets the limit of the cache in bytes. Definition at line 61 of file FileStorageWatcher.cpp. Updates the name of the theme. Important for deleting behavior. - Parameters - Definition at line 87 of file FileStorageWatcher.cpp. Is emitted when a variable has changed..
https://api.kde.org/4.14-api/kdeedu-apidocs/marble/html/classMarble_1_1FileStorageWatcherThread.html
CC-MAIN-2019-47
refinedweb
161
55.4
Parse::Gnaw - Write extensible, recursive, grammars in pure perl code (grammar rules are perl arrays) and apply them to whatever parsee you want. Write extensible, recursive, grammars using pure perl code. Grammar rules are perl arrays. Apply them to whatever parsee you want. Normal parsees would be strings. Interesting parsees might be a three-dimensional array of characters. no strict 'vars'; use Parse::Gnaw; use Parse::Gnaw::String; rule('SayHello', 'Hello', 'World'); my $string=Parse::Gnaw::String->New('So Hello World of mine'); $string->parse('SayHello'); This is the second generation of Parse::Gnaw starting from revision 0.600. Gen1 stored rules as code references and that prevented recursive calls within a rule as calling the code ref for the rule would go into an infinite loop. Gen2 uses array references to store rule, with the name of the array reference variable matching the name of the rule. our $rulename = [ .... rule content .... ]; It should allow recursive rules, although it will probably get hung in an infinite loop trying to match a left recursive rule. Before you can parse anything, you have to create a grammar. Grammars are created with the "rule" subroutine, which is imported when you use Parse::Gnaw. # see t/doc_ex_rule_hi.t use Parse::Gnaw; rule('SayHello', 'H', 'I'); This will create a package scalar in your current package. The name of the scalar will be the name of the rule. The scalar will be a reference to an array that contains the rule. You can treat it like any other perl variable. print Dumper $SayHello; This will print out something like: ' } ] ]; The array shows three elements. The first is a "rule" which defines the name of the rule and also holds extra information about the rule. The next two elements are literals looking for 'H' and then 'I'. A grammar is half of the puzzle. You also need to create the thing you want to parse. A simple example is a string: # see t/doc_ex_string_dog.t use Parse::Gnaw::LinkedListDimensions1; my $ab_string=Parse::Gnaw::LinkedListDimensions1->new("dog"); $ab_string->display(); What this does is take the string 'dog' and turn it into a linked list that can be parsed. Because Data::Dumper() does not handle linked lists well (they do not display in an easy-to-read format), the display() method was created. It will output a Parse::Gnaw string-ish object of some kind in a more readable format Dumping LinkedList object LETPKG => Parse::Gnaw::Blocks::Letter # package name of letter objects CONNMIN1 => 0 # max number of connections, minus 1 HEADING_DIRECTION_INDEX => 0 HEADING_PREVNEXT_INDEX => 0 FIRSTSTART => letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa08c820) payload: 'FIRSTSTART' from: unknown connections: [ ........... , ........... ] LASTSTART => letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa18d70c) payload: 'LASTSTART' from: unknown connections: [ ........... , ........... ] CURRPTR => letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa08c820) payload: 'FIRSTSTART' from: unknown connections: [ ........... , ........... ] letters, by order of next_start_position() letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa252d2c) payload: 'd' from: file t/doc_ex_string_dog.t, line 22, column 0 connections: [ ........... , (0xa252de0) ] letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa252de0) payload: 'o' from: file t/doc_ex_string_dog.t, line 22, column 1 connections: [ (0xa252d2c) , (0xa252ef8) ] letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa252ef8) payload: 'g' from: file t/doc_ex_string_dog.t, line 22, column 2 connections: [ (0xa252de0) , ........... ] letterobject: Parse::Gnaw::Blocks::Letter=ARRAY(0xa18d70c) payload: 'LASTSTART' from: unknown connections: [ ........... , ........... ] Now that you have a Grammar and a Grammee, you can parse. The parse() method is something that Parse::Gnaw::LinkedList type objects have available. It takes in one argument, a string containing the name of the top level rule or grammar that you want to apply to the string. If the rule matches the string, parse() will return true 1. If the rule does NOT match the string, parse() will return false ''. $string->parse('rulename'); The parse() method is used for parsing an an entire string from the beginning. It is similar to putting ^ or \A at the front of a regular expression: m/^(rule)/ or m/\A(rule)/ Here's a full example of parsing a string: # see t/doc_ex_rule_and_string.t use Parse::Gnaw; use Parse::Gnaw::LinkedListDimensions1; # A Simple Rule Example rule( 'rule1', 'H', 'I' ); # A simple string example my $histring=Parse::Gnaw::LinkedListDimensions1->new("HI THERE"); ok($histring->parse('rule1'), "This is like regex 'HI THERE' =~ m/HI/ "); The rule function is used to create rules. Rules are created as package scalar in caller's namespace. The name of the scalar is the name of the rule. package main; rule( 'rule1', 'H', 'I' ); The above example will create a rule called "main::rule1". You can call Data::Dumper on $rule1 and see that it is an array reference. Rules by themselves don't match anything in a string or block of text. Rules are just a way to handle a grammar in managable chunks. They could be thought of as similar to a perl subroutine, a container for the code that does something. The first parameter is a string with the name of the rule. Everything after that defines what the rule does. These can be string literals or character classes or alternations or quantifiers, and so on. Another thing you can do inside a rule is call another rule. This rule rule('rule1','H','I'); turns into a $rule1 scalar holding a reference to an array. ' } ] ]; You may have noticed that the rule array is just an array of smaller arrays. These smaller arrays are created by the subrules passed into rule(). For example, a lit() subrule: lit('H') might create a subarray that looks like this: [ 'lit', 'H', { 'methodname' => 'lit', 'filename' => 't/doc_ex_rule_hi.t', 'linenum' => 18, 'payload' => 'H', 'package' => 'main' } ], All subrule functions return a subarray of this format so that the rule() function can easily handle them. The first element in the subarray is the method name associated with the subrule. In the above example, we used the lit() function to create the literal subarray. When the rule1 gets parsed, it will see this ['lit', 'H', {}] array and call the 'lit' *method*. Because the first element will be used as a methodname, the first element is always a string. The second element in the subarray is the payload. The payload is whatever is the important bit of information for the method. For a lit(), the important information is the actual literal you're looking for, such as 'H' above. i.e. a capital letter H. Some payloads are more complicated. For a character class, the payload is a hash reference, and the keys are the different letters in the character class. The third element is a hash reference that contains all the information for the subrule, including stuff that is only used for error reporting. For example, if a subrule throws a die"" while parsing a string, it would be helpful to know where the original subrule was defined and put that in the error message. Therefore, a number of entries in the hashref is location information as to where the subrule was originally defined in the code. In the above example, if we went to file t/doc_ex_rule_hi.t line number 18, we should see something that looks like: rule(...., lit('H'), ... ); or possibly just rule(..., 'H', ... ); If while parsing a string, an error occurs while looking for this lit('H'), the hashref contains the information needed to point back to where the original subrule was declared. The lit() function in Parse::Gnaw returns an array where the first element is 'lit'. When parsing a rule, the parser will take 'lit' and call that method on the text object being parsed. This does get a little confusing from time to time. The lit() function is contained in the Parse::Gnaw package. The 'lit' method is contained in Parse::Gnaw::Blocks::ParsingMethods package. And the Parse::Gnaw::Blocks::ParsingMethods package is a child package of Parse::Gnaw::LinkedList. The Parse::Gnaw::LinkedList package is the base package for defining any string/text object that you want to parse. As a string is parsed, the subarrays in the rule array is iterated through and whatever method is contained in the subrule is called. There is a subrule function defined in Parse::Gnaw for defining rules. There is a subrule method defined in Parse::Gnaw::Blocks::ParsingMethods for parsing the string. One reason for this split is because the way the subrule is defined in the rule is usually the same regardless of what kind of string we're parsing. But depending on what kind of string we're parsing, we might have to handle the subrule method differently. If you're just starting to use Parse::Gnaw, this split between rule/function and string/method won't stop you from using the module. But if you want to do more advanced things with the package, like create your own subrule, then you'll need to know that you need to create a subrule function and a string method. The next set of documentation covers the subrule functions that can be called to define a rule. The subrule returns an array reference which is then passed into the rule function. rule('rulename', subrule(..blah..) ); The subrule functions are defined in Parse::Gnaw. For every subrule, there is some corresponding method defined in Parse::Gnaw::Blocks::ParsingMethods. Use the 'call;' subroutine to have one rule call another rule. rule( 'rule1', 'a', 'b'); rule( 'rule2', 'c', call('rule1') ); Note: if you call a rule that doesn't exist, script will throw a warning. You can pre-declare a rule with the predeclare() function: predeclare('rule1'); rule( 'rule2', 'c', call('rule1') ); rule( 'rule1', 'a', 'b'); Recursive calls currently work as long as some text in the string is consumed before making the recursive call. This will work fine: rule( 'myrule', 'a', call('myrule') ); The above example will work fine because it has to match something (the literal 'a') before it recursively calls itself ('myrule'). However, this example below will compile, but will get stuck in an infinite loop if you try to parse with the rule: rule( 'myrule', call('myrule'), 'a'); The last example above won't work because the first thing 'myrule' does is call 'myrule' again, which then wants to call 'myrule', which then wants to call 'myrule'. The parser currently doesn't detect this is happening, and so your code will get infinite recursion until the stack crashes. When declaring rule1 that calls rule2, and you haven't yet declared rule2, you will get a warning message about the rule not existing. You can ignore this warning as long as you declare rule2 before you start parsing. But if you want to supress the warning, use predeclare() and pass in the name of the rule you want to predeclare. predeclare('rule1'); rule( 'rule2', 'c', call('rule1') ); ... later ... rule( 'rule1', 'a', 'b'); Pass the lit() function a string containing the literal value you want to match. rule( 'greeting', lit('hello') ); As a shorthand, any string passed into rule() will be assumed to be a lit(). rule( 'greeting', 'hello' ); Note that 'greeting' is the name of the rule looking for a literal 'hello'. Call this and pass in a string defining a character class. cc('aeiou'); This is like [aeiou] in perl regular expressions. Call this and pass in a string defining an inverted character class. notcc('aeiou'); This is like [^aeiou] in perl regular expressions. The alt() function is for defining grammars that contain alternations or alternatives. The rule 'fruit' might be a choice between 'banana', 'apple', and 'orange'. The three possible choices are an alternation. rule('fruit', alt([apple'], ['banana'], ['orange'])); In a perl regular expression, this might look like: m/apple|banana|orange/ The problem is that we can't use pipe '|' as a separator. So, instead, we have to use array references to bundle the different alternatives. It's a bit more typing, but we need some way to associate different pieces of alternatives, because most alternatives won't be just alternatives of just one word. rule('greetings', alt(['howdy','partner'], ['hello', 'friend'], ['hey', 'sport'])); In the 'greetings' example, the only way to know which literals are bundled together is to put them in array references. With the array references acting to bundle the alternatives, the rule is functionally equivalent to the following regexp: m/(howdy partner)|(hello friend)|(hey sport)/ without the array references, we might assume each individual word is an alternative, leading to a regexp that might look like this: m/howdy|partner|hello|friend|hey|sport/ The alt() function will create rules for each alternative which will follow the pattern "alternate_" followed by an integer. Quantifier. Pass in a series of subrules to thrifty and it will attempt to match that series as defined by the last entry in the elements passed into the function call. A perl regular expression /(abc)+/ becomes thrifty('a', 'b', 'c', '+'); All arguments but the last one are essentially put in parenthesis and associated with the quantity specifier i.e. /(abc)+/ becomes thrifty('a','b','c','+') Note the only quantifier mode supported is thrifty. Parse::Gnaw does not support greedy quantifiers. Here is a list of ways you can define the last element passed into thrifty: thrifty( ... , [3,9] ); 3 to 9 thrifty( ... , [3,] ); 3 or more thrifty( ... , [,9] ); 0 to 9 thrifty( ... , '3,9' ); 3 to 9 thrifty( ... , '3,' ); 3 or more thrifty( ... , ',9' ); 0 to 9 thrifty( ... , '3' ); 3, no more, no less thrifty( ... , '+' ); 1 or more thrifty( ... , '*' ); 0 or more thrifty( ... , '?' ); 0 or 1 Note that there is more than one way to express the min/max pair. '3', could also be specified as '3,3' as well as [3,3]. The thrifty function depends greatly on the internal 'fragment_a_rule' function. Internal subroutine. This processes the various ways to call the various Parse::Gnaw functions and fills in the pieces the caller doesn't pass in. Should always return a hash reference will all info filled in. rule('rulename', ... ); rule('rulename', {ruleinfo}, ... ); lit('literalvalue'); thrifty({quantifierinfo}, ...); The purpose of the function is to support all the above forms of calling Parse::Gnaw functions, extract the information regardless of the format, and return a generic hashref of information that can be used by any function. Internal subroutine. Used to break up a rule into pieces so that a quantifier can operate correctly. the rest of the code in this subroutine is to "reorder" the grammar. for example, this grammar: rule1 : 'a' rule2 'b' rule2 : 'c' thrifty('d') 'e' needs to rearrange the thrifty so that it can try to match a number of 'd' then it has to match 'e', then it has to match 'b' from the previous rule. if the thrifty quantifier fails, it has to try to match another 'd', then match 'e', then match 'b' from the previous rule. This can't be done treating each rule as a subroutine/function as they appear because a quantifier can't return after it's matched 'd'. it has to match 'd', then match anything in the grammar anywhere in the grammar that occurs after it, and THEN it can return. The way we're going to do this is by fragmenting/chopping up the rules any time we have a CALL or QUANTIFIER (quantifiers are actually calls) we are going to take everything AFTER THE CALL, and put it in its own rule fragment. the original call gets modified with a thencall=>rulefragment added to it. for example: rule1 : 'a' call('rule2') 'c' qty(thrifty1) 'e' rule2 : 'b' thrifty1 : 'd' we need to fragment rule1 rule1 : 'a' call('rule2') 'c' qty(thrifty1) 'e' It can be viewed as getting fragmented as follows: rule1 : 'a' call('rule2') [ 'c' qty(thrifty1) ['e']] ^frag1 ^frag2 therfore it becomes rule1 : 'a' call('rule2',thencall=>rule1frag1) rule1frag1 : 'c' call('thrifty1', thencall=>rule1frag2) rule1frag2 : 'e' this will allow all calls and quantifiers to treat the rest of the grammar after the call/quantifier as if it were part of a nested function call. The "thrifty" call doesn't return until it matches all the way to the end of the grammar, therefore, everything after the thrifty call needs to be treated as part of the thrifty function call. in the above example, when we call rule 'thrifty1', we also pass in the fact that the rule after that is 'rule1frag2' this means 'thrifty1' can match 1 'd', and then call rule1frag2 to see if the rest of the grammar matches. if it fails, we can trap the failure in the 'thrifty1' call, and then we can try to match another 'd', and then try calling rule1frag2 again to see if the rest of the rule matches THAT. Explaining it from another angle, fragment_a_rule() breaks up the rules to rearrange the *associativity* while maintaining the same functionality. The original rule: rule1 : 'a' rule2 'b' rule2 : 'c' thrifty('d') 'e' Could be flattened to look like this: rule1: 'a' ('c' thrifty('d') 'e') 'b' However the quantifier will not operate with the above associativity. What fragment_a_rule() does is break up the rule into fragments and rearrange the associativity so that the rule can be reformed as function calls. The original rules look like this: rule1 : 'a' rule2 'b' rule2 : 'c' thrifty('d') 'e' And the fragmentation turns it into this: rule1 : 'a' call('rule2',thencall=>'b') rule2 : 'c' call(('d', {thrifty}), thencall=> 'e') The 'b' and 'e' fragments in the "thencall" sections get turned into their own rule fragments, with their own rulenames. And the "thrifty" because a call() that passes in {thrifty} specific information in a hashref. The calls then can be handled like subroutines, with extra information being passed in, such as the {thrifty} information and the "thencall" rule name. For a third example, imagine this rule: rule( 'myrule', thrifty('a','+'), thrifty('b','+'), 'c'); And then imagine trying to apply the above rule to the following string: "abbbc" The rule would be fragmented to look more like this: myrule : call('a',{thrifty,'+'}, thencall=>('b',{thrifty,'+'}, thencall=>'c')); The first thrifty 'a' in the rule would match the first 'a' in teh string: "(a(aaab" THe rule would thencall the second fragment, which would start with the thrifty 'b'. This would match the first 'b' in the string. "(a(b(bbc" The rule would thencall the 'c' rule, which would fail. THe thrifty 'b' would then expand to include the second 'b' in the string. "(a(bb(bc" The thrifty 'b' would thencall the 'c' fragment again, which would fail. The thrifty 'b' would then expand to include the third 'b' in the string. "(a(bbb(c" The thrifty 'b' would thencall=> the 'c' fragment, which this time would match. "(a(bbb(c" The 'c' thencall rule would return successfully "(a(bbb(c)" The thrifty 'b' would return successfully "(a(bbb(c))" And then the thrifty 'a' would return successfully "(a(bbb(c)))" At which point, we've returned all the way to the very first rule, therefore the parse matched and succeeded. So, while the original rule might look at the matches like this: "(a)(bbb)(c)" The above associativity doesn't allow the rules to be handled like subroutines. If we fragment the rules and change the associativity to this: "(a(bbb(c)))" Then the rules match the flow of a subroutine, and we can parse a rule simply by treating each rule call() as a subroutine call. Internal subroutine. given a hash created by process_first_arguments_and_return_hash_ref(), extract all location information and copy it to a newly created hash. Internal subroutine. Pass in a string. Will call eval("") on it. If you want the eval to return a value, assign it to a special variable $eval_return. The value of $eval_return will be returned by eval_string() function. Internal subroutine. returns a string that will be used to fragment any rules. When a rule is fragmented, the fragments are named originalrule.fragment_suffix().integer_counter Call this subroutine to return the string value for fragment_suffix(). Internal subroutine. Used to get a reference to the rulebook in the caller's package. All rules for a package are placed into a package variable called (packagename)::rulebook. This variable is a hash reference where the keys are the names of the rules and the data is an array reference for each rule. Internal subroutine. Used to get a reference to a specific rulename in the caller's package. Each rule generated for a package is placed into the package as a scalar containing an array reference. The array reference contains the rule information needed to parse a string. Internal subroutine. Formats the package name into consistent string. Internal subroutine. Formats the filename into a consistent string. Internal subroutine. Formats the line number into a consistent string. Please report any bugs or feature requests to bug-parse-Gnaw at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. You can find documentation for this module with the perldoc command. perldoc Parse::Gn.
http://search.cpan.org/dist/Parse-Gnaw/lib/Parse/Gnaw.pod
CC-MAIN-2015-06
refinedweb
3,493
62.38
Posts Tagged ‘web services’ Building Better Web Services With Django (Part 2) In the first part I talked about using the Content-Type and Accept HTTP headers to allow a single website to be use both by humans and programs. In the previous part I gave a decorator which can be used to make working with JSON very easy. For our use though this isn’t great because a view decorated in this way only accepts JSON as the POST body and only returns JSON, regardless of the HTTP headers. The decorator given below relies on a django snippet to decode the Accept header for us so don’t forget to added it to your middleware. def content_type(func, common=None, json_in=None, json_out=None, form_in=None): def wrapper(req, *args, **kwargs): # run the common function, if we have one if common is not None: args, kwargs = common(req, *args, *kwargs), {} if isinstance(args, HttpResponse): return args content_type = req.META.get("content_type", "") if content_type == "application/json": args, kwargs = json_in(req, json.loads(req.raw_post_data), *args, *kwargs), {} elif content_type == "application/x-www-form-urlencoded": args, kwargs = json_in(req, req.POST, *args, *kwargs), {} else: return HttpResponse(status=415, "Unsupported Media Type") if isinstance(args, HttpResponse): return args for (media_type, q_value) in req.accepted_types: if media_type == "text/html": return func(req, args, kwargs) else: r = json_out(req, args, kwargs) if isinstance(r, HttpResponse): return r else: return HttpResponse(json.dumps(r), mimetype="application/json") return func(req, args, kwargs) return wrapper So, how can we use this decorator? Let’s imagine we’re creating a blog and we have a view which displays a post on that blog. If they user posts it should create a new comment. Firstly we create a function, common, which gets the blog object and returns a 404 if it doesn’t exist. The return of this function is passed onto all other functions as their arguments. def common(req, blog_id): try: return (get_post_by_id(int(blog_id)), ) except ValueError: return HttpResponse(status=404) Next we write two functions to handle the cases where the users POSTs a form encoded body, or some JSON. The return values of these functions are passed onto the chosen output function as the arguments. def json_in(req, json, blog_post): # process json return (blog_post ,) def form_in(req, form, blog_post): # process form return (blog_post, ) The JSON output function doesn’t need to return an HttpResponse object like a normal Django view because the output is automatically encoded as a string and wrapped in a response object. def json_out(req, blog_post): return blog_post.to_json() Finally we come to the HTML output function. This function is also called if not mime type in Accept is suitable. @content_type(common=common, json_in=json_in, json_out=json_out, form_in=form_in) def blog_post(req, blog_post): return render_to_template("post.html", {"post": blog_post}) This decorator is really little more than a sketch. Many more content types could be supported, but hopefully it gives a good example of how you can write a very flexible webservice and still reduce code duplication as much as possible. Building Better Web Services With Django (Part 1) Building such as text/html, application/json or application/x-www-form-urlencoded. A content type is sent by the client when POSTing or PUTing data, and whenever the webserver includes some data in its response. The Accept header is sent by a client to specify what content types it can accept in the response. This header has a more complicated format that Content-Type because it can used to specify a number of different content types and to give a weighting to each. When combined these two headers can be used to allow a normal user to browse the site and to allow a robot to make api calls on the same site, using the same urls. This makes it easier both for the creator of the programmer accessing your site and for you because you can easily share code between the site and your api. I’m going to outline a decorator that will let write a webservice such as this, that will support HTML and JSON output, and JSON and form encoded data as inputs. First we’ll create a decorator that parses any post data as JSON and passes it the view as the second parameter (after the request object). It will also JSON encode any return value that’s not an HTTPResponse object. import simplejson as json from django.http import HttpResponse def json_view(func): def wrap(req, *args, **kwargs): try: j = json.loads(req.raw_post_data) except ValueError: j = None resp = func(req, j, *args, **kwargs) if isinstance(resp, HttpResponse): return resp return HttpResponse(json.dumps(resp), mimetype="application/json") return wrap This decorator should be pretty easy follow, but here is an example to illustrate its use. @json_view def view(req, json, arg1, arg2): obj = get_obj(arg1, arg2) if req.method == "POST" and json is not None: # process json here return {"status": "ok"} else: return {"status": "failed"} This really cuts down on the code you need to write, but this view only handles JSON as its input and output. Next we need to parse the Accept headers and return an ordered list of content types so we can choose the preferred option. No need to reinvent the wheel, so we just pull some code from djangosnippets.org. All the parts are in place now, and in my next post we’ll create a decorator which takes these parts ands puts them together.
https://andrewwilkinson.wordpress.com/tag/web-services/
CC-MAIN-2017-17
refinedweb
914
51.07
CAMShift - automated blob 'grabbing' Hello, I've been playing with the CAMshift example but I am struggling to get it to track a ball. I grab all circular blobs, I check the Radius is within sane size and I return the bounding box (minX, minY..). I check the image to see that it has grabbed the blob, it has but I can't get it to 'track' this ball. If i detect every second, I can detect/track but I can't get .track to work :-( from SimpleCV import * import time def cam(): cam = Camera() bb1 = None while bb1 is None: img = cam.getImage() bb1 = getBB(img) fs1=[] while True: try: img1 = cam.getImage() img1 = img1.colorDistance(SimpleCV.Color.WHITE).dilate(3) fs1 = img1.track("camshift",fs1,img,bb1) fs1.drawBB(color=Color.RED) print fs1[-1].getBB() img1.show() except KeyboardInterrupt: break def getBB(img): found = "false" nimg=img nimg = img.colorDistance(SimpleCV.Color.WHITE).dilate(3) blobs = nimg.findBlobs() circles = blobs.filter([b.isCircle(1) for b in blobs]) firstblob = None for b in circles: #print "got a blob" if int(b.radius()) > 15 : found = "true" if found == "true" : firstblob = b xmin = firstblob.minX() ymin = firstblob.minY() xmax = firstblob.maxX() ymax = firstblob.maxY() #nimg.drawCircle((b.x, b.y), b.radius(),SimpleCV.Color.BLUE,4) #nimg.show() #time.sleep(5) return (xmin,ymin,xmax-xmin,ymax-ymin) else: return None cam() Please advise, if I can't automate this process... I can only select by index of got blobs and this makes it very difficult to do anything worthwhile. Essentially... I am trying to track a tennis ball in slow motion, I'm struggling to understand why this is failing - I get the red boundingBox to pop up on one frame, no more. I would understand if the same process used in getBB DID NOT detect the blob, but this doesn't make too much sense to me. Please advise!
http://help.simplecv.org/question/956/camshift-automated-blob-grabbing/
CC-MAIN-2019-35
refinedweb
322
67.45
Set the remote node attribute in a spawn attributes object #include <spawn.h> int posix_spawnattr_setnode( posix_spawnattr_t *attrp, uint32_t node); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The posix_spawnattr_setnode() function sets the remote node attribute in the spawn attributes object referenced by attrp. You must have already initialized the spawn attributes object by calling posix_spawnattr_init(). This attribute specifies the descriptor of the node on which the child process is to be spawned if POSIX_SPAWN_SETND is set in the spawn flags; to set this flag, call posix_spawnattr_setxflags(). By default, the child is spawned on the node on which you call posix_spawn(). Use the netmgr_strtond() function to obtain a valid node identifier for a named remote node. To retrieve the value of this attribute, call posix_spawnattr_getnode(). For more information about spawn attributes, see the entry for posix_spawn().
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/posix_spawnattr_setnode.html
CC-MAIN-2018-13
refinedweb
146
56.76
Contents Meta — * still in an early stage — * still in an early stage This is an old revision of the document! When making a design decision based on principles, it is necessary to find those principles which fit to the given design problem. This means the designer has to figure out which aspects need consideration. Seasoned designers will already know that by experience but there is also some guidance for that task. Principle languages interconnect principles in a way that the consideration of one principle automatically leads to other principles which are likely to be relevant in the same design situations. They point to other aspects to consider (complementary principles), to possibly downsides (contrary principles), and to principles of different granularity which might fit better to the given problem (generalizations and specializations). The following approach is how you find a characterizing set for a given design problem. Remarks: The following example shows the usage of the OOD Principle Language. It details the assessment of a solution found in the CoCoME system1). The details of the system are irrelevant here but it resembles an information system which can be found in supermarkets or other stores. There are several components which are grouped into the typical layers of an information system: The presentation layer (GUI), the application or business logic layer and the data layer. In CoCoME there is a mechanism for getting access to other components. In a nutshell it works like this: DataImplwhich aggregates three subcomponents Enterprise, Persistence, and Storeand gives access to them. public class DataImpl implements DataIf { public EnterpriseQueryIf getEnterpriseQueryIf() { return new EnterpriseQueryImpl(); } public PersistenceIf getPersistenceManager() { return new PersistenceImpl(); } public StoreQueryIf getStoreQueryIf() { return new StoreQueryImpl(); } } public class DataIfFactory { private static DataIf dataaccess = null; private DataIfFactory() {} public static DataIf getInstance() { if (dataaccess == null) { dataaccess = new DataImpl(); } return dataaccess; } } Essentially DataIfFactory resembles a mixture between the design patterns factory and singleton. The latter one is important here. The purpose of a singleton is to make a single instance of a class globally accessible. Here DataImpl is not ensured to be only instantiated once as it still has a public constructor. Nevertheless the “factory” class makes it globally accessible. In every part of the software DataIfFactory.getInstance() can be used to get hold of the data component. And since DataIf makes the three subcomponents accessible, also these are accessible from everywhere. There is no need to pass a reference around. Is this a good solution? We will examine this question using the OOD principle language. First we have to find suitable starting principles. This is one of the rather sophisticated cases where finding a starting principle is at least not completely obvious. If we don't have a clue where to start, we'll have a look at the different categories of principles in the language. Essentially the “factory” enables modules to access and communicate with each other. So we are looking for principles about module communication. There are three of them in the principle language: TdA/IE, LC, and DIP. TdA/IE does not seem to fit, but LC seems to help. Coupling should be low and the mechanism couples modules in a certain way. So we'll choose LC as a starting principle and our characterizing set looks like this: {LC}. Now we'll have a look at the relationships. LC lists KISS, HC, and RoE as contrary, TdA/IE, MP, and IH/E as complementary, and DIP as a specialization. Let's examine them: DataIfFactory.getInstance().getStore().doSomething() So up until now the characterizing set is {LC, KISS, RoE, TdA/IE}. Now let's examine the relationships of the newly added principles. KISS lists GP, ML and MP as contrary principles and MIMC as a specialization. Characterizing set up until now: {LC, KISS, RoE, TdA/IE, ML}. ML was newly added. Maybe on this point we might decide to abort the process because we already have a good idea of the aspects. But for the sake of the example, we'll continue with the relationships of ML. The wiki page lists KISS as a contrary principle and DRY, EUHM, UP, and IAP as specializations. As a result we get {LC, KISS, RoE, TdA/IE, ML} as the characterizing set. Note that although in this example the principles are examined in a certain order, the method does not prescribe any. In order to answer the above question, we have to informally rate the solution based on the principles of the characterizing set: Datacomponent possibly using another way of storing the data. Every change in the arrangement of the classes needs a change in the code. LC is rather against this solution. Storesubcomponent requires asking DataIfFactoryfor the Datacomponent and asking that one for the store. There is no way to tell the “factory” to do something. TdA/IE is against the solution. So LC, RoE and TdA/IE are against the solution, KISS thinks it's good and ML has nothing against it. As it is not the number of principles which is important, the designer still has to make a sound judgment based on these results. What is more important: Coupling, testability, and clarity or a simple and fast implementation. In this case we'd rather decide that the former are more important, so we should rather think about a better solution. In the next step we would think about better alternatives and might come up with dependency injection and service locators. So there are three alternatives (with several variations): The current solution and the two new ideas. We already constructed a characterizing set. So the only thing to do is to rate the ideas according to the principles: The current “factory” approach is abbreviated “F”, dependency injection is DI and SL stands for service locator. In the following a rough, informal rating is described, where “A > B” means that the respective principle rates A higher/better than B. “=” stands for equal ratings.
http://www.principles-wiki.net/about:navigating_principle_languages?rev=1379277599
CC-MAIN-2020-29
refinedweb
983
55.74
Apache HTTP Server Request Library #include "apr_file_io.h" #include "apr_buckets.h" #include "apreq.h" Go to the source code of this file. This header contains useful functions for creating new parsers, hooks or modules. It includes Concatenates the brigades, spooling large brigades into a tempfile (APREQ_SPOOL) bucket. Error status code resulting from either apr_brigade_length(), apreq_file_mktemp(), apreq_brigade_fwrite(), or apr_file_seek(). Copy a brigade. Error status code from an unsuccessful apr_bucket_copy(). Writes brigade to a file. Error status code from either an unsuccessful apr_bucket_read(), or a failed apr_file_writev(). Move the front of a brigade. Set aside all buckets in the brigade. Error status code from an unsuccessful apr_bucket_setaside(). Determines the spool file used by the brigade. Returns NULL if the brigade is not spooled in a file (does not use an APREQ_SPOOL bucket). Heuristically determine the charset of a string. APREQ_CHARSET_UTF8 if the string is a valid utf8 byte sequence; APREQ_CHARSET_LATIN1 if the string has no control chars; APREQ_CHARSET_CP1252 if the string has control chars. Convert a string from cp1252 to utf8. Caller must ensure it is large enough to hold the encoded string and trailing '\0'. Url-decodes a string. APR_INCOMPLETE if the string ends in the middle of an escape sequence. APREQ_ERROR_BADSEQ or APREQ_ERROR_BADCHAR on malformed input. Url-decodes an iovec array. APR_INCOMPLETE if the iovec ends in the middle of an escape sequence. APREQ_ERROR_BADSEQ or APREQ_ERROR_BADCHAR on malformed input. Url-encodes a string. Returns an url-encoded copy of a string. Makes a temporary file. Error status code from unsuccessful apr_filepath_merge(), or a failed apr_file_mktemp(). Search a header string for the value of a particular named attribute. APREQ_ERROR_NOATTR if the attribute is not found. APREQ_ERROR_BADSEQ if an unpaired quote mark was detected. Returns offset of match string's location, or -1 if no match is found. Join an array of values. The result is an empty string if there are no values. Places a quoted copy of src into dest. Embedded quotes are escaped with a backslash ('\'). Same as apreq_quote() except when src begins and ends in quote marks. In that case it assumes src is quoted correctly, and just copies src to dest. An in-situ url-decoder.
http://httpd.apache.org/apreq/docs/libapreq2/apreq__util_8h.html
CC-MAIN-2017-39
refinedweb
359
71.31
On Fri, 11 Jun 2010 15:49:54 -0700Salman <sqazi@google.com> wrote:> A program that repeatedly forks and waits is susceptible to having the> same pid repeated, especially when it competes with another instance of the> same program. This is really bad for bash implementation. Furthermore,> many shell scripts assume that pid numbers will not be used for some length> of time.> > Race Description:>> ...>> diff --git a/kernel/pid.c b/kernel/pid.c> index e9fd8c1..fbbd5f6 100644> --- a/kernel/pid.c> +++ b/kernel/pid.c> @@ -122,6 +122,43 @@ static void free_pidmap(struct upid *upid)> atomic_inc(&map->nr_free);> }> > +/*> + * If we started walking pids at 'base', is 'a' seen before 'b'?> + */> +static int pid_before(int base, int a, int b)> +{> + /*> + * This is the same as saying> + *> + * (a - base + MAXUINT) % MAXUINT < (b - base + MAXUINT) % MAXUINT> + * and that mapping orders 'a' and 'b' with respect to 'base'.> + */> + return (unsigned)(a - base) < (unsigned)(b - base);> +}pid.c uses an exotic mix of `int' and `pid_t' to represent pids. `int'seems to preponderate.> +/*> + * We might be racing with someone else trying to set pid_ns->last_pid.> + * We want the winner to have the "later" value, because if the> + * "earlier" value prevails, then a pid may get reused immediately.> + *> + * Since pids rollover, it is not sufficient to just pick the bigger> + * value. We have to consider where we started counting from.> + *> + * 'base' is the value of pid_ns->last_pid that we observed when> + * we started looking for a pid.> + *> + * 'pid' is the pid that we eventually found.> + */> +static void set_last_pid(struct pid_namespace *pid_ns, int base, int pid)> +{> + int prev;> + int last_write = base;> + do {> + prev = last_write;> + last_write = cmpxchg(&pid_ns->last_pid, prev, pid);> + } while ((prev != last_write) && (pid_before(base, last_write, pid)));> +}<gets distracted>hm. For a long time cmpxchg() wasn't available on all architectures. That _seems_ to have been fixed.arch/score assumes that cmpxchg() operates on unsigned longs.arch/powerpc plays the necessary games to make 4- and 8-byte scalars work.ia64 handles 1, 2, 4 and 8-byte quantities.arm handles 1, 2 and 4-byte scalars.as does blackfin.So from the few architectures I looked at, it seems that we do indeedhandle cmpxchg() on all architectures although not very consistently. arch/score will blow up if someone tries to use cmpxchg() on 1- or2-byte scalars.<looks at the consumers>infiniband deos cmpxchg() on u64*'s, which will blow up on manyarchitectures.Using grep -r '[ ]cmpxchg[^_]' . | grep -v /arch/I can't see any cmpxchg() callers in truly generic code. lockdep andkernel/trace/ring_buffer.c aren't used on the more remotearchitectures, I think.Traditionally, atomic_cmpxchg() was the safe and portable one to use.
http://lkml.org/lkml/2010/6/14/477
CC-MAIN-2013-20
refinedweb
442
68.57
Currently, I am working on writing an open source SIP stack and TAPI interface based on RFC 3261. Although the .NET framework's network classes are surprisingly complete when it comes to familiar formats like HTTP or even Gopher, formats like SIP and MailTo are not supported, forcing me to implement them on my own. In good object oriented style, Microsoft allows you to modify existing formats, or even create your own, so adding support for a given scheme should be trivial, right? Well, Microsoft hasn't gotten around to documenting much of the UriParser classes. After hours of experimentation and some help from Jason Kemp, I have figured out exactly how to extend and register a UriParser, allowing your program to understand URIs from any scheme. This article will show you how to write your own UriParser in an attempt to fill the void of documentation. UriParser UriParser is an abstract class that provides some methods for parsing a URI. Some callbacks are included also: whenever a Uri is created, all registered UriParsers are notified, for example. If the URI that you need to parse closely resembles a scheme that is already supported, it may benefit you to extend that UriParser. For most purposes, extending GenericUriParser is the best choice, because the constructor allows you to choose certain options regarding how things are parsed. Here is a skeleton class that explains the most important methods you may need to override: Uri GenericUriParser public class SipStyleUriParser : GenericUriParser { //You may want to have your constructor do more, but it usually //isn't necesssary. See the MSDN documentation for //GenericUriParserOptions for a full explanation of what it does. //Basically it lets you define escaping rules and the presence of //certain URI fields. public SipStyleUriParser(GenericUriParserOptions options) : base(options) { } protected override void InitializeAndValidate(Uri uri, out UriFormatException parsingError) { //This function is called whenever a new Uri is created //whose scheme matches the one registered to this parser //(more on that later). If the Uri doesn't meet //certain specifications, set parsingError to an appropriate //UriFormatException. } protected override void OnRegister(string schemeName, int defaultPort) { //This event is fired whenever your register a UriParser //(more on that later). The only use I can think of for this //is storing the default port when a UriParser matching the //correct scheme is registered. } public static new bool Equals(Object objA, Object objB) { //Use this method for test for equality between two Uri's. //It will not change Uri.Equals() unfortunately, So whenever //you need to test for equality, use this. RFC 3261 defines //special rules for equality of SIP URI's, for example, //so a simple String.Equals() is not enough. } protected override bool IsWellFormedOriginalString(Uri uri) { //This method is similar to InitializeAndValidate. The //difference is that a valid URI is not necessarily //well-formed. You can use this to enforce certain //formatting rules if you wish. } protected override UriParser OnNewUri() { //This is fired when a new Uri is instantiated, and it //returns the new Uri in case you want to use it for //something. I still haven't found a use for this method. } protected override void OnRegister(string schemeName, int defaultPort) { //Whenever you register a parser with a scheme (I'll //cover this in the next section) this is fired. You //can check if the scheme is one that belongs to your //parser and store the defaultPort just in case a URI //doesn't specify it. } protected override string GetComponents(Uri uri, UriComponents components, UriFormat format) { //This method parses all the parts of a Uri. Uri exposes the //results of this method in a series of properties. You are //passed an enum telling you what parts to retrieve, and you //must parse them from the Uri given. } } The first thing that you might want to do is set up some Regex designed to parse out the different parts of your URI. If you use code snippets to set up a switch statement on the components parameter, you will be given a complete set of all the members of UriComponents. switch components UriComponents protected override string GetComponents(Uri uri, UriComponents components, UriFormat format) { switch (components) { case UriComponents.UserInfo: //Parse out and return user info case UriComponents.Port: //Parse out and return port //etc... } } All you need to do is apply the correct Regex in each case and return the value. Microsoft leaves out a few possibilities though. The first two are UriComponents.Path | UriComponents.KeepDelimiter and UriComponents.Query | UriComponents.KeepDelimiter (you can get rid of the case for UriComponents.KeepDelimiter on its own, it's just an option switch and shouldn't return anything). They return the path or query, respectively, with the leading delimiter intact (surprise). In SIP, you don't have queries or paths, so I made the Path component return the SIP parameters and the Query component return the headers, because the syntax for SIP headers is identical to HTTP queries. Adjustments like this may need to be made for your URI scheme. If you have any doubts, instantiate a new URI with a Google query. Run your program in debug mode, and step through the code to see what components are required when you access each property in Uri. Knowing what flags make up each components case will help you use GetComponents calls to reuse some parsing code. It also gives you a good idea of what you should be returning in each case. UriComponents.Path | UriComponents.KeepDelimiter UriComponents.Query | UriComponents.KeepDelimiter UriComponents.KeepDelimiter GetComponents I mentioned earlier that you need to register your UriParser before you can start instantiating Uris that require it. This associates the scheme string (i.e., "sip", "sips", "http") with a default port. Keep in mind that the scheme string must be present and greater than one character in length, and the port field must either be -1 or an integer exclusively between 0 and 65535. Here is some code to show you the right way to do it, and some ways that will fail: //This registers "sip" to port 5060 using a SipStyleUriParser UriParser.Register(new SipStyleUriParser(), "sip", 5060); //This registers "pres" with no default port using PresStyleUriParser UriParser.Register(new PresStyleUriParser(), "pres", -1); //InvalidOperationException! //The scheme http is already registered. This prevents the //possibility of having a scheme registered with two conflicting //parsers or default ports. UriParser.Register(new CustomHttpStyleUriParser(), "http", 80); //InvalidOperationException! //Even though the schemes are different, you can't have the same //instance parser parse more than one scheme. This makes sense if you //are working in multithreaded environments SipStyleUriParser s = new SipStyleUriParser(); UriParser.Register(s, "sip", 5060); UriParser.Register(s, "sips", 5061); I have included the source for my SipStyleUriParser with this article. It is fully RFC 3261 compliant, and even follows the rules for URI comparison. I have also included an easy way to parse headers and parameters into a Dictionary so that they may easily be checked against each other regardless of order, and so that the values can be retrieved by the parameter name. It successfully completes all the test cases given by the specifications. You are welcome to use it in your own applications, and please let me know if you have any suggestions. SipStyleUriParser Dictionary Despite MSDN's lack of documentation on the subject, writing your own UriParser is not very difficult. As long as you have a complete specification to work with, the implementation becomes fairly straightforward. Using this in combination with extensions of WebRequest and WebResponse will enable you to write a complete network stack! If you have any questions, comments, or suggestions, feel free to email me at augsod@gmail.com. WebRequest Web.
http://www.codeproject.com/Articles/13773/Writing-a-custom-UriParser-for-NET
CC-MAIN-2015-18
refinedweb
1,270
54.32
Looking for information on what's changed? See>. gem "jekyll-assets", group: :jekyll_plugins gem "jekyll-assets", git: "", group: :jekyll_plugins gem "jekyll-assets", "~> x.x.alpha", group: :jekyll_plugins ruby: 2.3+ sprockets: 3.3+ jekyll: 3.5+ If you'd like SourceMaps, or faster Sprockets, opt to use Sprockets 4.0, you can use it by placing it to your Gemfile. gem "sprockets", "~> 4.0.beta", { require: false }. source_maps: true # false on JEKYLL_ENV=production destination: "/assets" compression: true gzip: false defaults: js: { integrity: false } # true on JEKYLL_ENV=production css: { integrity: false } # true on JEKYLL_ENV=production img: { integrity: false } # true on JEKYLL_ENV=production caching: path: ".jekyll-cache/assets" type: file # Possible values: memory, file enabled: true # -- # Assets you wish to always have compiled. # This can also be combined with raw_precompile which # copies assets without running through the pipeline # making them ultra fast. # -- precompile: [] raw_precompile: [ # ] # -- # baseurl: whether or not to append site.baseurl # destination: the folder you store them in on the CDN. # url: the CDN url (fqdn, or w/ identifier). # -- cdn: baseurl: false destination: false url: null # -- # These are all default. No need to add them # Only use this if you have more. # -- sources: - assets/css - assets/fonts - assets/images - assets/videos - assets/javascript - assets/video - assets/image - assets/img - assets/js - _assets/css - _assets/fonts - _assets/images - _assets/videos - _assets/javascript - _assets/video - _assets/image - _assets/img - _assets/js - css - fonts - images - videos - javascript - video - image - img - js plugins: css: { autoprefixer: {}} img: { optim: {}} {% asset %}, <img> {% asset src @magick:2x alt='This is my alt' %} {% asset src @magick:2x alt='This is my alt' %} <img src="src" asset="@magick:2x" alt="This is my alt"> <img src="src" alt="This is my alt" asset> We provide several defaults that get set when you run an asset, depending on content type, this could be anything from type, all the way to integrity. If there is a default attribute you do not wish to be included, you can disable the attribute with !attribute, and it will be skipped over. {% asset img.png !integrity %} {% asset bundle.css !type %} Our tags will take any number of arguments, and convert them to HTML, and even attach them to your output if the HTML processor you use accepts that kind of data. This applies to anything but hashes, and arrays. So adding say, a class, or id, is as easy as doing id="val" inside of your tag arguments. Jekyll Assets uses @envygeeks liquid-tag-parser which supports advanced arguments (hash based arguments) as well as array based arguments. When you see something like k1:sk1=val it will get converted to k1 = { sk1: "val" } in Ruby. To find out more about how we process tags you should visit the documentation for liquid-tag-parser Jekyll Assets has the concept of responsive images, using the picture (when using @pic w/ srcset) and the <img> tag when using srcset. If you ship multiple srcset with your image, we will proxy, build and then ship out a picture/img tag with any number of source/srcset, and in the case of picture, with the original image being the image. <picture>usage, requires @pic {% asset img.png @pic srcset:max-width="800 2x" srcset:max-width="600 1.5x" srcset:max-width="400 1x" %} <picture> <source media="(max-width:800px)"> <source media="(max-width:600px)"> <source media="(max-width:400px)"> <img src="img.png"> </picture> <img>usage {% asset img.png srcset:width="400 2x" srcset:width="600 1.5x" srcset:width="800 1x" %} {% asset img.png srcset:width=400 srcset:width=600 srcset:width=800 %} <img > <img > If you set media, w/ max-width, min-width, we will not ship media, we will simply resize and assume you know what you're doing. Our parser is not complex, and does not make a whole lot of assumptions on your behalf, it's simple and only meant to make your life easier. In the future we may make it more advanced. We support liquid arguments for tag values (but not tag keys), and we also support Liquid pre-processing (with your Jekyll context) of most files if they end with .liquid. This will also give you access to our filters as well as their filters, and Jekyll's filters, and any tags that are globally available. {% asset '{{ site.bg_img }}' %} {% asset '{{ site.bg_img }}' proxy:key='{{ value }}' %} {% asset {{\ site.bg_img\ }} %} .sass, .scss body { background-image: asset_url("'{{ site.bg_img }}'"); background-image: asset_url("'{{ site.bg_img }}' proxy:key='{{ value }}'"); background-image: asset_url("{{\ site.bg_img\ }}"); } .liquid.ext .ext.liquid .bg { background: url(asset_path("{{ site.background_image }}")); } You have full access to your entire global context from any liquid processing we do. Depending on where you do it, you might or might not also have access to your local (page) context as well. You can also do whatever you like, and be as dynamic as you like, including full loops, and conditional Liquid, since we pre-process your text files. On Sprockets 4.x you can use .liquid.ext and .ext.liquid, but because of the way Sprockets 3.x works, we have opted to only allow the default extension of .ext.liquid when running on Old Sprockets (AKA 3.x.) If you would like Syntax + Liquid you should opt to install Sprockets 4.x so you can get the more advanced features. In order to import your Liquid pre-processed assets inside of Liquid or JS you should use a Sprockets //require=, Sprockets does not integrate that deeply into JavaScript and SASS to allow you to @import and pre-process. .sass, .scssHelpers We provide two base helpers, asset_path to return the path of an asset, and asset_url which will wrap asset_path into a url() for you, making it easy for you to extract your assets and their paths inside of SCSS. All other helpers that Sprockets themselves provide will use our asset_path helper, so you can use them like normal, including with Liquid body { background-image: asset_url("img.png"); } Any argument that is supported by our regular tags, is also supported by our .sass/ .scss helpers, with a few obvious exceptions (like srcset). This means that you can wrap your assets into magick if you wish, or imageoptim or any other proxy that is able to spit out a path for you to use. The general rule is, that if it returns a path, or @data then it's safe to use within .scss/ .sass, otherwise it will probably throw. body { background-image: asset_url("img.png @magick:half") } Note: we do not validate your arguments, so if you send a conflicting argument that results in invalid CSS, you are responsible for that, in that if you ship us srcset we might or might not throw, depending on how the threads are ran. So it might ship HTML if you do it wrong, and it will break your CSS, this is by design so that if possible, in the future, we can allow more flexibility, or so that plugins can change based on arguments. We provide all your assets as a hash of Liquid Drops so you can get basic info that we wish you to have access to without having to prepare the class. Note: The keys in the assets array are the names of the original files, e.g., use *.scss instead of *.css. {{ assets["bundle.css"].content_type }} => "text/css" {{ assets["images.jpg"].width }} => 62 {{ assets["images.jpg"].height }} => 62 The current list of available accessors: {% for k,v in assets %} {{ k }} {% endfor %} Using Liquid Drop assets, you can check whether an asset is present. {% if assets[page.image] %}{% img '{{ page.image }}' %} {% else %} {% img default.jpg %} {% endif %} {{ src | asset:"@magick:2x magick:quality:92" }} We have basic support for WebComponents when using Sprockets ~> 4.0.0.beta, this will allow you to place your HTML in the _assets/components folder, {% asset myComponent.html %}, and get an import, you can place your regular JS files inside of the normal structure. test.html <!DOCTYPE html> <html> <head> {% asset webcomponents.js %} {% asset test.html %} </head> <body> <contact-card starred> {% asset profile.jpg %} <span>Your Name</span> </contact-card> </body> </body> _assets/components/test.html > Jekyll::Assets::Hook.register :env, :before_init do append_path "myPluginsCustomPath" end Jekyll::Assets::Hook.register :config, :init do |c| c.deep_merge!({ plugins: { my_plugin: { opt: true } } }) end Your plugin can also register it's own hooks on our Hook system, so that you can trigger hooks around your stuff as well, this is useful for extensive plugins that want more power. Jekyll::Assets::Hook.add_point(:plugin, :hook) Jekyll::Assets::Hook.trigger(:plugin, :hook) { |v| v.call(:arg) } Jekyll::Assets::Hook.trigger(:plugin, :hook) do |v| instance_eval(&v) end gem "crass" Once crass is added, we will detect vendor prefixes, and add /* @alternate */ to them, with or without compression enabled, and with protections against compression stripping. gem "font-awesome-sass" @import "font-awesome-sprockets"; @import "font-awesome"; html { // ... } gem "autoprefixer-rails" assets: autoprefixer: browsers: - "last 2 versions" - "IE > 9" gem "boostrap-sass" # 3.x gem "bootstrap" # 4.x @import 'bootstrap' html { // ... } //=require _bootstrap.css //=require bootstrap/_reboot.css gem "mini_magick" See the MiniMagick docs to get an idea what <value> can be. * magick:format requires an ext or a valid MIME content type like image/jpeg or .jpg. We will ImageMagick -format on your behalf with that information by getting the extension. gem "image_optim" Check the ImageOptim to get idea about configuration options. assets: plugins: img: optim: default: verbose: true zero_png: advpng: level: 0 optipng: level: 0 pngout: strategy: 4 *Where preset is the name of the preset. Before cdn: After cdn: url: Before {% css css.css %} {% img image.jpg width:60 class:image %} {% js js.js %} After {% asset css.css %} {% asset image.jpg width=60 class=image %} {% asset js.js %} Before <link rel="apple-touch-icon-precomposed" href="{% asset_path icon.png %}"> <link rel="apple-touch-icon-precomposed" href="{% asset_data icon.png %}"> After <link rel="apple-touch-icon-precomposed" href="{% asset icon.png @path %}"> <link rel="apple-touch-icon-precomposed" href="{% asset icon.png @data %}">
https://recordnotfound.com/jekyll-assets-jekyll-21677
CC-MAIN-2020-29
refinedweb
1,665
57.98
In this post we will see how we can use Objects Literals when doing JavaScript OOP. We have seen how we can use function constructors to deal with objects, but JavaScript is an unbelievable flexible language, which offers different ways to solve the same problem. Object Literals Objects literals are probably the simplest way to create an object. An Object Literal is a comma separated list of name/value, wrapped in curly brackets like this: var person = { name: "John", age:35, greet: function () { console.log("Hello!"); } } Singletons with JavaScript Object Literals Notice that differently from when we use function constructors, we are defining and at the same time instantiating the object. We would use like this: person.greet(); Output: >> Hello! We have also effectively created a singleton. If we try to create a different instance of the same object like this, we will get unexpected results: var person = { name: "John", age:35, greet: function () { console.log("Hello!"); } } var child = person; child.name ='Bob'; console.log(child); console.log(person); Output: Object { name: "Bob", age: 35, greet: person.greet() } Object { name: "Bob", age: 35, greet: person.greet() } Oops, when we modified the attribute name of child, we modified also the same attribute in person. This happens because when we assigned child, we did not create a new object, but just a reference pointing to the person object. We could circumvent that by just cloning the object person: that is creating a copy of the object, and assigning it to our child variable. Notice, though, that if we were to do that, every time we cloned a new object, that object would have a copy of the functions, which is is not efficient from a memory usage point. If we are planning to create multiple instances of a same object, we could do it using function constructors, and making use of the prototype property, or as of ES6 using JavaScript classes. Practical uses of Objects Literals, and Singleton in JavaScript In programming we use singletons whenever we want to have a unique instance of an object, that many times (although not always) we want to access globally. An interesting use of Singleton in JavaScript is to simulate the use of namespaces: differently from Java, or C# JavaScript does not come with namespaces, or packages out of the box. Take for example this C# library: namespace UtilsLibrary { class LoggerUtils { public static void printMessage(string message){ //code here } public static void logMessage(string message){ //code here } } } In plain JavaScript it would look like this: function printMessage(message){ //code here } function logMessage(message){ //code here } Functional, but there are a couple of ugly things in this solution: - We are polluting the global space - Our functions may be overwritten by different library Indeed, if we were to add a third party JavaScript library that casually has a logMessage function, we would end up inadvertently overwriting our own function. In the long term that kind of programming results in code difficult to maintain, and to debug. Lets see how we can solve this problem applying what we have learned about Singletons, and Object Literals: // Our utils library var UtilsLibrary = { printMessage: function (message) { console.log("Message: "+message); }, logMessage: function (message) { console.log("Log: "+message); } } // Using our library UtilsLibrary.printMessage("A new message"); UtilsLibrary.logMessage("A new log"); This way we have encapsulated our code in a package like UtilsLibrary object. Besides being more organized, now we can have multiple logMessage functions, in our application, as long as they are in different namespaces. Thanks for reading. If you found this post useful, you can subscribe to my blog (click at the follow button at the bottom) so you can be notified of new posts.
https://developerslogblog.wordpress.com/2017/09/24/javascript-singleton-with-object-literals/
CC-MAIN-2017-47
refinedweb
615
60.04
Description: Threaded Discussion Forum that utilizes the .NET framework, with C# as the ASP.NET server side language. Uses MS Access Database for data. Thanks for your interest in the Discussion Forum. This provides a very simple Threaded Discussion Forum that allows the user to Add topics, then Post replies to that Topic. It is built on the .NET framework and uses C# as the ASP.NET server side language. Data is stored in a MS Access Database. Requirements: Installation Instructions: Use Cases: File Descriptions: C# Discussion Forum from Harrison Logic Introduction to Location Object in JavaScript using Excel = Microsoft.Office.Interop.Excel; Hi, i am getting this error " interop doesnot exist in namespace mocrosoft.office" can you plaese advice me... what office PIA i need to install for .net 3.5 framework if you can provide me some link... that would be good.... i have 2003 office installed in my system ya its ok its good Hi I want to allow users to select the location to save a file on button click from my C# web application... Its very urgent... please help me out... Thanks in advance for any help
http://www.c-sharpcorner.com/uploadfile/charrison/harrisonlogicdiscussionforums11232005054859am/harrisonlogicdiscussionforums.aspx
crawl-003
refinedweb
192
70.5
im_costra, im_sintra, im_tantra, im_acostra, im_asintra, im_atantra - basic trig functions #include <vips/vips.h> int im_costra(in, out) IMAGE *in, *out; int im_sintra(in, out) IMAGE *in, *out; int im_tantra(in, out) IMAGE *in, *out; int im_acostra(in, out) IMAGE *in, *out; int im_asintra(in, out) IMAGE *in, *out; int im_atantra(in, out) IMAGE *in, *out; Basic trig functions. The input image is mapped through either cos(3), sin(3) or tan(3) and written to out. All work in degrees. The size and number of bands are unchanged, the output type is float, unless the input is double, in which case the output is double. Non- complex images only! Each function returns 0 on success and -1 on error. im_add(3), im_lintra(3), im_abs(3), im_mean(3), im_logtra(3) National Gallery, 1995. J. Cupitt - 21/7/93 24 April 1991
http://huge-man-linux.net/man3/im_costra.html
CC-MAIN-2017-13
refinedweb
140
66.13
In this example, we will use feedback around a NOT gate which has a propogation delay. This results in a pulse at the output of the NOT gate with time period twice the propogation delay. #include <lcs/not.h> #include <lcs/simul.h> #include <lcs/changeMonitor.h> using namespace lcs; int main() { Bus<> b(0); Not<5> notGate(b, b); ChangeMonitor<> s1m(b, "Output", DUMP_ON); Simulation::setStopTime(50); Simulation::start(); return 0; } The following is the output when the above program is compiled and run. At time: 5, Output: 1 At time: 10, Output: 0 At time: 15, Output: 1 At time: 20, Output: 0 At time: 25, Output: 1 At time: 30, Output: 0 At time: 35, Output: 1 At time: 40, Output: 0 At time: 45, Output: 1 At time: 50, Output: 0 Below is the screenshot of the gtkwave plot of the generated VCD file.
http://liblcs.sourceforge.net/not_feed_back_example.html
CC-MAIN-2017-22
refinedweb
148
57.1
C Memory Management Techniques Disclaimer: C++ memory management has been addressed in a rather unique and interesting manner. This tutorial is not meant to be a drop-in replacement for the available tutorial (Defeating Mr. Memory Leak by KYA). Nor is this a replacement for the wonderful heap memory management tutorial by Martyn.Rae. What is this tutorial then? This tutorial is a C oriented tutorial-- not a C++ oriented tutorial. Furthermore, this tutorial is not Microsoft oriented. This tutorial attempts to stay operating system independent as much as possible. This tutorial will walk through our standard memory tools as well as provide a few tips to keep in mind while handling memory. In addition this tutorial will provide a list of useful programs to help detect leaks and handle memory. Throughout the tutorial there will be a few side bars. A side bar is extra information you may find useful or you may find it boring and off topic. In either case you can skip a side bar if you wish to strictly stay on topic. Our Objective Our objective is to prevent memory leaks. Preventing memory leaks is very important in C and C++. Other memory issues include bounds checking. I will not dive into bounds checking because I don't personally find it to be a memory management problem. I find bounds checking to be a careless programmer issue. However, I do suggest learning about bounds checking as it is very important. Again-- the focus of this tutorial is memory management, in particular, the prevention and detection of memory leaks. What are our weapons? When dealing with memory in C we generally have two weapons. The first weapon is a function to allocate memory. C provides a few of these functions such as malloc, calloc, and realloc. Our second weapon is free. Free is used to release the memory and give it back to the operating system. What happens after the operating system gets it is not relevent to us in most cases. C and C++ differ in terms of their memory weapons. C++ provides the new operation as well as C's allocation functions. Furthermore, C++ provides an additional version of free called delete. However, these are not of concern for us now but they are worth noting. Side Bar: When is it important to understand how the operating system handles freed memory? Most naive programmers assume the operating system completely erases memory. That is typically not the case. Most implementations do no bother with erasing memory. Instead, the operating system simply says, "This memory is writeable if we need it.. until then let's just leave it as it was." This can be important to understand. When can it be important? Well, one example is when you store important data in memory. Let's say we stored a password in memory and later we freed the memory that held the password. A naive programmer would assume the in-memory password is gone. An enlightened programmer would assume the operating system left the password in memory. The enlightened programmer would then step back and think this is bad. Why? Well anyone with a memory scanner or such utility could easily scanner around the memory looking for the password(s). A good idea would be to overwrite the password with bogus data before freeing it. C provides a number of handy functions for overwriting memory segments or zeroing them out-- such as memset and bzero. Do we have any more weapons?! You may assume your compiler is your friend, but when it comes to memory management most compilers don't concern themselves. The C language isn't very friendly either. The language assumes the programmer is right-- even if you do something incredibly odd, the language will assume you know what you are doing. In spite of this there are more weapons. We have additional software we can use to help us out! However, these tools will identify memory errors. Our main ambition should be to avoid memory errors by coding carefully. Even though that is our ambition it is still very important to verify our programs and these softwares can help. Valgrind (Linux) Insure++ (Linux and Windows) Purify (Linux and Windows) Many IDEs incorperate memory management tools. For instance Eclipse has a Valgrind plugin. A more interesting approach will be when IDEs have fully integrated memory management support. An interesting new C IDE which has such features is TenaciousC. Microsoft has a number of memory management tools in the VS IDE but as I stated before I am not trying to re-write the wonderful Microsoft heap guide! The tools should be used in addition to good coding techniques. As we have stated before-- tools can only detect errors but they can't detect all errors. Our first tool is to use good techniques so we don't cause errors to begin with. We can sum up good coding techniques with a few rules to follow. The rules to C memory management. Rule 1: When you allocate memory you need to eventually deallocate it! If you have 10 calls to malloc in a program you should have 10 calls to free in the same program. We can simplify this to, if you have X mallocs you need X frees or else you have a memory leak. This means you need to deallocate (free) memory even if the program is going to end. For example: #include <stdlib.h> int main(void) { int *memoryChunk = malloc(sizeof(int) * 100); /* we should free(memoryChunk); before returning */ return 0; } This program contains a memory leak. It's pretty simple right?! We allocated memory so we should have a free to match our call to malloc. If memory management was always so obvious we wouldn't have to worry about it! Here's a slightly more complicated and more likely case to go forgotten. ; } int main(void) { MyObject *myobject = newMyObject(); free(myobject); /* hah! we remembered to free it! right? No. */ return 0; } In this case we called malloc 2 times but called free only once. We clearly violated rule 1! That means a memory leak exists. Where? Well, we remembered to free myobject in main, but we didn't free the other memory chunk that is inside of myobject. Here's a simple way to fix this error: ; } void freeMyObject(MyObject *myobject){ free(myobject->memoryChunk); free(myobject); /* 2 calls to free */ } int main(void) { MyObject *myobject = newMyObject(); freeMyObject(myobject); return 0; } We fixed it by making sure we called free on the inner allocated memory before we call free on the main allocated memory. This brings us to rule 2. Rule 2: When you have a nested structure or complex mix of allocations it may be a good idea to create a wrapper around free to ensure all allocations are freed. This can be seen in the example above. Another example is within linked lists or other complex structures. In a linked list you must preserve references before you delete and node and free memory. If you carelessly change references without freeing the old nodes you will create a memory leak. In cases like this and cases like the above example-- it may be a great idea to wrap up these complex deallocations in a function to ensure they are done safe and consistently. I won't go into much more detail about data structures-- they deserve tutorials on their own and there are a few great ones available. Rule 3: Watch out for functions that allocate memory but don't deallocate it! This is in addition to rule 1. Actually, this is rule 1 in a ninja suit. There are a number of functions which allocate memory and don't let you know! An example is within the C standard library. #include <stdlib.h> #include <string.h> int main(void){ char *oldString = "I'm old!!!!"; char newStrig = strdup(oldString); /* we need to call free(newString); */ return 0; } In this case we can't see that memory has been allocated. It has been allocated behind our backs! When in doubt you should read man pages or other information regarding the standard library function. It's important to understand if the function is allocating memory behind your back. If it is-- we need to free it in order to not violate rule 1. Rule 4: If you doubt your counting ability, make a count variable to keep track of how many objects/allocations have been made. This is a debugging technique which may be helpful. Instead of trying to count how many calls to free you made in a large program-- you can toss in a variable that keeps track of calls to malloc. Then while you use a debugger you can keep track of how many calls are made to allocate memory at any given point in the program. Here's an overly simple example. #include <stdio.h> #include <stdlib.h> #define DEBUG #ifdef DEBUG int MyObjectAllocations = 0; #endif typedef struct MyObject MyObject; struct MyObject{ int *memoryChunk; }; MyObject* newMyObject(void){ MyObject *myobject = malloc(sizeof(MyObject)); myobject->memoryChunk = malloc(sizeof(int) * 10); /* 2 calls to malloc */ #ifdef DEBUG MyObjectAllocations += 2; #endif return myobject; } void freeMyObject(MyObject *myobject){ free(myobject->memoryChunk); free(myobject); /* 2 calls to free */ #ifdef DEBUG MyObjectAllocations -= 2; #endif } int main(void) { MyObject *myobject1 = newMyObject(); MyObject *myobject2 = newMyObject(); #ifdef DEBUG printf("\nMyObjectAllocations here... %d\n", MyObjectAllocations); #endif freeMyObject(myobject1); #ifdef DEBUG printf("\nMyObjectAllocations here... %d\n", MyObjectAllocations); #endif freeMyObject(myobject2); #ifdef DEBUG printf("\nMyObjectAllocations here... %d\n", MyObjectAllocations); #endif return 0; } It isn't the most attractive strategy, but it is something you can do all on your own without extra software. If you want a more attractive debugging method you can use better preprocessor directives and macros to make it much better. I tried to keep it simple so the tutorial example is understandable by everyone. The key point to take away from it is-- you can use coding techniques to help keep track of allocations. Side Bar: The preprocessor is amazing and enables you to make some impressive debugging statements that don't inject operations and size into your program's finished version. I suggest you look at a few resources and see if you can build a small debugging library of your own. The C Preprocessor The C preprocessor tutorial Wikipedia (has a nice section on debug print statements) Conclusion We can conclude memory management with a recap of our techniques. The golden technique when dealing with memory is to count your allocations and ensure you have an equal count of deallocations. The silver rule is to watch out for allocations done behind your back. The bronze medal goes to debugging tools and techniques that can help catch problems after they have occured. We have the techniques-- use em! Exercises I like to end a guide or tutorial with code samples or problems to solve. If I did a good job explaining memory management then the following exercises should be easy to solve. If you have questions concerning them then please ask-- it probably means I didn't explain something clearly enough. Which of the following examples have memory leaks? If a leak exists where is it? Exercise 1: #include <stdlib.h> typedef struct Node Node; struct Node{ Node *next; Node *previous; }; int main(void){ Node *node = malloc(sizeof(Node)); free(node); return 0; } Exercise 2: #include <stdlib.h> typedef struct Container Container; struct Container{ void *myBox; }; int main(void){ Container *container = malloc(sizeof(Container)); container->myBox = malloc(20); free(container->myBox); return 0; } Exercise 3: #include <stdlib.h> typedef struct Station Station; struct Station{ int *seats; Station *previousStation; }; Station* newStation(int size, Station* previousStation){ Station *station = malloc(sizeof(Station)); station->seats = malloc(sizeof(int) * size); station->previousStation = previousStation; return station; } void freeStation(Station* station){ free(station->seats); free(station); } int main(void){ Station *stn1 = newStation(10, NULL); Station *stn2 = newStation(20, stn1); freeStation(stn1); return 0; }
https://www.dreamincode.net/forums/topic/238977-c-memory-management-techniques/
CC-MAIN-2019-51
refinedweb
1,989
56.96
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. Amazon Elastic Block Store (Amazon EBS) sends data points to CloudWatch for several metrics. Most EBS volumes automatically send five-minute metrics to CloudWatch only when the volume is attached to an instance. This aws ebs metricset collects these Cloudwatch metrics for monitoring purposes. AWS Permissions The aws ebs metricset comes with a predefined dashboard. For example: Configuration example - module: aws period: 300s metricsets: - ebs # This module uses the aws cloudwatch metricset, all # the options for this metricset are also available here. Metrics Please see more details for each metric in ebs": { "VolumeId": "vol-6ae467c0" }, "namespace": "AWS/EBS" }, "metrics": { "BurstBalance": { "avg": 100 }, "VolumeIdleTime": { "sum": 299.97 }, "VolumeQueueLength": { "avg": 0.0001 }, "VolumeReadOps": { "avg": 0 }, "VolumeTotalWriteTime": { "sum": 0.03 }, "VolumeWriteBytes": { "avg": 10330.79802955665 }, "VolumeWriteOps": { "avg": 203 } } }, "cloud": { "account": { "id": "627959692251", "name": "elastic-test" }, "provider": "aws", "region": "us-east-1" }, "event": { "dataset": "aws.ebs", "duration": 115000, "module": "aws" }, "metricset": { "name": "ebs", "period": 10000 }, "service": { "type": "aws" } }
https://www.elastic.co/guide/en/beats/metricbeat/7.5/metricbeat-metricset-aws-ebs.html
CC-MAIN-2020-40
refinedweb
196
58.18
The ones who are crazy enough to think they can change the world are the ones who do.- Steve Jobs A Pointer is a variable that holds the address of another variable. In C, many complex tasks can be done only with pointers. data_type *variablename; int *iptr; //Integer Pointer variable long int *iptr; //long int Pointer variable char *iptr; //char Pointer variable int i = 5; int *iptr; //Declaring pointer variable iptr = &i ; //Initializing address of variable i to pointer variable iptr The above segment illustrates that, the pointer variable is initialized by an address of integer variable i. Dereferencing is an operation performed to access the value at particular address pointed to by a pointer. The operator * is used to returns the value stored at particular address. The operator * is known as value at address operator or dereferencing operator or indirection operator. #include <stdio.h> #include .<string.h> int main() { int i = 5; int *iptr; iptr = &i; printf("Value of i = %d ", i); printf("\nAddress of i = %u", &i); printf("\nValue of iptr = %d ", *iptr); printf("\nAddress of iptr = %u", &iptr); return 0; } Asterisks( * ) and address( & ) of operators are similar to positive and negative in mathematical point of view, which means appending * and & simultaneously to the pointer variable will cancelled by itself and just returns the address stored in it. i.e) *( &iptr ) == iptr #include <stdio.h> #include .<string.h> int main() { int i = 5; int *iptr; iptr = &i; printf("\nAddress of i = %u", &i); printf("\nAddress of iptr = %u", &iptr); printf("\nAddress of i = %u", *&iptr); return 0; } In the above program, third printf make sense. Simply asterisks( * ) and address( & ) of operators cancelled each other. We may make mistakes(spelling, program bug, typing mistake and etc.), So we have this container to collect mistakes. We highly respect your findings. We to update you
https://www.2braces.com/c-programming/c-pointers
CC-MAIN-2017-51
refinedweb
304
51.78
- NAME - SYNOPSIS - DESCRIPTION - USAGE - The Input Tree of Contents - Predicate Values - The Node Description Class - SEE ALSO - AUTHORS - THANKS NAME HTML::Widgets::NavMenu - A Perl Module for Generating HTML Navigation Menus SYNOPSIS use HTML::Widgets::NavMenu; my $nav_menu = HTML::Widgets::NavMenu->new( 'path_info' => "/me/", 'current_host' => "default", 'hosts' => { 'default' => { 'base_url' => "" }, }, 'tree_contents' => { 'host' => "default", 'text' => "Top 1", 'title' => "T1 Title", 'expand_re' => "", 'subs' => [ { 'text' => "Home", 'url' => "", }, { 'text' => "About Me", 'title' => "About Myself", 'url' => "me/", }, ], }, ); my $results = $nav_menu->render(); my $nav_menu_html = join("\n", @{$results->{'html'}}); DESCRIPTION This module generates a navigation menu for a site. It can also generate a complete site map, a path of leading components, and also keeps track of navigation links ("Next", "Prev", "Up", etc.) You can start from the example above and see more examples in the tests, and complete working sites in the Subversion repositories at and. USAGE my $nav_menu = HTML::Widgets::NavMenu->new(@args) To use this module call the constructor with the following named arguments: - hosts This should be a hash reference that maps host-IDs to another hash reference that contains information about the hosts. An HTML::Widgets::NavMenu navigation menu can spread across pages in several hosts, which will link from one to another using relative URLs if possible and fully-qualified (i.e: http://) URLs if not. Currently the only key required in the hash is the base_urlone that points to a string containing the absolute URL to the sub-site. The base URL may have trailing components if it does not reside on the domain's root directory. An optional key that is required only if you wish to use the "site_abs" url_type (see below), is trailing_url_base, which denotes the component of the site that appears after the hostname. For is /~myuser/. Here's an example for a minimal hosts value: 'hosts' => { 'default' => { 'base_url' => "", 'trailing_url_base' => "/", }, }, And here's a two-hosts value from my personal site, which is spread across two sites: 'hosts' => { 't2' => { 'base_url' => "", 'trailing_url_base' => "/", }, 'vipe' => { 'base_url' => "", 'trailing_url_base' => "/~shlomif/", }, }, - current_host This parameter indicate which host-ID of the hosts in hostsis the one that the page for which the navigation menu should be generated is. This is important so cross-site and inner-site URLs will be handled correctly. - path_info This is the path relative to the host's base_urlof the currently displayed page. The path should start with a "/"-character, or otherwise a re-direction excpetion will be thrown (this is done to aid in using this module from within CGI scripts). - tree_contents This item gives the complete tree for the navigation menu. It is a nested Perl data structure, whose syntax is fully explained in the section "The Input Tree of Contents". - ul_classes This is an optional parameter whose value is a reference to an array that indicates the values of the class="" arguments for the <ul>tags whose depthes are the indexes of the array. For example, assigning: 'ul_classes' => [ "FirstClass", "second myclass", "3C" ], Will assign "FirstClass" as the class of the top-most ULs, "second myclass" as the classes of the ULs inner to it, and "3C" as the class of the ULs inner to the latter ULs. If classes are undef, the UL tag will not contain a class parameter. - no_leading_dot When this parameter is set to 1, the object will try to generate URLs that do not start with "./" when possible. That way, the generated markup will be a little more compact. This option is not enabled by default for backwards compatibility, but is highly recommended. A complete invocation of an HTML::Widgets::NavMenu constructor can be found in the SYNOPSIS above. After you _init an instance of the navigation menu object, you need to get the results using the render function. $results = $nav_menu->render() render() should be called after a navigation menu object is constructed to prepare the results and return them. It returns a hash reference with the following keys: - 'html' This key points to a reference to an array that contains the tags for the HTML. One can join these tags to get the full HTML. It is possible to delimit them with newlines, if one wishes the markup to be easier to read. - 'leading_path' This is a reference to an array of node description objects. These indicate the intermediate pages in the site that lead from the front page to the current page. The methods supported by the class of these objects is described below under "The Node Description Component Class". This points to a hash reference whose keys are link IDs for the Firefox "Site Navigation Toolbar" ( ) and compatible programs, and its values are Node Description objects. (see "The Node Description Class" below). Here's a sample code that renders the links as <link rel=...>into the page header: my $nav_links = $results->{'nav_links_obj'}; # Sort the keys so their order will be preserved my @keys = (sort { $a cmp $b } keys(%$nav_links)); foreach my $key (@keys) { my $value = $nav_links->{$key}; my $url = CGI::escapeHTML($value->direct_url()); my $title = CGI::escapeHTML($value->title()); print {$fh} "<link rel=\"$key\" href=\"$url\" title=\"$title\" />\n"; } This points to a hash reference whose keys are link IDs compatible with the Firefox Site Navigation ( ) and its values are the URLs to these links. This key/value pair is provided for backwards compatibility with older versions of HTML::Widgets::NavMenu. In new code, one is recommended to use 'nav_links_obj'instead. This sample code renders the links as <link rel=...>into the page header: my $nav_links = $results->{'nav_links'}; # Sort the keys so their order will be preserved my @keys = (sort { $a cmp $b } keys(%$nav_links)); foreach my $key (@keys) { my $url = $nav_links->{$key}; print {$fh} "<link rel=\"$key\" href=\"" . CGI::escapeHTML($url) . "\" />\n"; } $results = $nav_menu->render_jquery_treeview() Renders a fully expanded tree suitable for input to JQuery's treeview plugin: - otherwise the same as render() . $text = $nav_menu->gen_site_map() This function can be called to generate a site map based on the tree of contents. It returns a reference to an array containing the tags of the site map. $url = $nav_menu->get_cross_host_rel_url_ref({...}) This function can be called to calculate a URL to a different part of the site. It accepts four named arguments, passed as a hash-ref: - 'host' This is the host ID - 'host_url' This is URL within the host. - 'url_type' 'rel', 'full_abs'or 'site_abs'. - 'url_is_abs' A flag that indicates if 'host_url'is already absolute. $url = $nav_menu->get_cross_host_rel_url(...) This is like get_cross_host_rel_url_ref() except that the arguments are clobbered into the arguments list. It is kept here for compatibility sake. The Input Tree of Contents The input tree is a nested Perl data structure that represnets the tree of the site. Each node is respresented as a Perl hash reference, with its sub-nodes contained in an array reference of its 'subs' value. A non-existent 'subs' means that the node is a leaf and has no sub-nodes. The top-most node is mostly a dummy node, that just serves as the father of all other nodes. Following is a listing of the possible values inside a node hash and what their respective values mean. - 'host' This is the host-ID of the host as found in the 'hosts'key to the navigation menu object constructor. It implicitly propagates downwards in the tree. (i.e: all nodes of the sub-tree spanning from the node will implicitly have it as their value by default.) Generally, a host must always be specified and so the first node should specify it. - 'url' This contains the URL of the node within the host. The URL should not contain a leading slash. This value does not propagate further. The URL should be specified for every nodes except separators and the such. - 'text' This is the text that will be presented to the user as the text of the link inside the navigation bar. E.g.: if 'text'is "Hi There", then the link will look something like this: <a href="my-url/">Hi There</a> Or <b>Hi There</b> if it's the current page. Not that this text is rendered into HTML as is, and so should be escaped to prevent HTML-injection attacks. - 'title' This is the text of the link tag's title attribute. It is also not processed and so the user of the module should make sure it is escaped if needed, to prevent HTML-injection attacks. It is optional, and if not specified, no title will be presented. - 'subs' This item, if specified, should point to an array reference containing the sub-nodes of this item, in order. - 'separator' This key if specified and true indicate that the item is a separator, which should just leave a blank line in the HTML. It is best to accompany it with 'skip'(see below). If 'separator'is specified, it is usually meaningless to specify all other node keys except 'skip'. - 'skip' This key if true, indicates that the node should be skipped when traversing site using the Mozilla navigation links. Instead the navigation will move to the next or previous nodes. - 'hide' This key if true indicates that the item should be part of the site's flow and site map, but not displayed in the navigation menu. - 'role' This indicates a role of an item. It is similar to a CSS class, or to DocBook's "role" attribute, only induces different HTML markup. The vanilla HTML::Widgets::NavMenu does not distinguish between any roles, but see HTML::Widgets::NavMenu::HeaderRole. - 'expand' This specifies a predicate (a Perl value that is evaluated to a boolean value, see "Predicate Values" below.) to be matched against the path and current host to determine if the navigation menu should be expanded at this node. If it does, all of the nodes up to it will expand as well. - 'show_always' This value if true, indicates that the node and all nodes below it (until 'show_always' is explicitly set to false) must be always displayed. Its function is similar to 'expand_re'but its propagation semantics the opposite. - 'url_type' This specifies the URL type to use to render this item. It can be: 1. "rel"- the default. This means a fully relative URL (if possible), like "../../me/about.html". 2. "site_abs"- this uses a URL absolute to the site, using a slash at the beginning. Like "/~shlomif/me/about.html". For this to work the current host needs to have a 'trailing_url_base'value set. 3. "full_abs"- this uses a fully qualified URL (e.g: with the beginning, even if both the current path and the pointed path belong to the same host. Something like. - 'rec_url_type' This is similar to 'url_type'only it recurses, to the sub-tree of the node. If both 'url_type'and 'rec_url_type'are specified for a node, then the value of 'url_type'will hold. - 'url_is_abs' This flag, if true, indicates that the URL specified by the 'url'key is an absolute URL like should not be treated as a path within the site. All links to the page associated with this node will contain the URL verbatim. Note that using absolute URLs as part of the site flow is discouraged because once they are accessed, the navigation within the primary site is lost. A better idea would be to create a separate page within the site, that will link to the external URL. - li_id This is the HTML ID attribute that will be assigned to the specific <li>tag of the navigation menu. So if you have: 'tree_contents' => { 'host' => "default", 'text' => "Top 1", 'title' => "T1 Title", 'expand_re' => "", 'subs' => [ { 'text' => "Home", 'url' => "", }, { 'text' => "About Me", 'title' => "About Myself", 'url' => "me/", 'li_id' => 'about_me', }, ], }, Then the HTML for the About me will look something like: <li id="about_me"> <a href="me/About Me</a> </li> Predicate Values An explicitly specified predicate value is a hash reference that contains one of the following three keys with their appropriate values: - 'cb' => \&predicate_func This specifies a sub-routine reference (or "callback" or "cb"), that will be called to determine the result of the predicate. It accepts two named arguments - 'path_info'which is the path of the current page (without the leading slash) and 'current_host'which is the ID of the current host. Here is an example for such a callback: sub predicate_cb1 { my %args = (@_); my $host = $args{'current_host'}; my $path = $args{'path_info'}; return (($host eq "true") && ($path eq "mypath/")); } - 're' => $regexp_string This specifies a regular expression to be matched against the path_info (regardless of what current_host is), to determine the result of the predicate. - 'bool' => [ 0 | 1 ] This specifies the constant boolean value of the predicate. Note that if 'cb' is specified then both 're' and 'bool' will be ignored, and 're' over-rides 'bool'. Orthogonal to these keys is the 'capt' key which specifies whether this expansion "captures" or not. This is relevant to the behaviour in the breadcrumbs' trails, if one wants the item to appear there or not. The default value is true. If the predicate is not a hash reference, then HTML::Widgets::NavMenu will try to guess what it is. If it's a sub-routine reference, it will be an implicit callback. If it's one of the values "0", "1", "yes", "no", "true", "false", "True", "False" it will be considered a boolean. If it's a different string, a regular expression match will be attempted. Else, an excpetion will be thrown. Here are some examples for predicates: # Always expand. 'expand' => { 'bool' => 1, }; # Never expand. 'expand' => { 'bool' => 0, }; # Expand under home/ 'expand' => { 're' => "^home/" }, # Expand under home/ when the current host is "foo" sub expand_path_home_host_foo { my %args = (@_); my $host = $args{'current_host'}; my $path = $args{'path_info'}; return (($host eq "foo") && ($path =~ m!^home/!)); } 'expand' => { 'cb' => \&expand_path_home_host_foo, }, The Node Description Class When retrieving the leading path or the nav_links_obj, an array of objects is returned. This section describes the class of these objects, so one will know how to use them. Basically, it is an object that has several accessors. The accessors are: - host The host ID of this node. - host_url The URL of the node within the host. (one given in its 'url' key). - label The label of the node. (one given in its 'text' key). This is not SGML-escaped. - title The title of the node. (that can be assigned to the URL 'title' attribute). This is not SGML-escaped. - direct_url A direct URL (usable for inclusion in an A tag ) from the current page to this page. - url_type This is the url_type(see above) that holds for this node. SEE ALSO See the article Shlomi Fish wrote for Perl.com for a gentle introduction to HTML-Widgets-NavMenu: An HTML::Widgets::NavMenu sub-class that contains support for another role. Used for the navigation menu in. - HTML::Widget::SideBar A module written by Yosef Meller for maintaining a navigation menu. HTML::Widgets::NavMenu originally utilized it, but no longer does. This module does not makes links relative on its own, and tends to generate a lot of JavaScript code by default. It also does not have too many automated test scripts. - HTML::Menu::Hierarchical A module by Don Owens for generating hierarchical HTML menus. I could not quite understand its tree traversal semantics, so I ended up not using it. Also seems to require that each of the tree node will have a unique ID. - HTML::Widgets::Menu This module also generates a navigation menu. The CPAN version is relatively old, and the author sent me a newer version. After playing with it a bit, I realized that I could not get it to do what I want (but I cannot recall why), so I abandoned it. AUTHORS Shlomi Fish, <shlomif@cpan.org>, . THANKS Thanks to Yosef Meller () for writing the module HTML::Widget::SideBar on which initial versions of this modules were based. (albeit his code is no longer used here). You can use, modify and distribute this module under the terms of the MIT X11 license. ( ).
https://metacpan.org/pod/release/SHLOMIF/HTML-Widgets-NavMenu-1.0702/lib/HTML/Widgets/NavMenu.pm
CC-MAIN-2015-35
refinedweb
2,648
62.38
Fun with Ruby method argument defaults Just yesterday I suddenly understood that there is a small neat trick, allowing to provide friendly error messages for missing method parameters: # default way to do things: def read_data(file) # ... end read_data('1.txt') # => works read_data # wrong number of arguments (given 0, expected 1) (ArgumentError) # ^^ not really friendly def read_data(file:) # ... end read_data # missing keyword: file (ArgumentError) # ^^ at least some hint of what was expected # But how about this? def read_data(file = raise(ArgumentError, '#read_data requires path to .txt file with data in proper format')) # ... end read_data # #read_data requires path to .txt file with data in proper format (ArgumentError) # ^^ isn't it nice?.. Of course, it is not that useful with a simple method with one argument, but for complicated APIs with several keyword args, it might be of some use. But what I was pleasantly surprised with is how simple it is—and that it works. How it works? The argument = raise(...) is not some separate Ruby feature, but it is a natural consequence of two facts: - You can put any Ruby expression as the argument’s default value, and it will be evaluated in the same context as the method’s body, on each method call (when the argument is not provided) raiseis just a method, not some special syntax, and like any other method call, it is an expression and can be put as an argument’s default value. “Any expression”? Really? Yep. You can do even this (though you probably shouldn’t!): def read_data(file = begin puts "Using default argument" if Time.now.hour < 12 'morning.txt' else 'evening.txt' end end) puts "Reading #{file}" end read_data # Prints: # Using default argument # Reading evening.txt As was already said above, the context of evaluation is the same as for method body, and all default values are evaluated sequentially, so you can do this (and probably shouldn’t!): class ArgsTracker attr_reader :args def initialize @args = [] end def track( a: begin; args << :a; 100 end, b: begin; puts "a was #{a}"; args << :b end) end end tracker = ArgsTracker.new tracker.track # Prints: "a was 100", and adds [:a, :b] to tracker tracker.track(a: 5) # Prints: "a was 5", and adds only [:b] (which was not provided) to tracker tracker.args # => [:a, :b, :b] Cool. Ugly, but cool. How is this useful? The fact that default values are calculated on each call, and in the context of called class, have some simple and useful consequences. Probably you already have seen and used some of them: def log(something, at: Time.now) # will be calculated at each call of log, when alternative at: is not provided #... end def setup_output(out: $stdout, err: $stderr, warn: out) # default output device for warn would be always # the same as `out` # ... end class A def process(order: default_order) # will call the same object's method to calculate default end private def default_order # some complicated calculation, depending on the object's state end end More advanced usage Besides the example from which we have started, one might think about other relatively sane but not very simple usages of the on-the-fly calculation, for example, tracking of default values usage (might be useful on legacy refactoring, when we aren’t sure whether defaults are used at all, but can’t allow ourselves to just break the codebase): def log_default(name, value) # or logger.debug puts "#{caller.first}: default value for #{name} was invoked from #{caller[2]}" value end # Now change this: def some_method(factor: 100) end #...to this: def some_method(factor: log_default(:factor, 100)) end # ...and... some_method # Logs: # ...in `some_method': default value for factor was invoked from `some_other_method' One might also imagine my initial example (with fail) extended for some very friendly API to use like fun(arg: friendly_fail(:arg)), which fetches large explanatory string from constant/ i18n config, enriches it with calling context (like, “if caller contains this, we are saying this shouldn't be called from <framework>”) and raises Very Friendly Exception. Not you should do something like this anytime soon, but rather “it is interesting that you can, and probably someday you’d like to try”. Have fun!
https://zverok.github.io/blog/2020-11-19-arg_defaults.html
CC-MAIN-2021-10
refinedweb
692
60.65
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Compilers are sometimes full of surprises and such strange errors happened in the course of the development that I wanted to list the most fun for readers’ entertainment. VC8: template <class StateType> typename ::boost::enable_if< typename ::boost::mpl::and_< typename ::boost::mpl::not_< typename has_exit_pseudo_states<StateType>::type >::type, typename ::boost::mpl::not_< typename is_pseudo_exit<StateType>::type >::type >::type, BaseState*>::type I get the following error: error C2770: invalid explicit template argument(s) for '`global namespace'::boost::enable_if<...>::...' If I now remove the first “::” in ::boost::mpl , the compiler shuts up. So in this case, it is not possible to follow Boost’s guidelines. VC9: This one is my all times’ favorite. Do you know why the exit pseudo states are referenced in the transition table with a “submachine::exit_pt” ? Because “exit” will crash the compiler. “Exit” is not possible either because it will crash the compiler on one machine, but not on another (the compiler was installed from the same disk). Sometimes, removing a policy crashes the compiler, so some versions are defining a dummy policy called WorkaroundVC9. Typeof: While g++ and VC9 compile “standard” state machines in comparable times, Typeof (while in both ways natively supported) seems to behave in a quadratic complexity with VC9 and VC10. eUML: in case of a compiler crash, changing the order of state definitions (first states without entry or exit) sometimes solves the problem. g++ 4.x: Boring compiler, almost all is working almost as expected. Being not a language lawyer I am unsure about the following “Typeof problem”. VC9 and g++ disagree on the question if you can derive from the BOOST_TYPEOF generated type without first defining a typedef. I will be thankful for an answer on this. I only found two ways to break the compiler: Add more eUML constructs until something explodes (especially with g++-4.4) The build_terminate function uses 2 mpl::push_back instead of mpl::insert_range because g++ would not accept insert_range. You can test your compiler’s decltype implementation with the following stress test and reactivate the commented-out code until the compiler crashes.
http://www.boost.org/doc/libs/1_57_0/libs/msm/doc/HTML/ch04s05.html
CC-MAIN-2014-49
refinedweb
370
53
Data Persistence With ConfigObj Contents Introduction Note Since the introduction of the unrepr mode to ConfigObj, there is now a better way of doing data-persistence with ConfigObj. The techniques and code discussed in this article are still useful for automatically creating a configspec. This beats creating them by hand. ConfigObj is a pure python module for the easy reading and writing of application configuration data. It uses an ini file like syntax - similar to the ConfigParser module - but with much greater power. ConfigObj can store nested sections. A section maps members (values) to names. This is bascially what the Python dictionary object does, and so we use the dictionary to represent a section. Every value can be a single value or a list. Individual values are stored as strings - but using the validate module they can be transparently translated to and from floats, booleans or integers [1]. This means that the ConfigObj can naturally represent Python data structures comprised of dictionaries [2], lists, strings, floats, booleans and integers. This is most of the basic datatypes. This article discusses using ConfigObj for the common programmer's task of data-persistence - the storing and retrieving of data structures based on the Python dictionary. Along the way we evolve a set of tools (with a high level interface) to do this. Hint You can see the final results of this article as the module ConfigPersist.py. The Problem There are some restrictions though. ConfigObj can't just be used to represent arbitrary data structures - even if all the members are allowed types. - Although dictionaries can be nested, they can't be inside lists. - Lists also can't be nested inside each other [3]. - Values other than strings need a schema (a configspec) to convert them back into the right type. - Dictionary keys must be strings. - It is actually impossible to store a string containing single triple quotes (''') and double triple quotes ("""). - List members cannot contain carriage returns. (Single line values only). [4] ConfigObj isn't a data persistence module - this list of restrictions tells you that much. However if you examine the typical data structures used in your programs you may find that these restrictions aren't a problem for many of them. Why Not Pickle ? Why would we want to do this ? Well, the usual method for preserving data structures is the Python pickle module. This can store and retrieve a much wider range of objects - with none of the restrictions above. However : - Pickles aren't human readable or writeable. This makes ConfigObj ideal for debugging, or where you want to manually modify the data. - Pickles are unsafe - a maliciously crafted pickle can cause arbitrary code execution. - ConfigObj is slightly easier to use - data = ConfigObj(filename) and data.write(). Of these, the first two reasons are the most compelling. So we've looked at the sort of data that ConfigObj can and can't store. We still have a big problem. ConfigObj is designed for storing strings - this means that our data will have been converted to strings when we read it back in. The configspec If you know the datatype of each member then you can write a configspec. If you pass this into the ConfigObj when you read the config file [5] then you can call the validate method. This uses the configspec to transform the values into the expected data types. It will even transform each member of list values into the right type. Note In fact the configspec does more than just specify the type of each member. It can be used to specify the bounds or parameter of each value. So if your data structure is always going to have members of the same type (but possibly different values) you could write a configspec for it. That sounds like hard work though . Let's write a function that will automatically generate a configspec for a ConfigObj. Note If all your values are strings, you don't need to use a configspec. Lists will automatically be converted into lists of strings without needing validation. Creating a configspec A configspec is a dictionary of checks for a section. In the first step we'll walk a ConfigObj and create a configspec for it. The types we'll check for are strings, booleans, integers, and floats. We'll also check for lists of these types. the check is done using an isinstance test - so subclasses are allowed (but won't be recreated when read from the file). This function modifies a ConfigObj inplace - so it doesn't return anything. It will overwrite any existing configspec. """ A function that adds a configspec to a ConfigObj. Will only work for ConfigObj instances using basic datatypes : * floats * strings * ints * booleans * Lists of the above """ config.configspec = {} for entry in config: val = config[entry] if isinstance(val, dict): # a subsection add_configspec(val) elif isinstance(val, bool): config.configspec[entry] = 'boolean' elif isinstance(val, int): config.configspec[entry] = 'integer' elif isinstance(val, float): config.configspec[entry] = 'float' elif isinstance(val, str): config.configspec[entry] = 'string' elif isinstance(val, (list, tuple)): list_type = None out_list = [] for mem in val: if isinstance(mem, str): this = 'string' elif isinstance(mem, bool): this = 'boolean' elif isinstance(mem, int): this = 'integer' elif isinstance(mem, float): this = 'float' else: raise TypeError('List member "%s" is an innapropriate type.' % mem) if list_type and this != list_type: list_type = 'mixed' elif list_type is None: list_type = this out_list.append(this) if list_type is None: l = 'list(%s)' else: list_type = {'integer': 'int', 'boolean': 'bool', 'mixed': 'mixed', 'float': 'float', 'string': 'string' }[list_type] l = '%s_list(%%s)' % list_type config.configspec[entry] = l % str(out_list)[1:-1] # else: raise TypeError('Value "%s" is an innapropriate type.' % val) Having created a configspec you should then be able to call validate and have it return True : from validate import Validator vtor = Validator() config = ConfigObj(filename) add_configspec(config) assert config.validate(vtor) == True Next thing to do is to retrieve the configspec as a list of lines. For this we'll need a new function. This function assumes you have already called add_configspec. """Return the configspec (of a ConfigObj) as a list of lines.""" out = [] for entry in config: val = config[entry] if isinstance(val, dict): # a subsection m = config.main._write_marker('', val.depth, entry, '') out.append(m) out += write_configspec(val) else: name = config.main._quote(entry, multiline=False) out.append("%s = %s" % (name, config.configspec[entry])) # return out This function now returns a configspec that we can use to validate a ConfigObj. It will also restore the type of any non-string values. # set some non string values config['member 1'] = 3 config['member 2'] = 3.0 config['member 3'] = True config['member 4'] = [3, 3.0, True] add_configspec(config) configspec = write_configspec(config) # lets create a copy of the original config # and validate it with the configspec we made b = ConfigObj(config.write(), configspec=configspec) assert b.validate(vtor) == True assert b == config The Next Step Great - so we now have a way of storing data structures and restoring the values with the correct type. The only problem is that we have to store the type information separately from the actual data - what a nuisance. Wouldn't it be funky if we could store the type info in the data structure. Obviously we'd want to read and write this transparently. Saving it is easy. We create a new subsection in each section called __types__. This contains a dictionary with a copy of the configspec in it. When we call the write method this will automatically get saved out for us . """ Turns the configspec attribute of each section into a member of the section. (Called ``__types__``). You must have already called ``add_configspec`` on the ConfigObj. """ for entry in config.sections: add_typeinfo(config[entry]) config['__types__'] = config.configspec That looks like it should work. What about reading this back in ? We'll need to do the opposite of course. """Turns the '__types__' member of each section into a configspec.""" for entry in config.sections: if entry == '__types__': continue typeinfo_to_configspec(config[entry]) config.configspec = config['__types__'] del config['__types__'] Putting this together avoids the need for the write_configspec stage : # set some non string values config['member 1'] = 3 config['member 2'] = 3.0 config['member 3'] = True config['member 4'] = [3, 3.0, True] # create a copy to test # because add_typinfo modifies the config orig = ConfigObj(config) add_configspec(config) add_typeinfo(config) config.filename = 'test.ini' config.write() b = ConfigObj('test.ini') typeinfo_to_configspec(b) assert b.validate(vtor) == True assert b == orig So now we have two ways of saving and restoring data structures. Both of them involve calling add_configspec(config) first. Then we can either call write_configspec(config) or add_typeinfo(config). write_configspec is useful where we repeatedly work with similar data structures. If they all have the same structure (or type signature) then a single configspec will work repeatedly. In this case it makes sense to store it separately. If we just want to store and restore an arbitrary data structure (within our limitations) then we should use add_typeinfo and typeinfo_to_configspec. In both cases the intermediate file that is saved is simple enough to be edited by hand. Final Step We can see in the examples above that our conversions are done with simple two step processes. Like all programmers I am lazy and would prefer this to be a simple one step process. Lets create a couple of convenience functions that do them in a single step : from validate import Validator except ImportError: vtor = None else: vtor = Validator() def store(config): """" Passed a ConfigObj instance add type info and save. Returns the result of calling ``config.write()``. """ add_configspec(config) add_typeinfo(config) return config.write() def restore(stored): """ Restore a ConfigObj saved using the ``store`` function. Takes a filename or list of lines, returns the ConfigObj instance. Uses the built-in Validator instance of this module (vtor). Raises an ImportError if the validate module isn't available """ config = ConfigObj(stored) if vtor is None: raise ImportError('Failed to import the validate module.') typeinfo_to_configspec(config) config.validate(vtor) return config def save_configspec(config): """Creates a configspec and returns it as a list of lines.""" add_configspec(config) return write_configspec(config) These functions are all designed to work with dictionary like data structures containing the basic data types. You can initialise a ConfigObj instance from the dictionary (config = ConfigObj(dict)) - so it's a very easy to use technique. If you wanted to extend this system to work with additional data types it wouldn't be hard. You would need to add functions to your Validator instance (see the validate docs) and also amend the functions here to handle the extra types. You can download these functions as a module called ConfigPersist.py. Note Perhaps the most serious restriction is that dictionary keys must be strings. It would be possible to walk a dictionary converting all keys to strings - where the string contains type info. (e.g. 3 becomes '3:int', 3.0 becomes '3.0:float', '3' to '3:str' etc). You would need another function that walks the ConfigObj and rebuilds the dictionary, restoring the type of the keys. I leave this as an exercise to the reader - not least because it reduces the readability of the saved config file. Feb 15 13:42:08 2008. Counter...
http://www.voidspace.org.uk/python/articles/configobj_for_data_persistence.shtml
crawl-002
refinedweb
1,881
58.69
The packers are just tools that try to encrypt a program to make it hard to be reversed. These tools can be used by many as anti-crackers and more in worms and viruses. Reversing the packer is a waste of time and not a good strategy, so there's the solution. You can create an unpacker for these types of packers with only 10 lines, and maybe create a tool that unpacks most of the known packers. You can create this unpacker easily with Pokas x86 Emulator. Many generic unpackers appeared, but none of them were easily usable and hence customizable. However, this tool is maybe the best tool in the unpacking field right now as it is very easy to use, very easy to be customized and open source. Now you will ask me, what's the meaning of Emulator? Emulator is a program that simulates the processor and memory. Do what they do... diassembles an instruction and runs it in virtual registers and memory. These programs don't run without stopping, but they stop by a breakpoint you added in the debugger that is included in the emulator. This breakpoint should stop the emulator at the OEP (the original entry point). Pokas x86 Emulator is an Application-Only emulator created for generic unpacking and testing the antivirus detection algorithms. This Emulator has many features. Some of them are: seh tib teb peb peb_ldr_data GetModuleHandleA LoadLibrayA GetProcAddress VirtualAlloc VirtualFree VirtualProtect You can get the emulator from here. You will find a good documentation of everything in the emulator. Some people who love challenges will ask why I should program an unpacker even if it's only near 10 lines code with this emulator? I want to say that coding the unpacker is the last step you should do. First, you should begin with reversing and finding the all anti-unpackers tricks. Then you should find the suitable breakpoint that bypasses all the packer defences and then begin converting all these into code. I don't want to dig into the emulator, but I think it is like this: Now I will describe the emulator components and classes that are in the UML figure. System: This is the Emulated Operating System and CPU. This class is used for disassembling and emulating assembly instructions and APIs. It also has the ability to assemble and disassemble any instruction from or to mnemonics for easy to debug or fix any error and also it could be used in any application as a separate component. Process: This class is the emulated application. This class is the real emulator as it calls the system functions to emulate, calls the debugger for checking the breakpoints, manage the threads and handle the exceptions (if SEH is enabled). It also manages the virtual memory and creates the PEB structure. Thread: This class contains all variables that we need to emulate with. It contains the registers, Eip and EFlags and fs segment. It doesn't have the debug registers (because there is no need for it) and all segments except fs. It also creates the TEB structure and handles the SEH. Virtual Memory: This class is used to emulate the memory. It doesn't support pages, but it simply adds a pointer and the equivalent virtual pointer and the size of that buffer. It also monitors the memory writes and detects the invalid pointers and the writing on a read only page. It also supports VirtualProtect API. VirtualProtect Stack: It's a small class used to push or pop a value from or to the stack and saves the top and bottom of stack in two variables, but it doesn't support resizing. Debugger: This class is used for adding the stop condition for the emulator. It takes a condition in a string format (ex: "eip==0x4001000 && ecx>=100") and this condition is checked with every instruction emulated by emulate or emulatecommand functions. The feature of this debugger that it doesn't decrease the performance as it parses the string and converts it into assembly and then converts it by the assembler into a native code to be run in every check on this breakpoint. string eip==0x4001000 && ecx>=100 emulate emulatecommand PE: This is not a class. It's a library that contains separate functions for working with the PE executables. It contains two functions: PELoader PEDump Don't worry if you don't understand many of the descriptions above. You will see that the usage of this emulator is very simple: Let's see an example: #include <cstdlib> #include <iostream> #include "x86emu.h" using namespace std; int main() { //First we will create the Environment Variables //the Environment Variables is just some parameters or setting //will be passed to the System EnviromentVariables* vars= (EnviromentVariables*)malloc (sizeof(EnviromentVariables)); memset( vars,0,sizeof(EnviromentVariables)); //this variable should be adjusted to make the system perform well. //This path is the path to the folder that contain //the important DLLs which are ("kernel32.dll","ntdll.dll","user32.dll") vars->dllspath="C:\\Windows\\System32\\"; //here we will set the Imagebase of the kernel32.dll and user32.dll vars->kernel32=(dword)GetModuleHandleA("kernel32.dll"); vars->user32=(dword)LoadLibraryA("user32.dll"); //there's other variables but we can ignore the right now //now we will create the new system System* sys=new System(vars); define_dll("gdi32.dll",vars->dllspath,0x75DE0000); //now We Will create a new Process Process* c; try{ c=new Process(sys,"upx.exe"); //process take two parameters system & //the program filename }catch(int x){ cout << "Error : File name not found\n"; }; //Adding new breakpoint is an easy task c->debugger->AddBp("__isdirty(eip)"); //there's two commands to emulate c->emulate() & //c->emulatecommand() but emulatecommand(int) int x=c->emulate();//*/ if (x!=EXP_BREAKPOINT){ //there are other exceptions like //invalid pointer and so on return an error for them cout << "Error = " << x << "\n"; }else{ //Dump the PE file here PEDump(c->GetThread(0)->Eip,c,"test.exe"); } Int3 Breakpoint or Hardware on Execution "Eip ==0x00401000" Memory on Access or Write "__lastaccessed()==0x00401000" "__lastmodified()==0x00401000" Execution on Modified Data "__isdirty(eip)" In .text section only "__isdirty(eip) && eip>=0x401000 && eip <=0x405000" Anti-unpackers trick : write "ret" on the real OEP and calls to it "__isdirty(eip) && ( __ read(eip) & 0xff ) !=0xC3)" API Hooking "__isapi()" "__isapiequal('Getprocaddress')" // not case sensitive in the api name For compiling the program, I usually use Dev-cpp and gcc, but I think VC++ will not create any problem I think. You will need x86emu.dll and x86emu.a (You must add them to the linker from project options->parameters->add library or object) and some header files included in x86emu-bin.zip. After execution, you will see a program named test.exe that should be the unpacked application. If it doesn't run, change the compatibility from right click->properties->compatibility-> and change the Windows version and it should run correctly. You can get more information on solving the problem by reading the emulator documentations in x86emu-doc.zip and email me with any problem at amr.thabet[at]student.alx.edu.eg. In the next article, we will expose the features of this emulator and we will also cover the entire steps to write an unpacker including reversing, analysing and programming. This is an a screenshot of an example we will talk about in the next article and is included in the examples (attached with the article). Hope you like this emulator and the whole article. See you soon in the next article. I love programming very much. I always wanted to know about all technologies, how they were created and how I can create the same. From this, I began digging into the processor and Windows internals. Everyone learns high level languages and I learn the lowest level. I try to know how these high level programming languages work and how Windows works. I begin digging and digging, and still I haven't reached the bottom, but I'll always.
http://www.codeproject.com/Articles/99873/Write-Your-Own-Unpacker?fid=1582422&df=10000&mpp=50&noise=5&prof=True&sort=Position&view=None&spc=Relaxed&select=4427381&fr=1
CC-MAIN-2016-18
refinedweb
1,334
54.42
Forum:Names of Admins in various lists From Uncyclopedia, the content-free encyclopedia Having just gotten my name added to the above MediaWiki file, I observed that the desired effect--rendering Admins' names in boldface in Special:RecentChanges--is not working. Puppy verifies this but adds that he doesn't like boldfacing anyway, as there is another useful use of bold in this report. I seem to recall that my initial reaction to this was the same. I think I know the cause of this, and have a recoding that is working at home, and extends to the Watchlist and even listings of differences between versions of pages. If youse want to call out the names of Admins, what I would do is (with side-padding and a non-repeating background image) add a tiny image file to one side; probably a star-shaped Sheriff's badge. (Not, please, a swastika, nor an animated Mordillo Jew-ninja.) I am not sure this issue is ready for a vote, but there are four possible directions: - Sure; the image sounds cool. (Such as Romartus .) - Just get the frickin' boldfacing to work right. - I think Admins suck and I don't want to see any special treatment of their names in lists. - (Or perhaps: I am a cheap bastard with metered Internet and resent having to reload a list including every fallen-away Admin every time I clear my cache.) - "I don't care" seems like a very valid response as well. Also, I welcome comments on whether the same policy should extend beyond RecentChanges to pages that normal people actually view. Spıke Ѧ 13:34 14-Feb-13 - I'm happier without admins name in bold, as Spike points out. Although I'd happily create a .js or .css that anyone could access if they wanted it. Having said that, Spike already has. And the site .css is a bloated mess as it is. But if anyone can point out the original vote to implement this I'd happily go with that rather than re-vote and just get the stupid thing working. In other words, 4, with a hint of 3, followed by 2, but 1 also sounds cool. • Puppy's talk page • 01:53 14 Feb Spike can't see this, so I will describe it as a pic of a very handsome sherrif, with a nice horse and a saloon woman on each arm. - You mean a vote on if when admins make an edit that it would show up in recent changes with a sherriff's badge next to it? hahahahahahah chokechoke hahahahahah hehehehehehehehehhahaheheh, huh? Aleister 21:17 (can't Spike just add the badge at the end of his signature, which would solve the problem of not having a star?) - For what I see at home, see the illustration on Puppy's page. I take your response as a No to the star. Do you have an opinion on the other issues? Spıke Ѧ 21:24 14-Feb-13 - hahahahahahahaha oh, if I chose one, it would be four "I don't care", or, if pushed, I'd make sure admins had trumpet music playing throughout the site every time they made an edit. Not a long song, but just a few notes of a trumpet, like when the prince of Wales enters the ballroom and each and all stop what they're doing and look at him. Puppy can probably set that up as a sound-file, and connect it through the internet tubes into the sound systems on these computer machines, and maybe that would be my choice but since it's not an option, number seven above would work. Aleister a little bit of background, as you earlier asked me in this interview, in the middle '90s I decided I could solve some of these problems, and was led to a West African dirigible. - p.s. I just looked at the illustration on Puppy's page. hahahahahahahahahaha Anyway, I'm a blogger here, so I have no opinions except it would be a god holy mess looking at recent changes if those huge blocks were splattered all over the place. I've had Mordillo tied-up in my basement for a couple of years now, and he still is spinning like a star. —The preceding unsigned comment was added by Unknown (talk • contribs) - Wow. An real picture of Mordillo. That proves you have him. If you return him to us we will pay you £1.37. - —The preceding unsigned comment was added by CouldHaveBeenAnyone (talk • contribs) 01:31, February 15, 2013 (UTC) - While again giving a nod to the validity of option 4 ("I don't care"), my preference is option 3 (no mandatory highlighting in any report). At home, I now have sheriff's stars for every Admin, old Nipper beside Puppy's name just for helping out, something too for Aleister's name just for doing whatever it is that he does, and garish highway signs to call out my own edits. I don't care that anyone else have any of the above, and no one (metered Internet or not) should have to repeatedly download a list of us because someone (we) wants us to be shown in boldface. For anyone who wants this, the code should be moved to a separate file here and UN:HAX should tell users how to invoke it. Spıke Ѧ 12:30 15-Feb-13 A little test I see that this topic has dropped on the floor with a clang. I just got a listing that actually shows Admins' names in bold (by specifying Hide grouped recent changes) which produces a "list." Normally I see a "table" and they don't. Am going to edit MediaWiki:Common.css to change "Mnbvcxz" so it is in bold everywhere that MediaWiki knows it's a link to his user page. (If I picked Puppy, all hell would break loose.) This is just a little test which, in a couple days without howls of protest, I'll generalize. Again, I am equally happy deleting these formatting commands entirely, but it seems nobody cares. Spıke Ѧ 11:45 23-Feb-13 - I'm still of the opinion delete it but create a script that can be imported into a personal css to have the same impact. Given UN:N, I don't see a point. • Puppy's talk page • 01:32 23 Feb 2013 Implemented It strikes me that, after 9 days, we have a unanimous (except for Aleister shouting "present" at the top of his lungs) vote. The current boldfacing is described in UN:HAX and I'll update it. (Why am I seeing no section-numbering in this article?) I'll apply the same edit to all the Admin names, "de-opp" Mnbvcxz, and move the result into a new file. (Puppy: Into which namespace should the code be moved?) Spıke Ѧ 15:29 23-Feb-13 - I'd pop it into MediaWiki:BoldAdmin.css or something similar. (I have no idea on the section numbering there.) • Puppy's talk page • 09:58 23 Feb 2013 Done; at MediaWiki:Admin names in bold.css. I'll document it in the HAX. Spıke Ѧ 22:20 23-Feb-13 - Actually - now that you've done that, did you want to instead: - Rename MediaWiki:Admin names in bold.css to MediaWiki:Gadget-Admin_names_in_bold.css - Create MediaWiki:Gadget-Admin_names_in_bold with the text: Display [[Uncyclopedia:Hacks#Administrators_listed_in_boldface_in_reports|administrators names in bold]] on special pages - Add Admin_names_in_bold|Admin_names_in_bold.cssto MediaWiki:Gadgets-definition#interface-gadgets? - That may be an easier way for non-code savvy people to access this gadget, as that will add it to Special:Preferences. • Puppy's talk page • 10:57 23 Feb 2013 I am all in favor of such people having an easier alternative to copying CSS code. Adding it to My preferences is great, provided that unchecking the box deletes the code, which it doesn't sound like. For starters, can we tell people to include the file by name rather than by physical copy? Can someone put @import at the start of his user CSS? Spıke Ѧ 23:38 23-Feb-13
http://uncyclopedia.wikia.com/wiki/Forum:Names_of_Admins_in_various_lists
CC-MAIN-2015-40
refinedweb
1,367
70.63
> Another possibility might be to use a news reader like Tin on a unix > shell and somehow export selected articles to a text file which may be > possible to convert to soup. Any thoughts on this? If you have access to a Unix shell, all of this becomes easy. Most Unix shell newsreaders, such as trn, permit you to browse a list of newgroup topics, mark the topics, and then write only the marked articles to a (Unix) file. You then download this file any way you want (e.g., zmodem, ftp -- whatever) and: import -r filename imports the file with your selected topics directly into yarn. If you have Unix shell access you can also do the same thing with the uqwk program: uqwk, which is free, has a summary mode that collects you a file of subjects only from newsgroups. You then edit this file on Unix (or if you must on dos), leaving only the subjects you want. uqwk will then deliver to yarn a soup packet containing only the articles that correspond to your selected subjects. There are several dos programs designed just to edit files with subject lines from news groups. But I think it is unnecessary to transfer a file from unix to dos, and then back to unix -- just to edit a file in dos. I can easily do it in Unix with just about any unix editor.
http://www.vex.net/yarn/list/199902/0029.html
crawl-001
refinedweb
235
77.06
13 Deploying Bokeh Apps ¶ In the previous sections we discovered how to use a HoloMap to build a Jupyter notebook with interactive visualizations that can be exported to a standalone HTML file, as well as how to use DynamicMap and Streams to set up dynamic interactivity backed by the Jupyter Python kernel. However, frequently we want to package our visualization or dashboard for wider distribution, backed by Python but run outside of the notebook environment. Bokeh Server provides a flexible and scalable architecture to deploy complex interactive visualizations and dashboards, integrating seamlessly with Bokeh and with HoloViews. Bokeh server apps can be used for a wide range of applications, but here we will show how to use them with Datashader and related libraries: For a detailed background on Bokeh Server see the Bokeh user guide . In this tutorial we will discover how to deploy the visualizations we have created so far as a standalone Bokeh Server app, and how to flexibly combine HoloViews and Panel to build complex apps. We will also reuse a lot of what we have learned so far---loading large, tabular datasets, applying Datashader operations to them, and adding linked Streams to our app. A simple Bokeh app ¶ The preceding sections of this tutorial focused solely on the Jupyter notebook, but now let's look at a bare Python script that can be deployed using Bokeh Server: with open('apps/server_app.py', 'r') as f: print(f.read()) import os import dask.dataframe as dd import holoviews as hv from holoviews.operation.datashader import datashade hv.extension('bokeh') # 1. Load data and Datashade it ddf = dd.read_parquet(os.path.join(os.path.dirname(__file__),'..','..','data','nyc_taxi_wide.parq'))[['dropoff_x', 'dropoff_y']].persist() points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y']) shaded = datashade(points).opts(plot=dict(width=800, height=600)) # 2. Instead of Jupyter's automatic rich display, render the object as a bokeh document doc = hv.renderer('bokeh').server_doc(shaded) doc.title = 'HoloViews Bokeh App' Step 1 of this app should be very familiar by now -- declare that we are using Bokeh to render plots, load some taxi dropoff locations, declare a Points object, Datashade them, and set some plot options. At this point, if we were working with this code in a notebook, we would simply type shaded and let Jupyter's rich display support take over, rendering the object into a Bokeh plot and displaying it inline. Here, step 2 adds the code necessary to do those steps explicitly: - get a handle on the Bokeh renderer object using hv.renderer - create a Bokeh document from shadedby passing it to the renderer's server_docmethod - optionally, change some properties of the Bokeh document like the title. This simple chunk of boilerplate code can be added to turn any HoloViews object into a fully functional, deployable Bokeh app! Deploying the app ¶ Assuming that you have a terminal window open with the pyviz environment activated, in the ../apps/ directory, you can launch this app using Bokeh Server: bokeh serve --show server_app.py If you don't already have a favorite way to get a terminal, one way is to open it from within Jupyter , then make sure you are in the ../apps directory, and make sure you are in the right Conda environment if you created one (activating it using source activate pyviz (or activate pyviz on Windows)). # Exercise: Modify the app to display the pickup locations and add a tilesource, then run the app with bokeh serve # Tip: Refer to the previous notebook Building an app with custom widgets ¶ The above app script can be built entirely without using Jupyter, though we displayed it here using Jupyter for convenience in the tutorial. Jupyter notebooks are also often helpful when initially developing such apps, allowing you to quickly iterate over visualizations in the notebook, deploying it as a standalone app only once we are happy with it. In this section we will combine everything we have learned so far including declaring of various parameters to control our visualization using a set of widgets. We begin as usual with a set of imports: import holoviews as hv, geoviews as gv, param, dask.dataframe as dd, panel as pn from colorcet import cm from bokeh.document import Document from holoviews.operation.datashader import rasterize, shade from holoviews.streams import RangeXY from cartopy import crs hv.extension('bokeh', logo=False)
http://pyviz.org/tutorial/13_Deploying_Bokeh_Apps.html
CC-MAIN-2018-47
refinedweb
727
50.77
Qt Remote Objects Remote Object Concepts Qt in QtRO) is forwarded to the true object (called a Source in QtRO) for handling. Updates to the Source (either property changes or emitted Signals) are forwarded to every Replica. A Replica is a light-weight proxy for the Source object, but one that supports the same connections and behavior of QObjects, which makes them as easy to use as any other QObject provided by Qt. Everything needed for the Replica to look like the Source object is handled behind the scenes by QtRO. Note that Remote Objects behave differently from traditional remote procedure call (RPC) implementations. In RPC, the client makes a request and waits for the response. In RPC, the server does not push anything to the client unless it is in response to a request. The design of RPC is often such that different clients are independent of each other (for instance, two clients can ask a mapping service for directions and get different results). While it is possible to implement this in QtRO (as Source without properties, and Slots that have return values), it is designed more to hide the fact that the processing is really remote. You let a node give you the Replica instead of creating it yourself, possibly use the status signals (isReplicaValid()), but then interact with the object like you would with any other QObject-based type. Related Information Getting Started To enable Qt Remote Objects in a project, add this directive into the C++ files: #include <QtRemoteObjects> To link against the Qt Remote Objects module, add this line to the project file: QT += remoteobjects Guides - Overview Qt Remote Objects - Qt Remote Objects C++ Classes - Qt Remote Objects Nodes - Qt Remote Objects Source Objects - Qt Remote Objects Replica Objects - Qt Remote Objects Registry - Qt Remote Objects Compiler - Remote Object Interaction - Using Qt Remote Objects - Troubleshooting Qt Remote Objects Reference Qt Remote Objects.
http://doc-snapshots.qt.io/qt5-5.11/qtremoteobjects-index.html
CC-MAIN-2018-26
refinedweb
317
50.91
Closed Bug 918987 Opened 8 years ago Closed 7 years ago Implement String .prototype .normalize Categories (Core :: JavaScript Engine, defect) Tracking () mozilla31 People (Reporter: evilpie, Assigned: arai) Details (Keywords: dev-doc-complete, feature, relnote, Whiteboard: [js:p2:fx31][DocArea=JS][qa-]) Attachments (1 file, 3 obsolete files) * String.prototype.normalize(form = "NFC") - section 21.1.3.12 Returns a Unicode normalization form of the string. [2] OS: Linux → All Hardware: x86_64 → All I guess I'll take this up then. :) Assignee: general → mz_mhs-ctb Assignee: mz_mhs-ctb → nobody Comment on attachment 8369130 [details] [diff] [review] implement String.prototype.normalize and its test created from NormalizationTest.txt I think Waldo is the more appropriate reviewer for this patch. Attachment #8369130 - Flags: review?(luke) → review?(jwalden+bmo) Whiteboard: [js:p2:fx30] Comment on attachment 8369130 [details] [diff] [review] implement String.prototype.normalize and its test created from NormalizationTest.txt Review of attachment 8369130 [details] [diff] [review]: ----------------------------------------------------------------- Tests are tl;dr for now. :-) (And honestly, I don't know a thing about Unicode normalization, except haziness about the idea. I'm happy to read up and learn, but it might be worth having someone knowledgeable about normalization look at them.) That said, if you generated these test files, we want the script that created them included in the patch. Far easier to review such a script (and more easily assume correctness of the input data), than to review its output and be forced to assume the generation was done correctly. ::: js/src/js.msg @@ +223,5 @@ > MSG_DEF(JSMSG_NEGATIVE_REPETITION_COUNT, 170, 0, JSEXN_RANGEERR, "repeat count must be non-negative") > MSG_DEF(JSMSG_INVALID_FOR_OF_INIT, 171, 0, JSEXN_SYNTAXERR, "for-of loop variable declaration may not have an initializer") > MSG_DEF(JSMSG_INVALID_MAP_ITERABLE, 172, 0, JSEXN_TYPEERR, "iterable for map should have array-like objects") > MSG_DEF(JSMSG_NOT_A_CODEPOINT, 173, 1, JSEXN_RANGEERR, "{0} is not a valid code point") > +MSG_DEF(JSMSG_INVALID_NORMALIZE_FORM, 174, 0, JSEXN_RANGEERR, "form must be one of \"NFC\", \"NFD\", \"NFKC\", or \"NFKD\"") Single-quote the nested strings here for readability. ::: js/src/jsstr.cpp @@ +52,5 @@ > #include "vm/Interpreter-inl.h" > #include "vm/String-inl.h" > #include "vm/StringObject-inl.h" > > +#include "unicode/unorm.h" This needs #if EXPOSE_INTL_API #include ... #endif as long as we support building without ICU and the Internationalization API. (Which is the case right now as regards Firefox for Android/FxOS, although eventually we'll fix that.) @@ +839,5 @@ > return true; > } > #endif > > +/* ES6 20140210 draft 21.1.3.12. */ And probably this whole method should be in similar ifdefs. (Thus String.prototype.normalize would only be present if Intl is also present -- doesn't seem unreasonable to me in the short run, while ICU isn't built everywhere.) @@ +845,5 @@ > +str_normalize(JSContext *cx, unsigned argc, Value *vp) > +{ > + CallArgs args = CallArgsFromVp(argc, vp); > + > + // Steps 1, 2, and 3 Minor nitpicks: // Step 1. // Steps 2-4. // Steps 5-6. and so on for all of these, periods at end, dashes for ranges. @@ +854,5 @@ > + // Step 4 > + UNormalizationMode form; > + if (!args.hasDefined(0)) > + form = UNORM_NFC; > + else { Style -- if one arm of an if is braced, both arms are supposed to be: if (!args.hasDefined(0)) { form = UNORM_NFC; } else { @@ +863,5 @@ > + > + // Step 7 > + if (StringEqualsAscii(formStr, "NFC")) > + form = UNORM_NFC; > + else if (StringEqualsAscii(formStr, "NFD")) Instead of using StringEqualsASCII, please add all these strings to vm/CommonPropertyNames.h, then compare against cx->names().NFC and such. Also, these one-line bodies need braces, same as above. @@ +878,5 @@ > + } > + > + // Step 8 > + JSString *ns; > + int32_t srcLen = str->length(); Use #include "mozilla/Casting.h" and mozilla::SafeCast<int32_t>(str->length()) here, please. @@ +880,5 @@ > + // Step 8 > + JSString *ns; > + int32_t srcLen = str->length(); > + if (srcLen > 0) { > + const UChar *srcChars = reinterpret_cast<const UChar *>(str->getChars(cx)); You need to null-check this and return false if so. As it is now, if |this| is a rope string, this could return null, and that would be Bad. Also, #include "builtin/Intl.h" and use JSCharToUChar. @@ +883,5 @@ > + if (srcLen > 0) { > + const UChar *srcChars = reinterpret_cast<const UChar *>(str->getChars(cx)); > + UErrorCode status = U_ZERO_ERROR; > + int32_t nsLen = unorm_normalize(srcChars, srcLen, form, 0, > + nullptr, 0, &status); This probably works, but I have to imagine it's pretty slow to do a full normalization pass just to get ultimate length, then allocate memory, then do *another* normalization pass to write out the data. So we should normalize into a StringBuffer to attempt to avoid two-pass slowness -- see intl_FormatNumber for how that would work. (Duplicate the static const size_t SB_LENGTH = 32; thing for now -- we might make it a constant in StringBuffer, but that can be followup cleanups to what you're doing.) @@ +898,5 @@ > + } > + ns = js_NewStringCopyN<CanGC>(cx, nsChars, nsLen); > + js_free(nsChars); > + } else > + ns = cx->runtime()->emptyString; Don't bother with the empty-string optimization here, just normalize no matter what. @@ +3862,5 @@ > #else > JS_FN("localeCompare", str_localeCompare, 1,JSFUN_GENERIC_NATIVE), > #endif > JS_SELF_HOSTED_FN("repeat", "String_repeat", 1,0), > + JS_FN("normalize", str_normalize, 0,JSFUN_GENERIC_NATIVE), And this entry as well. Attachment #8369130 - Flags: review?(jwalden+bmo) Oh, sorry for the delay here. :-( Haven't been keeping on top of reviews well lately, which is totally my fault for holding you up here. Thank you for reviewing! Added make-string-normalize-tests.py which makes test input data, and string-normalize-input.js as the result of the script, which was string-normalize-part*.js in last patch. Also added empty string test and JSRope test in string-normalize.js Attachment #8369130 - Attachment is obsolete: true Attachment #8387429 - Flags: review?(jwalden+bmo) Whiteboard: [js:p2:fx31] → [js:p2:fx31][DocArea=JS] Comment on attachment 8387429 [details] [diff] [review] addressing review comments Review of attachment 8387429 [details] [diff] [review]: ----------------------------------------------------------------- Blah, I fail again at speedy review turnaround. :-( I promise I'll do better next time, seeing as I actually understand what's being done in this patch (well, in the tests) at this point and won't have to take hours to grok it. Brief summary is, the code looks good excepting trivial style nits, but I'd like some changes to the test to enhance readability and to not burden our testing infrastructure quite so hard. Should be pretty quick to fix. ::: js/src/jit-test/tests/basic/make-string-normalize-tests.py @@ +11,5 @@ > +import re, sys > + > +sep_pat = re.compile(' +') > +def to_code_list(codes): > + return ', '.join(map(lambda x: '0x{0}'.format(x), re.split(sep_pat, codes))) Hmm. I guess writing out numbers is okay to avoid the whole non-BMP UTF-16-ification issue here. But we should really be careful here -- all those statically-unnecessary bits of conversion seem ripe for removal if this test needs speeding up. @@ +35,5 @@ > + outf.write('[') > + for i in range(1, 6): > + if i > 1: > + outf.write(', ') > + outf.write('[{list}]'.format(list=to_code_list(m.group(i)))) While writing this data out as an array is all well and good, I would prefer if it were written out as an object for greater clarity of use. Something like: + if len(sys.argv) > 1: > + dir = sys.argv[1] For simplicity, let's just always require that a directory be specified on the command line. No need to worry about the cwd when running the script, then. ::: js/src/jit-test/tests/basic/string-normalize.js @@ +1,1 @@ > +/* Test String.prototype.normalize */ Could you rename this file to string-normalize-generateddata.js? There may well be more normalization tests over time, and I really don't want people piling on this single test every time -- or making it harder to find the test for some specific aspect of String.prototype.normalize. And, a somewhat more important concern. Right now, this test runs as a jit-test. That means that on Tinderbox, it runs a bunch of times, one for each JIT flags variant. For a very large test like this one, that's pretty wasteful. Please generate this file into js/src/tests/ecma_6/String/ (create the directory, with empty shell.js/browser.js and the appropriate jstests.list in it, if needed). You'll have to mark it as shell-only using "// |reftest| skip-if(!xulRuntime.shell) -- uses shell load() function" at the top of the file and |if (typeof reportCompare === "function") reportCompare(true, true);| at the end, to make it a valid jstest. @@ +3,5 @@ > +load(libdir + 'asserts.js'); > +load('tests/basic/string-normalize-input.js'); > + > +function runTest(test) { > + let [c1, c2, c3, c4, c5] = test.map(t => t.map(x => String.fromCodePoint(x)).join("")); With the object-ification mentioned in the test-data file, this becomes function codePointsToString(points) { return points.map(x => String.fromCodePoint(x)).join(""); } let source = codePointsToString(test.source); let NFC = codePointsToString(test.NFC); let NFD = codePointsToString(test.NFD); let NFKC = codePointsToString(test.NFKC); let NFKD = codePointsToString(test.NFKD); Less compact, sure. But I think a whole lot more readable. (The c1/r1 names are pure artifacts of the original data source -- we absolutely shouldn't preserve them in this test source.) @@ +4,5 @@ > +load('tests/basic/string-normalize-input.js'); > + > +function runTest(test) { > + let [c1, c2, c3, c4, c5] = test.map(t => t.map(x => String.fromCodePoint(x)).join("")); > + let [r1, r2, r3, r4, r5] = test.map(t => t.map(x => x.toString(16)).join(",")); And then these become function stringify(points) { return points.map(x => x.toString(16)).join(); } let sourceStr = stringify(test.source); let nfcStr = stringify(test.NFC); let nfdStr = stringify(test.NFD); let nfkcStr = stringify(test.NFKC); let nfkdStr = stringify(test.NFKD); @@ +62,5 @@ > + runTest(test); > +} > + > +/* not listed in Part 1 */ > +for (let x = 0; x <= 0x2FFFF; x ++) { Get rid of the space between x and ++. So. This test is overall O(0x2FFFF) runtime, which is kinda big. Granted each individual item there is smallish. But overall, is this going to end up being a relatively slow test overall? @@ +67,5 @@ > + if (part1.has(x)) { > + continue; > + } > + let c = String.fromCodePoint(x); > + assertEq(c.normalize(), c, "NFC of " + x.toString(16)); Add a |let xstr = x.toString(16);| and use that in all these asserts. ::: js/src/jsstr.cpp @@ +38,5 @@ > #include "jstypes.h" > #include "jsutil.h" > > #include "builtin/RegExp.h" > +#include "builtin/Intl.h" I before R, so one line earlier. @@ +844,5 @@ > } > #endif > > +#if EXPOSE_INTL_API > +static const int32_t SB_LENGTH = 32; Blank line after this, please -- and make it size_t, not int, because that's the natural type for a size/count. @@ +887,5 @@ > + Rooted<JSFlatString*> flatStr(cx, str->ensureFlat(cx)); > + if (!flatStr) > + return false; > + const UChar *srcChars = JSCharToUChar(flatStr->chars()); > + int32_t srcLen = mozilla::SafeCast<int32_t>(flatStr->length()); Add |using mozilla::SafeCast;| amidst the other using-mozilla::* at the start of the file, alphabetically, please, so you can kill the mozilla:: prefix. @@ +901,5 @@ > + return false; > + status = U_ZERO_ERROR; > + unorm_normalize(srcChars, srcLen, form, 0, > + JSCharToUChar(chars.begin()), size, > + &status); Let's do #ifdef DEBUG int32_t finalSize = #endif unorm_normalize(...); MOZ_ASSERT(size == finalSize || U_FAILURE(status), "unorm_normalize behaved inconsistently"); to catch any ICU bugs here. Attachment #8387429 - Flags: review?(jwalden+bmo) → feedback+ Thank you for your reviewing and letting me know about readability! > Hmm. I guess writing out numbers is okay to avoid the whole non-BMP > UTF-16-ification issue here. But we should really be careful here -- all > those statically-unnecessary bits of conversion seem ripe for removal if > this test needs speeding up. Yeah, converting code points to UTF-16 string may speed up "runTest" function. However, to compare code point in "not listed in Part 1" tests, raw code point may be better than UTF-16 string. > Could you rename this file to string-normalize-generateddata.js? Since those test files are moved into "String" directory, also removed "string-" prefix from their filenames, as other tests does (e.g. codePointAt.js). > So. This test is overall O(0x2FFFF) runtime, which is kinda big. Granted > each individual item there is smallish. But overall, is this going to end > up being a relatively slow test overall? Measured "real" time taken to run whole jstest on Mac OS X, results are following: opt build original: 1m5.168s add all normalize test: 1m5.969s : increase about 0.8s (1.2%) without "not listed in Part 1": 1m5.596s : increase about 0.4s (0.6%) debug build original: 11m45.046s add all normalize test: 11m56.427s : increase about 10s (1.6%) without "not listed in Part 1": 11m48.009s : increase about 3s (0.4%) Is this acceptable? Attachment #8387429 - Attachment is obsolete: true Attachment #8399904 - Flags: review?(jwalden+bmo) Oh good grief no. :-) That's an order of magnitude or two too slow. We can't have tests taking a full minute *anywhere*. It is absolutely imperative that this test be split up. Right now one of our longest tests, on my machine (new, good laptop, unplugged), takes 35.8s in a debug build. That's probably pretty close to an upper bound on how long a test can take. So this test is going to need splitting up for sure, at least for now. (Maybe ICU fixes will let us claw it back, sometime.) I wonder if it's normalization that's inherently slow, ICU's implementation of it, or what, exactly, here. Now, how to split it up. You don't have obvious cut points here. We may have to revert to the strategy of some of the tests in ecma_5/Object/, and have one shared file that a bunch of people include, that has a function that lets the execution space be partitioned reasonably. Or perhaps you have better ideas -- if so, go for 'em. I have more comments to post on the patch, but I should get this out in advance of them, in case you've got down time to burn now. :-) Sorry, my report was not clear. Normalization test does not take 1 minute or 11 minutes, those values are time taken to running all js tests (so, running 6225 tests). Single "normalize-generateddata.js" test takes 0.8 seconds with opt build, and 10 seconds with debug build. However, those values depend on machine speed, so I compared whole time with and without "normalize" test. Comment on attachment 8399904 [details] [diff] [review] addressing review comments Review of attachment 8399904 [details] [diff] [review]: ----------------------------------------------------------------- Ten seconds for one test is just fine. Whew! I was a little surprised at the times you were claiming; perhaps I should have been even more surprised and suspected something off. Just to note, because of the file moves and all, it's pretty hard to review this. :-) I had to manually munge patches to remove the generated-data file, and to deal with the file name changes. Not a big deal, just something to keep in mind when posting patches with new-file name changes in them, and for super-large patches where the generated parts of them dwarf the non-generated parts whose changes are the truly important parts. Things look good here! I'll make the one noted change, push this to try, then land if that pans out. ::: js/src/tests/ecma_6/String/make-normalize-generateddata-input.py @@ +3,5 @@ > +""" Usage: make-normalize-generateddata-input.py PATH_TO_MOZILLA_CENTRAL > + > + This script generates test input data for String.prototype.normalize > + from intl/icu/source/data/unidata/NormalizationTest.txt > + to js/src/jit-test/tests/basic/normalize-generateddata-input.js This path's wrong. Attachment #8399904 - Flags: review?(jwalden+bmo) → review+ Replaced double quotation in '#include "unicode/unorm.h"' with angle bracket, in jsstr.cpp. I've just learned of check_spidermonkey_style.py. It seems that I should run it before sending patch, in addition to jstests. Attachment #8399904 - Attachment is obsolete: true Comment on attachment 8404261 [details] [diff] [review] use angle bracket in include Review of attachment 8404261 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/jsstr.cpp @@ +55,5 @@ > #include "vm/String-inl.h" > #include "vm/StringObject-inl.h" > > +#if EXPOSE_INTL_API > +#include <unicode/unorm.h> <>-style includes will pick up system headers, but this isn't a system header, it's part of our build system. The real solution isn't to do this, but to modify the style-checking script for this file. See config/check_spidermonkey_style.py. I've done that locally, things work, will push a patch with that change. (I also moved the #include into alphabetical order among the other "x/Y.h" includes, for semi-consistency with builtin/Intl.cpp. I'm not sure we're 100% on board with that style, but might as well start a trend.) Thanks for the patch! We need to update and to account for this addition. Any chance you could start those pages off, then we can get people who wordsmith regularly to polish them to perfection? Thank you for your assistance! I updated those 2 pages, without compatibility table in "normalize" page. I'll update it and other pages (ES6 support, release notes, ...) after the patch was merged to mozilla-central. Assignee: nobody → arai_a Status: ASSIGNED → RESOLVED Closed: 7 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla31 Updated following pages: (compatibility table) By the way, it seems that I'm assigned to QA Contact, what should I do for it? Just verifying the functionality is available in Nightly? I don't think this needs any testing. QA Contact: arai_a Whiteboard: [js:p2:fx31][DocArea=JS] → [js:p2:fx31][DocArea=JS][qa-] (In reply to Tooru Fujisawa [:arai] from comment #19) > Updated following pages: Looking good! I tend to leave dev-doc-needed on these things so that MDN peoples can make sure such changes get hooked up in all the right places. (You may have found all of them already, but I'm not confident of that.) So let's leave it here, and if more is needed, they can poke us (and if not they'll just change the keyword themselves). > By the way, it seems that I'm assigned to QA Contact, what should I do for > it? > Just verifying the functionality is available in Nightly? My mistake, I meant to set Assignee. You're good here! Thanks arai and Waldo for the doc updates! Keywords: dev-doc-needed → dev-doc-complete
https://bugzilla.mozilla.org/show_bug.cgi?id=918987
CC-MAIN-2021-17
refinedweb
2,983
59.5
Draw a mesh. DrawMesh draws a mesh for one frame. The mesh will be affected by the lights, can cast and receive shadows and be affected by Projectors - just like it was part of some game object. It can be drawn for all cameras or just for some specific camera. Use DrawMesh in situations where you want to draw large amount of meshes, but don't want the overhead of creating and managing game objects. Note that DrawMesh does not draw the mesh immediately; it merely "submits" it for rendering. The mesh will be rendered as part of normal rendering process. If you want to draw a mesh immediately, use Graphics.DrawMeshNow. Because DrawMesh does not draw mesh immediately, modifying material properties between calls to this function won't make the meshes pick up them. If you want to draw series of meshes with the same material, but slightly different properties (e.g. change color of each mesh), use MaterialPropertyBlock parameter. Note that this call will create some internal resources while the mesh is queued up for rendering. The allocation happens immediately and will be kept around until the end of frame (if the object was queued for all cameras) or until the specified camera renders itself. See Also: MaterialPropertyBlock. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Mesh mesh; public Material material; public void Update() { // will make the mesh appear in the Scene at origin position Graphics.DrawMesh(mesh, Vector3.zero, Quaternion.identity, material, 0); } }
https://docs.unity3d.com/kr/2018.3/ScriptReference/Graphics.DrawMesh.html
CC-MAIN-2019-35
refinedweb
248
57.16
I’ve just spent an hour or so figuring out how to display an OpenCV image in a GTK+ 3 window that’s created through a Glade UI using Python 3. Since it’s not at all obvious even where to find the documentation, I’m writing it down here. Background – Python 3 and GTK+ Time was, to use GTK in Python you installed PyGTK. Those days are gone. What we have now is called GObject Introspection – or ‘gi’. What it does is pretty cool – it can expose any GObject-based library in Python. Any new GObject-based library that’s written is immediately available in Python. Just like that. What’s really really dumb about it is calling it ‘gi’. Try Googling that! So, here’s where the documentation is:. Once you’ve found the documentation, it’s pretty easy to use. Finding it is the hard part. So, show me how to do it Here’s code that takes an OpenCV feed from a webcam and displays it in a Glade UI. First, the Glade file: <?xml version="1.0" encoding="UTF-8"?> <!-- Generated with glade 3.18.3 --> <interface> <requires lib="gtk+" version="3.12"/> <object class="GtkWindow" id="window1"> <property name="can_focus">False</property> <signal name="delete-event" handler="onDeleteWindow" swapped="no"/> <child> <object class="GtkBox" id="box1"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkToggleButton" id="greyscaleButton"> <property name="label" translatable="yes">Greyscale</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <signal name="toggled" handler="toggleGreyscale" swapped="no"/> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkImage" id="image"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="stock">gtk-missing-image</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> </child> </object> </interface> The main thing to note here is that we’re using a GtkImage object to display the video. Each frame, we’ll replace the GtkImage’s image data with the frame from the camera. I’ve also added a button to switch between greyscale and colour. Note that the developers are all Americans and so spell ‘grey’ and ‘colour’ wrong. And here’s the Python code: import cv2 import numpy as np import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Gdk, GLib, GdkPixbuf cap = cv2.VideoCapture(1) builder = Gtk.Builder() builder.add_from_file("test.glade") greyscale = False class Handler: def onDeleteWindow(self, *args): Gtk.main_quit(*args) def toggleGreyscale(self, *args): global greyscale greyscale = ~ greyscale window = builder.get_object("window1") image = builder.get_object("image") window.show_all() builder.connect_signals(Handler()) def show_frame(*args): ret, frame = cap.read() frame = cv2.resize(frame, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC) if greyscale: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB) else: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) pb = GdkPixbuf.Pixbuf.new_from_data(frame.tostring(), GdkPixbuf.Colorspace.RGB, False, 8, frame.shape[1], frame.shape[0], frame.shape[2]*frame.shape[1]) image.set_from_pixbuf(pb.copy()) return True GLib.idle_add(show_frame) Gtk.main() Things to note here: - It’s quite important to handle the window’s delete_eventsignal. Otherwise it can be quite difficult to kill the program (Ctrl+C doesn’t work; try Ctrl+Z and then kill -9 %1). - I’m resizing the video to twice it’s native resolution. - To convert to greyscale, I first convert BGR to greyscale and then greyscale to RGB. GTK+ can apparently only handle the RGB colourspace, so you need to end up there one way or another. OpenCV natively generates BGR, not RGB, so even to display colour you need to do a conversion. - To get the data into a form that GtkImageunderstands, we first convert the numpy ndarrayto a byte array using .tostring(). We then use GdkPixbuf.Pixbuf.new_from_datato convert this to a pixbuf. The Falseargument is to say there is no alpha channel. 8is the only bit depth supported. frame.shape[1]is the image width and frame.shape[2]is the image height, and the last argument is the number of bytes in one row of the image (ie. the number of channels times the width in pixels). - We don’t display the pixbuf directly but instead display a copy of it. This gets around a wrinkle in the memory management which would otherwise require us to manually clean up the pixbuf object when we’re done with it. - The function gets called by the GTK idler; GLib.idle_add(show_frame)is adding the function to the list of functions called when idle. - You have to return Truefrom idle functions or they don’t get called again. That’s it!
https://nofurtherquestions.wordpress.com/
CC-MAIN-2018-34
refinedweb
814
51.65
import com.sleepycat.db.*; public interface DbAppendRecno { public abstract void db_append_recno(Db db, Dbt data, int recno); throws DbException; } public class Db { public void set_append_recno(DbAppendRecno db_append_recno) throws DbException; ... } When using the Db.DB_APPEND option of the Db.put method, it may be useful to modify the stored data based on the generated key. If a callback method is specified using the Db.set_append_recno method, it will be called after the record number has been selected, but before the data has been stored. The callback function must throw a DbException object to encapsulate the error on failure. That object will be thrown to caller of Db.put. The called function must take three arguments: a reference to the enclosing database handle; the data Dbt to be stored; and the selected record number. The called function may then modify the data Dbt. The Db.set_append_recno interface may be used only to configure Berkeley DB before the Db.open interface is called. The Db.set_append_recno method throws an exception that encapsulates a non-zero error value on failure.
http://doc.gnu-darwin.org/api_java/db_set_append_recno.html
CC-MAIN-2018-51
refinedweb
176
50.23
My first blog post from Word 2007 – let’s see how this goes. Introduction There are many features in ASP.NET that are unfortunately underused. Sometimes a feature gets looked over because it’s too complicated. Other times, like in the case of HttpHandlers, it’s because they are poorly understood. For the longest time I understood the concept and implementation of HttpHandlers, but I just couldn’t figure out under what circumstances I’d use them. Googling HttpHandlers it’s obvious to me that bad tech writers are squarely to blame. A shameful amount of examples are nothing more than “hello world.” The problem with such a limited example is that it leaves the reader thinking “so? I can do that with an aspx page!” Without understanding what problem space HttpHandlers are meant for, it’s impossible to get developers to use them. As an ASP.NET developer, HttpHandlers are important because they are the earliest possible point where you have access to requests. When a request is made to IIS for an ASP.NET resource (.aspx, .config, .asmx), the ASP.NET worker process internally creates an instance of the right HttpHandler for the request in question and effectively hands off the task of responding to the request. How does ASP.NET know which is the right HttpHandler for a given request? Simple, via configuration files, paths are mapped to http handlers. For example, if you open your machine.config file you’ll see a list of default mapping. For example: <add verb=”*” path=”*.aspx” type=”System.Web.UI.PageHandlerFactory” /> <add verb=”*” path=”*.config” type=”System.Web.HttpForbiddenHandler” /> <add verb=”*” path=”*.asmx” type=”System.Web.Services.Protocols.WebServiceHandlerFactory” /> So every time any .aspx page is requested, the PageHandlerFactory is left to fulfill the request. HttpHandlers can also be added or changed for specific sites in the web.config. Handlers aren’t just mapped to extensions, your own handler can be mapped to “HandlePingback.aspx”, in which case it, not the PageHandlerFactory, will be called upon. An HttpHandler is actually any class that implements the System.Web.IHttpHandler interface. To be of any use it needs to be mapped to a path. (I lie, PageHandlerFactory doesn’t implement IHttpHandler. Instead, it implements IHttpHandlerFactory. IHttpHandlerFactory defines a method named GetHandler which returns an IHttpHandler. We won’t cover IHttpHandlerFactories here, but it’s basically a layer between the internal ASP.NET process and the handoff to the HttpHandler. Either way, in the end you end up with a class that implements IHttpHandler). The IHttpHandler interfaces defines the very important and aptly named ProcessRequest. Basically, this is ASP.NET saying “hey you! Process this request!” Built-in Handlers If we look at the most important HttpHandler, the System.Web.UI.Page class (yes, the same one that all your pages inherit from), we really start to get a good feel for what an HttpHandler is responsible for. Looking at the internals of the Page class and starting from the ProcessRequest function, we quickly get to a ProcessRequestMain function which really starts to interact with stuff you do on a daily basis. Look at some of the stuff that happens in ProcessRequestMain: … base.InitRecursive(null); if (context1.TraceIsEnabled) { this.Trace.Write(“aspx.page“, “End Init“); } if (this.IsPostBack) { if (context1.TraceIsEnabled) { this.Trace.Write(“aspx.page“, “Begin LoadViewState“); } this.LoadPageViewState(); if (context1.TraceIsEnabled) { this.Trace.Write(“aspx.page“, “End LoadViewState“); this.Trace.Write(“aspx.page“, “Begin ProcessPostData“); } this.ProcessPostData(this._requestValueCollection, true); if (context1.TraceIsEnabled) { this.Trace.Write(“aspx.page“, “End ProcessPostData“); } } base.LoadRecursive(); … As you can see, it’s this method that’s responsible for causing all those ASPX events, such as OnInit and OnLoad, to be raised. In essence, the Page class does what it’s supposed to do: it’s handling the request. Another handler we saw listed above is the HttpForbiddenHandler (which is a straight handler as opposed to a HandlerFactory). A number of paths are mapped to this handler – generally files that might compromise a security risk if left publically accessible (like .config, .cs, .vb, .dll, …). The ProcessRequest for this handler is to the point: public void ProcessRequest(HttpContext context) { PerfCounters.IncrementCounter(AppPerfCounter.REQUESTS_NOT_FOUND); throw new HttpException(0×193, HttpRuntime.FormatResourceString(“Path_forbidden“, context.Request.Path)); } Why use a handler? There are likely few times where you have to use a handler. Almost anything you can do in a handler, you could simply create an aspx page to take care of. So why bother? There are two main reasons. First and foremost, HttpHandlers are far more reusable/portable than pages. Since there’s no visual element to an HttpHandler (no .aspx), they can easily be placed into their own assembly and reused from project to project or even sold as is. Secondly, the Page handler is relatively expensive. Going with the “Hello World” examples, if you do that in a page you’ll end up raising a number of events (onInit, onLoad, onPreRender, onUnload, …) and make use of a number of ASP.NET features such as viewstate and postback. In most cases, the performance hit is negligible, but it nonetheless highlights that you’re using the page framework when you have no need to. Real Examples The first example to look at is the TrackbackHandler than’s part of CommunityServer 1.1. If you go to and open 1.1/Blogs/Components/TrackbackHandler.cs you’ll see the relevant source code. The purpose of this handler is to track pingbacks made to blog entries. Most blog engines will automatically send a pingback to any linked posts. This means that blog engines must also have a way to capture these pingbacks and record them. There’s more or less a standard between how the communication is supposed to happen, but each blog engine is really on its own as far as implementation. Without spending too much time in the code, we can see that the handler looks for a number of POST parameters and creates the trackback based on what’s passed in. There’s absolutely no reason why all of this couldn’t be done using an ASPX page. But as I’ve already mentioned, that would force the entire ASPX page framework to be invoked. Additionally, this handler doesn’t even have a visual element – so a page doesn’t make too much sense. (you can look at the web.config to see how the handler’s added). Another example is my open source AMF.NET project which makes it possible for a Flash application to communicate with server-side ASP.NET code. The AmfGetwayHandler deserializes the AMF input (AMF is a proprietary binary protocol used by Flash), executes the right server side .NET function and returns a serialized response. Again, a single ASP.NET page could be used to accomplish the same thing, but then it would be impossible to package AMF.NET as a single assembly. Another common example you’ll run across is using HttpHandlers to generate RSS feeds. Many applications will map “Rss.aspx” to an HttpHandler which generates a XML feed. Why not to use HttpHandlers. IIS 7 promises to let us write ISAPI filters in .NET (or extend HttpHandlers beyond the ASP.NET pipeline depending on how you look at it), but that’s still a ways away. Thanks a lot for publishing it. God bless Still pretty foggy on the advantages and uses of an http handler. I’d say this article falls into the “bad tech writers are squarely to blame” catagorie. You’ve jumped from hello world to proprietary binary serialization. Very helpful infos about HttpHandlers. Thanks a lot for publishing it. Great article. Thanks for the information. Nice stuff… thanks for sharing… Thank Youuu plzz check my client code also….if client sends a string server is recieveing (NameValuePair) but if it sends a file it is not recieving… File input = new File(“XMLFile1.xml”); RequestEntity entity = new FileRequestEntity(input); PostMethod post = new PostMethod(url); post.setRequestEntity(entity); HttpClient httpclient = new HttpClient(); try { int result = httpclient.executeMethod(post); if (result >=200 && result <=210 ) { System.out.println(“Request Timed out”); System.err.println(“Method Failed:”post.getStatusLine()+ post.getStatusText());} System.out.println(“Response status code: “+result); System.out.println(post.getResponseBodyAsString()); } catch(Exception e) { System.out.println(“Response body: “); } finally { post.releaseConnection(); } I’m getting the response code as 200. but no files are uploaded at the server. i checked all permissions they are fine and firewall is turned on. plzz help out with this…….. thanq in advance Sucharitha: Your code works fine for me. Have you been able to step through the code and check for any exceptions? The only thing I can really think of is a filepermission error. Maybe you can try to save the file directly in the root of the website (for now), and see if that works. You might want to try using FileMon (google “FileMon”, it’s free) and seeing if there’s a permission problem. hey, please answer any question its very important to me. how server will save a file which is posted to it by the client in asp.net.? please do reply my other question is i wanted to send a file to server. i used post method to do so. the server should save that file. but this is not happening. i posted the file sucessfully. i got the response code as 200 but the file is not saved in specified directory. at server. in webform page_load event i wrote: Dim f As String Dim file For Each f In Request.Files.AllKeys file = Request.Files(f) file.SaveAs(“c:\test2\” & file.FileName) Next f but this code is not working. how can i save the incomming file? They go in …. You can only have 1 httphandler per type, so if you specify your own handler for .aspx files, the default handler will no longer process the page normally (which I can’t imagine is what you actually want). Not sure about your other question with respect to file uploads. I’m sorry…that some sentences are incomplete in my previous post. Actually I wanted to create a my own handler to process requests for .aspx pages. my doubt is ?or is there any particular location? where should i add in web.config file? where i want in because i’m getting an exception whenever a request is sent from client. & the other question is about whether i can save files sent by client with default Handler in pageFactoryHandler. if so, could you please tell me how a server will handle such type of requests. i have doubt regarding where to add in web.config file. i want to create a new handler for .aspx pages. & one more question about default handler in PageFactoryHandler saves the files to the specified directory sent by client. if so, could you please how server will handle such type of requests? thanku in advance I tried following mapping in webconfig-file inside system.web: ..and it works for me! / Banjo Good article. How can I ‘redirect’ (url rewrite) request for the httphandler ? Is it possible to do that url rewrite in webconfig file? How about those request querystring parameters which are not static? Thanks, Banjo i didn’t like your comments maybe i am idiot Stephen: Sounds right. It’s pretty much being used as a soap-less webservice at that point, which is fine. You end up having a custom service sitting on top of a very powerful multi threaded server (Windows 2003/IIS). Great article. I’ve been asked to write a web service that serves up multipart MIME messages – it’ll be consumed by a Java client that expects SwA attachments. I’m thinking that I could use a HTTP Handler to construct and return the multipart MIME response. Is this an appropriate use of HTTP Handlers? Richard: I can’t help but think a socket server would be better off handling this type of task. I even see something UDP based as being highly efficient and scalable and capable of circumventing firewalls. You might want to check out: along with the comments (especially Josh Twist’s post with the links). I’m pretty sure that ThreadPool in ASP.NET is per site/app domain…HttpHandlers are well below this – so yes, they would all share the same one. Nice article. But the problem I find with the ProcessRequest is that it fires more than once. Hi Karl, Great article. I’ve got a particular problem that I hope HTTPHandlers can help me to achieve. I want to mimic the Exchange 2003 SP2 Push Email feature for mobile devices. My research has led me to discover that MS do this with a ISAPI extension. Basically the WM device makes a HTTP/HTTPS call to this extension. The time-out of this request is set high. The extension then holds on to the request through-out this period until it finds that an email is available. If it does it just responds to the client. Otherwise it returns nothing before the time-out expires. Doing this is kind of resource intensive and bad news for ASP.NET. I’m not a ASP.NET developer but when I started to look at doing this with ASP.NET everything told me not to play with the thread-pool. This means I am restricted to 20 threads and that is no good for this app. Obviously I could increase the pool but I’m worried that performance will be severly effected. Ok, so the performance issue is never going to really go away, but are HTTPHandlers limited to the same thread-pool (My first guess is yes). I hope I’m wrong. Am I looking in the right place for this solution? TIA Richard hi karl this article was very helpful to me in understanding HTTPHandler Thanks Ankit Dennis: no. Handlers are meant to handle specific requests. What you’d be more interested in is an HttpModule. Except that’s only fed into the pipeline of files being served by the ASP.NET module (via extension mapping in IIS) – which may very well be good enough for you. If that’s what you want, then use an HttpModule and hook into the BeginRequest event…I’ve blogged about HttpModules as well. Until IIS 7.0 comes out, the only solution to filter ALL requests (images, .js, .css,…) is to use an ISAPI filter written in C++. Is it possible to intercept all http requests to an IIS site and determine the client browser with an HTTPHandler? Jack, not sure. You should get a IHttpHandler instance out of that (which is actually a page), which you should be able to call ProcessRequest on, which expects the HttpContext. If you’re handler is using GetCompiledPageInstance, why not just use the normal page framework? Without knowing more, it seems like you are using HttpHandlers for the wrong reasons. Also, from the MSDN documentation: “This method supports the .NET Framework infrastructure and is not intended to be used directly from your code.” I have PageParser.GetCompiledPageInstance. The result is that it returns a blank page? Why is that? how can i use HttpHandler in dot net nuke Seems to me that you you are trying to POST to a file that has a handler set up for GET only. From your explanation, I figured you were opening directly to a .doc file, so something like: By default .doc files aren’t handled by ASP.NET. Did you add a handler in your web.config or your machine.config? Are you using a different extension? What subfolder are your .doc files in? Even if I’m on the right track, it’s very odd because window.open(“xxx”) should be doing a GET…so I don’t see why you’d be getting back a POST. I’d love to see an example…you can zip something up and mail it to kseguin FuelIndustries . com Karl, the codebehind here means the .vb file of the aspx page. Currently I am not having any httphandler. The error is “Path ‘POST’ is forbidden”. I believe that this occurs because of trying to open a word document. If it is a html document i am not getting any error. I believe that you will be clear on the error.. Rajdev…I’m not reallly clear…you say you are doing this in codebehind…but I take it you mean within your httphandler/ What error are you getting specifically? Hi, I am trying to open a word document from code behind file. I am using Response.Write(““. But i am getting http handler error. Is there any way to solve the problem? It seems that we have to write our own handler. Is there any alternative method available? Thanks in advance Nice Comments Stefe: You normally set up your handler to handle a very specific request, say “rss.axd” or “imageGenerator.aspx”. At most you might map it to a full subfolder, like “ajax/*”. I’m not sure exactly when the HttpHandler is invoked with respect to the HttpModule. It’s possible that in BeginRequest the HttpHandler hasn’t been invoked yet (it’d be fairly easy to find out), so yes, you could circumvent it, but then your module will fire for every hit..so if you stop the httphandler chain, your .aspx handlers (PageFactoryHandler) will never fire and you’ll cripple your asp.net. yes, there might be ways around this (like checking the page url in BeginRequest and doing different stuff)..but why? Well, also with handlers you tipycally end up serving all requests … I mean, you typically register your handler to serve all aspx requests ending up obtaining the same result as for modules. What you say about having each request being finally served by a handler is generally true but not necessarily true. Suppose I have a module that on a request responds doing something and then rising the EndRequest event. That would end the request. And no handler has been invoked. Am I right? Steve, There is some similarity between the two. The main difference though is that IHttpHandlers handle specific requests, IHttpModules service all application requests. It’s probably possible to do everything in an HttpModule -but the request will still use an HttpHandler, so you still want to take advantage of a custom handler in some situation (i.e., you don’t necessarily always want to inherit all the stuff the default handler does). I’m kind of confused as to when use modules and when use handlers … by using module you should be able to do all that you can do with handlers, right??? Chris: You can remap .HTML files to be processed by ASP.NET. I’d then use an HttpHandler to URL Rewrite to an aspx page (static.aspx) which has your master page. Before URL Rewriting, store the actual requested page in the Context.Items collection, which you can retrieve from static.aspx to fetch the right file to drop inside the content placeholder. The only problem with this approach is that it’ll apply this change to all .html files. Maybe you can rename them to ashx? Or use .htm so that you can still use .html… Great Article. I had one question that somewhat relates to Stuart’s question. I want to go about creating a custom handler that will take *.html (i do have access to IIS), find the existing .html static page, place the master page on it and then feed it to the user. The reason i want to do this is i am currently in the process of converting a static .html website to dynamic asp.net. However, not all pages will be static (most won’t be at the start) and they will feel more comfortable continuing to generate static pages in the future. My hope was i could create this handler so that they could continue to generate static, plain, html pages that will inherit the MasterPage automatically. They won’t even need to use Visual Studio this way. Please let me know if you think i am going about this wrong. Any help would be appreciated. Zidane: It sounds like you want to read in an RSS Feed into .NET. Take a look at and i want to access RSS xml generated successfully from .net code. when i acces it from xmlreader it give me some qouate error however file hav not any sysntax error as it is working when placed on local path. i try to use the application configuration in iss to mapp .aspx to .rss extension and Httphandlers i didnt get how to do this Another limitation of HttpHandlers is that they don’t support the ASPCompat page attribute, which is key when interacting with legacy COM components. a newbie ? – problem with WebResource.axd >> maybe not entirely related to this post. anyway i don’t know why the server (on LOCAL (Development Server [internal VS2005 server] because after installing FrameWork2 my IIS5.1 doesn’t parse .aspx pages anymore!!) & tested on some free asp.net2 hosts) doesn’t inject the client scripts to the page needed to run functions like “WebForm_DoPostBackWithOptions” (but it injects Pictures like Minus & Plus used controls like TreeView). i googled & found that alot of people have the “no script injection by WebResource.axd” problem but no solution! any clue here ? Stuart: Not that I know of. It doesn’t make sense to do that anyways. The point of the HttpHandler is to be handle requests outside of the page framework. Master Pages are a part of the page framework – if you need them, you should just be using the default page handler. I’ve written an HTTPHandler that ouputs some HTML content, just a few links and some contact details. Is it possible to use a MasterPage.master with the resulting HTML? Nice article. ELMAH rocks we have our own flavor that is used in every application and has proven so valuable we have created our own entlib application block with modules and handlers at the base. Another unused 2.0 feature: HEALTH MONITORING Paul, not 100% sure. I’ve given it a quick try on a simplified form and everything works aok… You should take a look at: Also, is this converted from a 2003 project? Have you looked at the source code to see what javascript IS being generated by asp.net? Have you tried without cross-page posting (if you aren’t cross page posting, then you likely ARE converting from 2003 ‘cuz that can happen…) I’m still not sure I like the idea of using HttpHandlers for url rewriting. HttpModules seems better able to handle that… I was just wondering if this neat trick in ASP.Net 2.0: To force all documents to pass the authentication doesn’t also make it possible to let zip-files being passed through IIS and thus you own HttpHandler… Cheers, Wes The other thing I should mention is that the following javascript is missing from the form tag through the httpheader, but is present when going directly to the page: onsubmit=”javascript:return WebForm_OnSubmit();” I am in the process of trying to get a httphandler working for the purposes of URL re-writing. I have it to the point that it is successfully processing an aspx page and returning the html correctly, but not the scripts for postbacks. When viewing the headers using the ieHTTPHeaders utility, I find that when I view the target page directly, I see several headers such as the following that refer to WebResource.axd, but when looking at the source after re-writing through the httphandler, these are missing. GET /BrokerExtranet/WebResource.axd?d=PdBB8Jr4AQt4rLgwO2gTUA2&t=632737057228488874 HTTP/1.1 Accept: */* Referer: Accept-Language: en-au Accept-Encoding: gzip, deflate User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727) Host: test.pncs.com.au Connection: Keep-Alive Cookie: ASP.NET_SessionId=pozkme45kr1pob454owdjyis The end result is that trying to click my submit button I get the following error: Microsoft JScript runtime error: ‘WebForm_PostBackOptions’ is undefined with the code for my button: onclick=”javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("loginButton", "", true, "", "", false, false))” I have scoured the net and there are several references to this error being related to WebResource.axd, but none on how to solve the problem within a httphandler. Any help would be much appreciated! Travis: I probably gotta put HttpHandlers at, or very near, the top of the list. Quickly, others that come to mind (all of which I plan on blogging/writing about at some point) would be: -HttpModules -Custom Server Controls (we could lump in a lot of other OO features into that one) -Configuration sections -Event driven communication between pages/user controls -OnItemDataBound (less and less of an issue) -HttpContext.Current.Items I’m sure I could come up with some other big ones… “There are many features in ASP.NET that are unfortunately underused. Sometimes a feature gets looked over because it’s too complicated. Other times, like in the case of HttpHandlers, it’s because they are poorly understood.” What other features are you speaking of besides HTTPHandlers? Lee: If your challenge is unruly querystring’s, I think your best bet might be to use url rewriting in an HttpModule. Check out: for more details. Hi, all very interesting this…and mind blowing at times for somebody new like me! Here’s my question… I’m trying to optimise my dynamic (.aspx) site for search engines and know there is a way on Apache servers of configuring the HTACCESS file to run one file type as another. This example I’ve been given is for processing PHP extensions as HTML… AddType application/x-httpd-php .htm AddType application/x-httpd-php .html I’ve read about how the web.config file and http handlers can be configured to map any file extension to the appropriate handler. My site has lots of query strings and dynamically called content which is bad for search engine rankings. If I can implement a solution to ensure search engine spiders see my pages as static html pages, it will give me great advantages. But I need to keep my dynamic methods. So my question is – can ASP.NET on IIS be configured to effectively process .ASPX pages as though they are .HTML? In my opinion using the Transfer method to switch the execution to a custom handler doesn’t make much sense. If you’re trying to implement a sort of url mapping then use url rewriting. Otherwise don’t do a Transfer but just make a new request either using a Redirect or a ChildRequest from the code. I was under the impression that you couldn’t Server.Transfer from one type of handler to another. Check out this link for some more insight on this. OK. My Original web.config file had however when i was trying to use Server.Transfer it gave me error: “Error executing child request”. Which then made me add the other handler this made the Server.Transfer to work however it broke up something that was using the ajax handler.. how can i get both to work.? please help. Manpreet, I think I can’t understand what you’re trying to achieve… .aspx files are already mapped to the PageHandlerFactory class, why are you trying to remap them? “when i remove the first one then my reports does not work as the page uses Server.Transfer and gives me an error – Error executing child request” What does this mean? Hi Karl, I am trying to use http handlers but am really confused… i have put this in the web.config when this is the case my Ajax handler does not work….. when i remove the first one then my reports does not work as the page uses Server.Transfer and gives me an error – Error executing child request. how can i get both to work? any help would be greatly appreciated…. Regarding your closing comment… >>. << There’s an old trick that can really help in getting around this problem entirely and that is to rely on Request.PathInfo (a.k.a PATH_INFO server variable). What you need to do is generate a URL where your handler’s ASHX reference appears somewhere *before* the last component of the path. In other words, something like this: That’s a perfectly valid URL. When it gets cracked on the server, however, your HTTP handler that is mapped to the path "myhandler.ashx" will get called back and Request.PathInfo will contain "/report.zip" as the remainder path. Now you can do what you like in your handler, such as make that counter bump up, do authorization checks or what have you. Once you’re ready to send down the ZIP file, just call Response.TransmitFile and IIS will take care of the rest. You could alternatively stream the resource down from a database if that’s where it resides. To finish up, your handler should also set two response headers, namely "Content-Type" and "Content-Disposition". Taking the URL example above, these would read "application/zip" and "attachment; file=report.zip" respectively. The second header tells the user-agent/browser that the response entity is like an e-mail attachment and, if saved to disk, should be given the file name report.zip. So it’s like doing a rename on the client file system from the server-side. That’s it. Hope this helps. BTW, Scott Mitchell and I wrote an MSDN article [1] some time ago on the same essence as your piece here. There’s even a complete sample in there called ELMAH [2] that demonstrates adding error logging to a running ASP.NET application using modules and handlers and a binary deployment model only. The idea of ELMAH was to provide a fairly real-world example rather than, as you put it so finely :), yet another shameful “hello world” one. You’ll also see a usage of Request.PathInfo in there (see ErrorLogPageFactory.cs if you’re interested) though not exactly in the same context as outlined above. [1] [2] They are good for retrieving embedded resources too, especially when you have to do some work on them before to send them back through the pipeline, because the new ASP.NET 2.0 WebResource feature is not as flexible. I used this approach when building a web control library, to embed many Javascript scripts in one shot. The request is in the form: GET MyHandler.axd?q=script1.js$script2.js$scriptN.js This way I have been able to manage on the server-side the content to return, instead of writing a thousand times RegisterClientScriptResource, which in turn performs that thousand number of requests to the server (ok, it caches them…) In general, they are useful whenever you need to apply some personalization to the response sent to the client. Great post! I use handlers all the time. Very handy when working with non-html output. Good post. The lack of use of HttpHandlers (when appropriate) has always frustrated me. The Page class (.aspx) should only be used when you need the functionality it supplies (just like any class). Too many people do not know what is in their toolbox, so they use the same tool for every job. When you are just spitting out xml, or some binary data, or anything else that does not involve a tree of controls w/events/postback behavior, you should choose an HttpHandler over a Page. Finding Response.Write in a Page derived class is a good sign you should be using an HttpHandler instead (or studying up on ASP.NET’s use of Controls). We use them to prevent unauthorised access to the attachments on our site. I would have used an ISAPI filter but lifes a bit too short for that sort of thing We have a system that serves up different pages and one thing that we also did was create our own PageHandlerFactory that uses the PageParser.GetCompiledPageInstance which allows us to have different page instances based on particular master pages. What you basically end up with is a completely blank instance of the Page class and you can add a bunch of content controls to the Controls collection. Thanks for the example Jeff. I forgot the “IsReusable” property of the IHttpHandler…This tells ASP.NET whether an instance can be reused for another request. To be honest, I’m not 100% sure what determines whether you should return true or false. I’ve always read that you can generally return true. I assume you should only return false (like the Page class does) if you’re instance is pretty much becomes garbage after a single execution (an example that pops in my mind is if you also implement IDisposable or something) If you use the .ashx extension you are not required to register a new extension to asp.net. We use this extensively for our graphics such as .gif/.png/.jpg. Whenever we render a link to a graphic we append .ashx to the file path of the image. This allows us to do things like pass in a width and do a dynamic resize on that image. You just need to create an ashx file with the following at the top: < %@ WebHandler Language="C#" Class="StyleSheetHandler" %> Then just add a class like this: public class StyleSheetHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.Write(“Test”) } public bool IsReusable { get { return true; } } } This allows you to get around the pesky “extension register” on my hosts. uhhmm…not sure if I expected my formatted code to show up perfectly or not…
http://codebetter.com/karlseguin/2006/05/25/httphandlers-%e2%80%93-learn-them-use-them/
CC-MAIN-2014-42
refinedweb
5,650
67.25
xpacleanup man page XPACleanup: release reserved XPA memory Synopsis #include <xpa.h> void XPACleanup(void); Description When XPA is initialized, it allocates a small amount of memory for the access control list, temp directory path, and reserved commands. This memory is found by valgrind to be "still reachable", meaning that "your program didn't free some memory it could have". Calling the XPACleanup() routine before exiting the program will free this memory and make valgrind happy. See Also See xpa(n) for a list of XPA help pages Referenced By xpa(n). July 23, 2013 version 2.1.15 SAORD Documentation
https://www.mankier.com/3/xpacleanup
CC-MAIN-2017-26
refinedweb
101
57.87
FreeRTOS Task Task The implementation of the task has no return value as follows, and takes a pointer of type void as an argument. A task is implemented as an infinite loop, and when it ends, it explicitly deletes the task. void ATaskFunction( void *pvParameters ); void ATaskFunction( void *pvParameters ){ while(1){ // do something } } GR-ROSE is a single core, so only one task can be executed at a time. The running state is "Running", and the non-running states are "Blocked", "Ready" and "Suspended" as multiple states. Tasks are executed from the highest priority, and in order if they have the same priority, they are executed in order. If a higher priority task becomes "Ready" while executing a lower priority task, the task to be executed is switched. When processing such as waiting for vTaskDelay (), delay (), or Queue in a task, it will be in "Blocked" state. If the lower priority task is "Ready", it will be in "Running" state. In GR-ROSE SDK, the priority can be specified in 7 levels (0 to 6). The priority of the main task on which setup and loop are executed is 3. Based on the above, we will publish one sample. #include <Arduino.h> #include "FreeRTOS.h" #include "task.h" void task1(void *pvParameters); void task6(void *pvParameters); void setup(){ Serial.begin(9600); delay(3000); // wait for display serial monitor xTaskCreate( task1, "TASK1", 512, NULL, 1, NULL ); xTaskCreate( task6, "TASK6", 512, NULL, 6, NULL ); } void loop(){ static uint32_t ctime = millis(); if((millis() - ctime) > 5000){ delay(1000); // entering Block State } } void task1(void *pvParameters){ while(1){ Serial.println("task1 running"); delay(1000); // entering Blocked State } } void task6(void *pvParameters){ while(1){ Serial.println("task6 running"); delay(1000); // entering Blocked State } } The above sample generates task 1 with priority 1 and task 6 with priority 6. It is a task which performs serial output every one second each. After 5 seconds, delay () is executed in the priority 3 loop and it becomes "Blocked". When it is executed, it will be displayed on the serial monitor as follows. The function "setup" creates task1 and task6, but only task6 which has higher priority than loop is executed for 5 seconds. After 5 seconds, execute delay() in loop and it will be in "Blocked" state, so it can be seen that task1 is also executed. task6 running task6 running task6 running task6 running task6 running task6 running task1 running task6 running task1 running task6 running task1 running task6 running task1 running Here is an example of deleting a task: The periodic task is executed every second, but when it is executed 5 times, vTaskDelete is executed and the task is deleted. The task consumes HEAP memory as shown in Overview, so delete it when it is not needed. #include <Arduino.h> #include "FreeRTOS.h" #include "task.h" void periodic(void *pvParameters); TaskHandle_t periodicHandle = NULL; void setup() { // put your setup code here, to run once: Serial.begin(9600); delay(1000); xTaskCreate(periodic, "PERIODIC", 512, NULL, 2, &periodicHandle); } void loop() { // put your main code here, to run repeatedly: delay(1); } void periodic(void *pvParameters) { while (1) { static int count = 0; Serial.println(count); delay(1000); count++; if(count == 5){ vTaskDelete( periodicHandle ); } } }
https://www.renesas.com/tw/zh/products/gadget-renesas/reference/gr-rose/rtos-task.html
CC-MAIN-2019-43
refinedweb
532
54.22
Configuration Options in Code or INI Files? Recently I was admonishing my colleagues for embedding configuration options into Python code. I thought it was commonly accepted that all configuration options should be in text files that are not code. Typically in Python world this seems to be something like INI files, which are parsed to get to the values. Nowadays JSON could also be an option. After polling the opinions of some other colleagues, friends and others, it seems the situation is not as clear as I thought. I think I have personally embedded configuration options in one case in production code, and felt dirty afterwards. But maybe I have just been living in the past, thinking the thoughts of a C/C++ programmer… Some of the first things that come to my mind when thinking why configuration options should be in regular text files: - Text (INI format especially) is easier to edit than (Python) code - It is easier to reload configuration options from a regular text file - Code files should be read only - If you make a mistake in a code file, you will get an error that only programmers will understand - Loading configuration options from a regular text file offers better security The counter arguments to those could include: - It is actually simpler to write complex structures in Python than INI, and you can use all of Python’s power - It can be pretty easy to reload Python modules as well (although you have to be careful with imports). Often there is no need to reload options. - Of course the code file would be made writable for only a brief period of time - If the target audience of the program are programmers, it makes perfect sense to show errors only programmers could understand. And you could even catch some errors and give nicer messages to end users. - The configuration file can be edited only by the super users (or the legitimate intended users of the program, and the program can not grant them more power than they would otherwise have) I can think of a bunch more reasons but they all seem to follow pretty much the same pattern, with same pattern of counter arguments. So if the intended users of the application were software engineers, systems administrators or others capable of deciphering Python tracebacks, and the program couldn’t grant the user more access to the system than they would otherwise have, I guess I don’t have a strong case against config-in-code. But in most use cases I think the security implications are actually a pretty strong reason why config should be separate from code. You can lock down code as tightly as your system allows, and you can be as strict about parsing and loading values from text config file as you like. This doesn’t eliminate all security issues from config files, but it adds a layer of protection. bc: Get (mostly) the best of both worlds: use YAML. I’m not sure if you can lock down YAML from a security POV though.December 8, 2008, 1:58 am JWK: A compromise i like is first check for the configuration file / environment value. If it is not there so use the internal value and add a log entry for that action. At least you can trace that behavior.December 8, 2008, 2:36 am bc: OK, I checked. PyYAML has a “safe_load” function which will only create basic types, hence avoiding the security-risk of loading an untrusted file which could create a malicious object.December 8, 2008, 5:08 am James Thiele: -1 on config options in code. Big downside is merging changes. Alice writes a program and gives copies to Bob, Carol, Dan and Ellen. They all edit their copies for personal config options. Alice finds a bug and sends out a new version. Now everybody has to do a diff/patch cycle. If the config options were in a seperate file they’d just drop in Alice’s new code.December 8, 2008, 6:42 am mkay: +1 on options in code Most “options” aren’t really options at the office. Some, like the Java class we should use to do X don’t have any other values, but the argument is that “someday we’ll need that”. Usually they are instances of over-engineering. I opt for the opposite and write straight-forward code. When something needs to become an option, then I move to a configuration file. Anything outside of the code needs a *lot* more documentation, maintenance, QA, etc.December 8, 2008, 6:50 am Marcos Dione: ini files are really the wrong solution for complex things. see, config files are mostly a way to reprogram a certain tool; it’s like the switches in the first programmable computers. implementing complex decisions with switches is difficult. for those cases I prefer coding your own decisions. an example I’m toying with right now is a email server; it’s routing table is complex enough to be more simple to implement by code rather than switches.December 8, 2008, 7:51 am Kevin: It’s python, do both. Write your INI in Python – keep the config separate from the code, but also keep it in code. Just import the config file. For long running daemons you can attempt to reload the file every x minutes or some such. This allows you to update your config without touching the application code, but gives you the power of Python for storing your config info in Dicts and such.December 8, 2008, 9:11 am Cory: I think you’re missing the most important reasons to use ini files, although you kind of dance around it. The reason not to use code is because ini files are not code. This is important. INI files are a declarative syntax; they cannot have ifs or loops. Files that have a pure-declarative syntax have the following advantages: – they can be losslessly transformed into another format (including code). Code files do not have this advantage; there’s always the risk that they contain an if block, which means you need an interpreter with a loaded namespace and runtime environment to understand them. – they can be machine-edited. Code can’t be machine edited in the general case, see above. You will want this if your package needs to be packaged in a .deb or .rpm for example; upgrade scripts take care of making edits around your users’ changes. – They can be safely transferred (provided you filter out passwords). Sending code over a wire is risky to the person receiving it, who must then interpret it. Declarative ini files aren’t code, and can be safely parsed.December 8, 2008, 9:41 am sage: I definitly use yaml too instead of ini files.December 9, 2008, 12:32 am yaml is easier to read than json (while every json file should be read by a yaml parser too) yaml is easier to abstract complex structure than ini files. zaur: Last time I been considering a Python Object Notation (PyON). It bases on python syntax. It could be considered as readable reconstructable representation for python objects. It dosn’t use eval/exec for reconstruction; it bases on AST interpretation. I think it could be usefull for configuration purposes. P.S. PyON project is very young but you already able to do interesting things 🙂December 19, 2008, 10:53 am
https://www.heikkitoivonen.net/blog/2008/12/07/configuration-options-in-code-or-ini-files/comment-page-1/
CC-MAIN-2020-10
refinedweb
1,245
61.77
YARD: Yay! A Ruby Documentation Tool IRC: irc.freenode.net / #yard Git: Author: Loren Segal Contributors: License: MIT License Latest Version: 0.8.7.3 Release Date: November 1st 2013 Synopsis. Feature List 1. RDoc/SimpleMarkup Formatting Compatibility: YARD is made to be compatible with RDoc formatting. In fact, YARD does no processing on RDoc documentation strings, and leaves this up to the output generation tool to decide how to render the documentation. 2. Yardoc Meta-tag Formatting Like Python, Java, Objective-C and other languages: YARD uses a '@tag' style definition syntax for meta tags alongside regular code documentation. These tags should be able to happily sit side by side RDoc formatted documentation, but provide a much more consistent and usable way to describe important information about objects, such as what parameters they take and what types they are expected to be, what type a method should return, what exceptions it can raise, if it is deprecated, etc.. It also allows information to be better (and more consistently) organized during the output generation phase. You can find a list of tags in the Tags.md file. YARD also supports an optional "types" declarations for certain tags. This allows the developer to document type signatures for ruby methods and parameters in a non intrusive but helpful and consistent manner. Instead of describing this data in the body of the description, a developer may formally declare the parameter or return type(s) in a single line. Consider the following method documented with YARD formatting: # Reverses the contents of a String or IO object. # # @param [String, #read] contents the contents to reverse # @return [String] the contents reversed lexically def reverse(contents) contents = contents.read if contents.respond_to? :read contents.reverse end With the above @param tag, we learn that the contents parameter can either be a String or any object that responds to the 'read' method, which is more powerful than the textual description, which says it should be an IO object. This also informs the developer that they should expect to receive a String object returned by the method, and although this may be obvious for a 'reverse' method, it becomes very useful when the method name may not be as descriptive. 3. Custom Constructs and Extensibility of YARD: YARD is designed to be extended and customized by plugins. Take for instance the scenario where you need to document the following code: class List # Sets the publisher name for the list. cattr_accessor :publisher end This custom declaration provides dynamically generated code that is hard for a documentation tool to properly document without help from the developer. To ease the pains of manually documenting the procedure, YARD can be extended by the developer to handle the cattr_accessor construct and automatically create an attribute on the class with the associated documentation. This makes documenting external API's, especially dynamic ones, a lot more consistent for consumption by the users. YARD is also designed for extensibility everywhere else, allowing you to add support for new programming languages, new data structures and even where/how data is stored. 4. Raw Data Output: YARD also outputs documented objects as raw data (the dumped Namespace) which can be reloaded to do generation at a later date, or even auditing on code. This means that any developer can use the raw data to perform output generation for any custom format, such as YAML, for instance. While YARD plans to support XHTML style documentation output as well as command line (text based) and possibly XML, this may still be useful for those who would like to reap the benefits of YARD's processing in other forms, such as throwing all the documentation into a database. Another useful way of exploiting this raw data format would be to write tools that can auto generate test cases, for example, or show possible unhandled exceptions in code. 5. Local Documentation Server: YARD can serve documentation for projects or installed gems (similar to gem server) with the added benefit of dynamic searching, as well as live reloading. Using the live reload feature, you can document your code and immediately preview the results by refreshing the page; YARD will do all the work in re-generating the HTML. This makes writing documentation a much faster process. Installing To install YARD, use the following command: $ gem install yard (Add sudo if you're installing under a POSIX system as root) Alternatively, if you've checked the source out directly, you can call rake install from the root project directory. Important Note for Debian/Ubuntu users: there's a possible chance your Ruby install lacks RDoc, which is occasionally used by YARD to convert markup to HTML. If running which rdoc turns up empty, install RDoc by issuing: $ sudo apt-get install rdoc Usage There are a couple of ways to use YARD. The first is via command-line, and the second is the Rake task. 1. yard Command-line Tool YARD comes packaged with a executable named yard which can control the many functions of YARD, including generating documentation, graphs running the YARD server, and so on. To view a list of available YARD commands, type: $ yard --help Plugins can also add commands to the yard executable to provide extra functionality. Generating Documentation The yardoc executable is a shortcut for yard doc. The most common command you will probably use is yard doc, or yardoc. You can type yardoc --help to see the options that YARD provides, but the easiest way to generate docs for your code is to simply type yardoc in your project root. This will assume your files are located in the lib/ directory. If they are located elsewhere, you can specify paths and globs from the commandline via: $ yardoc 'lib/**/*.rb' 'app/**/*.rb' ...etc... The tool will generate a .yardoc file which will store the cached database of your source code and documentation. If you want to re-generate your docs with another template you can simply use the --use-cache (or -c) option to speed up the generation process by skipping source parsing. YARD will by default only document code in your public visibility. You can document your protected and private code by adding --protected or --private to the option switches. In addition, you can add --no-private to also ignore any object that has the @private meta-tag. This is similar to RDoc's ":nodoc:" behaviour, though the distinction is important. RDoc implies that the object with :nodoc: would not be documented, whereas YARD still recommends documenting private objects for the private API (for maintainer/developer consumption). You can also add extra informative files (README, LICENSE) by separating the globs and the filenames with '-'. $ yardoc 'app/**/*.rb' - README LICENSE FAQ If no globs precede the '-' argument, the default glob ( lib/**/*.rb) is used: $ yardoc - README LICENSE FAQ Note that the README file can be specified with its own --readme switch. You can also add a .yardopts file to your project directory which lists the switches separated by whitespace (newlines or space) to pass to yardoc whenever it is run. A full overview of the .yardopts file can be found in YARD::CLI::Yardoc. Queries The yardoc tool also supports a --query argument to only include objects that match a certain data or meta-data query. The query syntax is Ruby, though a few shortcuts are available. For instance, to document only objects that have an "@api" tag with the value "public", all of the following syntaxes would give the same result: --query '@api.text == "public"' --query 'object.has_tag?(:api) && object.tag(:api).text == "public"' --query 'has_tag?(:api) && tag(:api).text == "public"' Note that the "@tag" syntax returns the first tag named "tag" on the object. To return the array of all tags named "tag", use "@@tag". Multiple --query arguments are allowed in the command line parameters. The following two lines both check for the existence of a return and param tag: --query '@return' --query '@param' --query '@return && @param' For more information about the query syntax, see the YARD::Verifier class. 2. Rake Task The second most obvious is to generate docs via a Rake task. You can do this by adding the following to your Rakefile: YARD::Rake::YardocTask.new do |t| t.files = ['lib/**/*.rb', OTHER_PATHS] # optional t. = ['--any', '--extra', '--opts'] # optional end both the files and options settings are optional. files will default to lib/**/*.rb and options will represents any options you might want to add. Again, a full list of options is available by typing yardoc --help in a shell. You can also override the options at the Rake command-line with the OPTS environment variable: $ rake yard OPTS='--any --extra --opts' 3. yri RI Implementation The yri binary will use the cached .yardoc database to give you quick ri-style access to your documentation. It's way faster than ri but currently does not work with the stdlib or core Ruby libraries, only the active project. Example: $ yri YARD::Handlers::Base#register $ yri File.relative_path Note that class methods must not be referred to with the "::" namespace separator. Only modules, classes and constants should use "::". You can also do lookups on any installed gems. Just make sure to build the .yardoc databases for installed gems with: $ sudo yard gems If you don't have sudo access, it will write these files to your ~/.yard directory. yri will also cache lookups there. 4. yard server Documentation Server The yard server command serves documentation for a local project or all installed RubyGems. To serve documentation for a project you are working on, simply run: $ yard server And the project inside the current directory will be parsed (if the source has not yet been scanned by YARD) and served at. Live Reloading If you want to serve documentation on a project while you document it so that you can preview the results, simply pass --reload ( -r) to the above command and YARD will reload any changed files on each request. This will allow you to change any documentation in the source and refresh to see the new contents. Serving Gems To serve documentation for all installed gems, call: $ yard server --gems This will also automatically build documentation for any gems that have not been previously scanned. Note that in this case there will be a slight delay between the first request of a newly parsed gem. 5. yard graph Graphviz Generator You can use yard graph to generate dot graphs of your code. This, of course, requires Graphviz and the dot binary. By default this will generate a graph of the classes and modules in the best UML2 notation that Graphviz can support, but without any methods listed. With the --full option, methods and attributes will be listed. There is also a --dependencies option to show mixin inclusions. You can output to stdout or a file, or pipe directly to dot. The same public, protected and private visibility rules apply to yard graph. More options can be seen by typing yard graph --help, but here is an example: $ yard graph --protected --full --dependencies Changelog November.1.13: 0.8.7.3 release -)..
https://www.rubydoc.info/gems/yard/0.8.7.4/frames
CC-MAIN-2018-22
refinedweb
1,855
61.56
Net Present Value Profitability Accounting Essay Published: Last Edited: This essay has been submitted by a student. This is not an example of the work written by our professional essay writers. B ) Net Present Value What is Net present value ? Net present value is difference between the present value ( pv ) of the future cash flows from an investment and the amount of investment. Present value of the expected cash flow is computed by discounting them at the required rate of return. This technique ,like internal rate of return which follows, brings together the concept of discounting and the weighted average cost of funds. The former is required to adjust for the time value of money while the letter provides an interest rate or discount rate to apply to the future cash flows. Consider the project which we have already evaluated using payback and accounting rate of return. Besides that npv recognises that £1 today is worth more than £1 tomorrow. For simply example, an investment of £1000 today at 10 percen will yield £1100 at the end of the year; therefore, the present value of £1100 at the desired rate of return ( 10 percen )is £1000. The amount of investment ( £1000 in this example ) is deducted from this figure to arriveat npv which here is zero ( £1000 - £1000 ). A zero npv mean the project repays original investment plus the required rate of return. A positive means a better return, and a negative npv means a worse return, than the return from zero npv. It is one of the two discounted cash flow ( DCF ) technique ( the other is internal rate of return ) used in comparative appraisal of investment proposals where the flow of income varies over time. On the other hands, the projects in conditions where all worthwhile project can be accepted it maximises shareholder utility. Projects with positive npvs should be accepted since they increase shareholder wealth, while those with negative npvs decrease shareholder wealth. There are many advantages if a firm use the NPV method in a investment. For example, with used this method its can tells whether the investment will increase the firm's value. Besides that, the firm able to considers the time value of money, considers all the cash flows, and Considers the risk of future cash flows (through the cost of capital). Furthermore NPV is essential for financial appraisal of long-term projects, it measures the excess or shortfall of cash flows, In the NPV model it is assumed to be reinvested at the discount rate used. This is appropriate in the absence of capital rationing. NPV method also indicates whether a proposed project will yield the investor's required rate of return and consider both magnitude and timing of cash flows. Another advantages used this method were the firm will be able to consistent with shareholder wealth maximization ( added net present values ) its will generated by investments are represented in higher stock prices. A ) Profitability Profitability is the primary or main goal of all business ventures weather produce a good or services. Without profitability the business will not survive in the long run. So measuring current and past profitability and projecting future profitability is very important to know how long firma or business can survive and its generate profit or loss. We can define profitability as measuring that find out the difference between income and expenses. Income is money generated the activities of business while expenses is cash outflows that used to produce the product or expenses which relating the business such as administration, salary staff, heating and lighting cost, shipping and so on. In other meaning of words expenses are the cost of resources used up or consumed by the activities of the business. For example, The management committee of King`s College is considering a proposal by the catering manager, Mr. Steven Cook to close out- dated dining hall and to replace it with new self service canteen. It is show that the resources such as a equipments or furniture whose useful life is more than one year is used up over a period of years, thus its take long time to determine the real profit or loss on the equipments and furniture are used. However, the committee or Governor of proposal can evaluate the proposal with determine time of period six months or a year to look it will comes benefit or loss on the business. From this example, we can look the cash inflows and cash outflows to running this proposal. There are many cash inflows such as sales of goods £ 164 000 per year, disposal asset (equipments and furniture ), and sold of assets. Besides that, there are many cash outflows which can see from the proposal such as salary ( staffs and manager ), Variable cost, fixed cost, depreciation, and others. Thus, the manager of proposal will estimate profitability through time of period following by the year. Via time of period that was prescribed its will show the profit or loss every year.To know that this proposal will be implement or not, its able to detect via estimate this method. If this method show that the value NPV is positive or the value more than or equal zero will can accept this proposal. But, while the value of NPV is negative or less than zero the Governor should be reject this project. In conclusion, the manager or Governor have a choice between accept or reject this proposal. If this project should be to implement for facility students and staffs. It have to evaluate again to avoid from loss or negative impact for students, staffs, committee of college and business. Briefly discuss any non-financial factors which you consider should be borne in mind by the Governors in this case ? The management committee of King`s college of Further Education was considering a proposal by the catering manager, Mr. Steven Cook ,to close the existing out-dated dining hall that has been used since the college was opened in the late 1960s and to replace it with a new self service canteen offering a wide variety of good quality meals. To discuss any non-financial factor which can to consider this project, there are some factor that will be discuss. - Do the survey to student or customers . In some cases, non-financial factor may be essential requirements in consider a proposal to implement . In this case the Governors should be do some survey on their students or customer to know what is the best way in order to meet need and customer needs. Therefore , the manager can do the best planning to implement this proposal that will be give the good impact on successful this project because the manager try to understand or study the market . - Improving staff morale, making it easier to recruit and retain employees. Besides that , with do the seminar or motivation by the management committee of King`s College. It can improve staff morale, skills , capability , competencies ,making easier to recruit and retain the employees . With follow this ways the governor can reduce or no need to take the part time staff to running their business. Furthermore, the manager can reduce the cost to implement this proposal . - Study more detail on the proposal or project. Before implement the proposal the manager or staff should be sit down and do the meeting dragged on to think the best way to achieve the main objectives or aims successfully. The Governor could be give the suggestions that can give the benefits not only the company but also to student. For example, the committee can do the best way to teach their student to take care about cleanliness. - Location In this part, location is one of main factor that need to study more detail by the manager. If the Governor or manager of proposal chose the wrong location, it can be negative impact on college and students. Unsuitable and uncomfortable location will be not contribute anything to college and students and the worse thing is it make loss to the company. - External factor External factor also one of the non-financial factors which determine the successful the proposal or project. Without to determine the right external factor, its will be arise competition with the another competitor. Thus the Governor must think the best things to avoid the external factors. REFRENCES Dr. Mohd Amy Azhar Mohd Harif, Ph.D, C.A.(Mt) ( 2007 ) financial and non-fanancial factors that contribute to the franchisee. June page 6-9. Glen Arnold (2007) Essentials of Corporate Financial Management, Chapter 2-3 page 35- 117
https://www.ukessays.com/essays/accounting/net-present-value-profitability-accounting-essay.php
CC-MAIN-2017-13
refinedweb
1,429
59.74
import csv file: integers appearing as floating points Hi, I am trying to import a csv file in order to create an adjacency matrix and then plot the associated directed graph. I upload the csv file and use its content to generate a matrix as follows sage: import csv sage: data = list(csv.reader(file(DATA+'matrix23.csv'))) sage: m = matrix([[ float(_) for _ in line] for line in data]) sage: m Whereas my csv file only contains integers 1's and 0's, the generated matrix is floating point 1.0's and 0.0's! Note: the CSV file was created using MS Excel with cells formated to zero decimal places. I'm new to both python and sage and can't find anything on the forums to help
https://ask.sagemath.org/question/10141/import-csv-file-integers-appearing-as-floating-points/
CC-MAIN-2018-09
refinedweb
132
60.14
Base class for originators of RIP route entires. More... #include <route_entry.hh> Base class for originators of RIP route entires. This class is used for storing RIPv2 and RIPng route entries. It is a template class taking an address family type as a template argument. Only IPv4 and IPv6 types may be supplied. Associate route with this RouteEntryOrigin. Dissociate route from this RouteEntryOrigin. Retrieve number of seconds before routes associated with this RouteEntryOrigin should be marked as expired. A return value of 0 indicates routes are of infinite duration, eg static routes. Implemented in PeerRoutes< A >, Peer< A >, RedistRouteOrigin< A >, Peer< IPv4 >, PeerRoutes< IPv4 >, RedistRouteOrigin< IPv6 >, and RedistRouteOrigin< IPv4 >. Find route if RouteOrigin has a route for given network.
http://xorp.org/releases/current/docs/kdoc/html/classRouteEntryOrigin.html
CC-MAIN-2017-17
refinedweb
119
52.15
Content-type: text/html msgb - Defines a STREAMS message block #include <sys/stream.h> struct msgb { struct msgb *b_next; struct msgb *b_prev; struct msgb *b_cont; unsigned char *b_rptr; unsigned char *b_wptr; struct datab *b_datap; MSG_KERNEL_FIELDS }; The msgb structure defines a message block. A message block carries data or information in a stream. A STREAMS message consists of message blocks linked through b_cont. Each message block points to a data block descriptor, which in turn points to a data buffer. The msgb structure is typedefed as mblk_t. The associated data block is stored in a datab structure, which is typedefed as dblk_t. The datab structure is defined (in sys/stream.h) as: struct datab { struct datab * db_freep; unsigned char * db_base; unsigned char * db_lim; unsigned char db_ref; unsigned char db_type; unsigned char db_class; unsigned char db_pad[1]; }; The datab fields are defined as follows: Messages are typed according to the value in the db_type field in the associated datab structure. Some possible type values are: As part of its support for STREAMS, Tru64 UNIX provides the following interfaces for exchanging messages betweens STREAMS modules on the one hand and sockets and network protocols on the other: mbuf_to_mblk() - Converts an mbuf chain to an mblk chain mblk_to_mbuf() - Converts an mblk chain to an mbuf chain delim off
http://backdrift.org/man/tru64/man4/msgb.4.html
CC-MAIN-2017-04
refinedweb
214
58.92
Ilya SikharulidzePro Student 1,072 Points Can someone please help me, I don't understand how to use 'return' value in this case using System; class Program { static string CheckSpeed(double speed) { if(speed > 65.0) { Console.WriteLine("too fast"); } else if(speed < 45.0) { Console.WriteLine("too slow"); } else { Console.WriteLine("speed OK"); }// YOUR CODE HERE } static void Main(string[] args) { // This should print "too slow". Console.WriteLine(CheckSpeed(44)); // This should print "too fast". Console.WriteLine(CheckSpeed(88)); // This should print "speed OK". Console.WriteLine(CheckSpeed(55)); } } 2 Answers Steven Parker181,132 Points You're pretty close there. But instead of outputting the strings directly (using " Console.WriteLine"), your code should return the string values (using " return"). Steven Parker181,132 Points Where you currently have the term " Console.WriteLine" (in 3 places), replace it with the word " return". Optionally, you can also remove the parentheses around the string (replacing the open parenthesis with a space). Steven Parker181,132 Points Ilya Sikharulidze — Glad to help. You can mark a question solved by choosing a "best answer". And happy coding! Ilya SikharulidzePro Student 1,072 Points Ilya SikharulidzePro Student 1,072 Points Thanks, I understand that, but I don't understand how to exactly to implement the "return"
https://teamtreehouse.com/community/can-someone-please-help-me-i-dont-understand-how-to-use-return-value-in-this-case
CC-MAIN-2020-10
refinedweb
208
68.77
On 3/28/2012 8:16, Michael Poeltl wrote: > yeah - of course 'while True' was the first, most obvious best way... ;-) > but I was asked if there was a way without 'while True' > and so I started the 'recursive function' > > and quick quick; RuntimeError-Exception -> not thinking much -> just adding > two zeros to the default limit (quick and dirty) -> segfault ==> subject: python segfault ;-) You give up too easily! Here's another way: ---> def get_steps2(pos=0, steps=0, level = 100): if steps == 0: pos = random.randint(-1,1) if pos == 0: return steps steps += 2 pos += random.randint(-1,1) if level == 0: return (pos, steps) res = get_steps2(pos,steps, level-1) if not isinstance(res, tuple): return res return get_steps2(res[0], res[1], level-1) import random for i in range(200): print ( get_steps2() ) print("done") input("") <--- Now the limit is 1267650600228229401496703205376. I hope that's enough. Kiuhnm
https://mail.python.org/pipermail/python-list/2012-March/621927.html
CC-MAIN-2019-13
refinedweb
150
67.28
Using a drop down list to change a subitem Sometimes, instead of allowing the user to arbitrarily change the value of an item, you want to present the user with a set of choices. You can do this by bringing up a drop down list instead of an edit control. To implement this, we follow a pattern very similar to that used for editable subitems. Step1: Derive a class from CListCtrlDerive a new class from CListCtrl or make the modification to an existing sub-class. If you are already using a class for editable subitems as described above, you can use that class. Step 2: Define HitTestEx()Define an extended HitTest function for the CMyListCtrl class. This function will determine the row index that the point falls over and also determine the column. The HitTestEx() has already been listed in an earlier section. We need this function if the user interface to initiate the edit is a mouse click or a double click. See the section "Detecting column index of the item clicked". Step 3: Add function to create the Drop Down ListThis function is similar to the function EditSubLabel() described in the previous section. The difference is that, at the end, it creates a combobox from the CInPlaceList class. Note that it also requires a list of strings as an argument. This list is used to populate the drop down list. The last argument is the index of the item that should be initially selected in the drop down list. // ShowInPlaceList - Creates an in-place drop down list for any // - cell in the list view control // Returns - A temporary pointer to the combo-box control // nItem - The row index of the cell // nCol - The column index of the cell // lstItems - A list of strings to populate the control with // nSel - Index of the initial selection in the drop down list CComboBox* CMyListCtrl::ShowInPlaceList( int nItem, int nCol, CStringList &lstItems, int nSel ) { // The returned pointer should not be saved // Make sure that the item is visible if( !EnsureVisible( nItem, TRUE ) ) return NULL; // Make sure that nCol is valid CHeaderCtrl* pHeader = (CHeaderCtrl*)GetDlgItem(0); int nColumnCount = pHeader->GetItemCount(); if( nCol >= nColumnCount || GetColumnWidth(nCol) < 10 ) return NULL; // Get the column offset int offset = 0; for( int i = 0; i < nCol; i++ ) offset += GetColumnWidth( i );; } rect.left += offset+4; rect.right = rect.left + GetColumnWidth( nCol ) - 3 ; int height = rect.bottom-rect.top; rect.bottom += 5*height; if( rect.right > rcClient.right) rect.right = rcClient.right; DWORD dwStyle = WS_BORDER|WS_CHILD|WS_VISIBLE|WS_VSCROLL|WS_HSCROLL |CBS_DROPDOWNLIST|CBS_DISABLENOSCROLL; CComboBox *pList = new CInPlaceList(nItem, nCol, &lstItems, nSel); pList->Create( dwStyle, rect, this, IDC_IPEDIT ); pList->SetItemHeight( -1, height); pList->SetHorizontalExtent( GetColumnWidth( nCol )); return pList; } Step 4: Handle the scroll messagesThe CInPlaceList class is designed to destroy the drop down list control and delete the object when it loses focus. Clicking on the scrollbars of the list view control does not take away the focus from the drop down list control. We therefore add handlers for the scrollbar messages which force focus away from the drop down list control by setting the focus to the list view control itself. void CMyListCtrl::OnHScroll(UINT nSBCode, UINT nPos, CScrollBar* pScrollBar) { if( GetFocus() != this ) SetFocus(); CListCtrl::OnHScroll(nSBCode, nPos, pScrollBar); } void CMyListCtrl::OnVScroll(UINT nSBCode, UINT nPos, CScrollBar* pScrollBar) { if( GetFocus() != this ) SetFocus(); CListCtrl::OnVScroll(nSBCode, nPos, pScrollBar); } Step 5: Handle EndLabelEditLike the built in edit control, our drop down list control also sends the LVN_ENDLABELEDIT notification when the user has selected an item. If this notification message isnt already being handled, add a handler so that any changes can be accepted.); } *pResult = FALSE; } Step 6: Add means for the user to initiate the editThe sample code below is the handler for the WM_LBUTTONDOWN message. It creates a drop down list when the user clicks on a subitem after the item already has the focus. The code checks for the LVS_EDITLABELS style before it creates the drop down list. Of course, this is a very simplistic implementation and has to be modified to suit your needs. void CMyListCtrl::OnLButtonDown(UINT nFlags, CPoint point) { int index; CListCtrl::OnLButtonDown(nFlags, point); int colnum; if( ( index = HitTestEx( point, &colnum )) != -1 ) { UINT flag = LVIS_FOCUSED; if( (GetItemState( index, flag ) & flag) == flag ) { // Add check for LVS_EDITLABELS if( GetWindowLong(m_hWnd, GWL_STYLE) & LVS_EDITLABELS ) { CStringList lstItems; lstItems.AddTail( "First Item"); lstItems.AddTail( "Second Item"); lstItems.AddTail( "Third Item"); lstItems.AddTail( "Fourth Item"); lstItems.AddTail( "Fifth Item"); lstItems.AddTail( "Sixth Item"); ShowInPlaceList( index, colnum, lstItems, 2 ); } } else SetItemState( index, LVIS_SELECTED | LVIS_FOCUSED , LVIS_SELECTED | LVIS_FOCUSED); } } Step 7: Subclass the CComboBox classWe need to subclass the CComboBox class to provide for our special requirement. The main requirements placed on this class is that - It should send the LVN_ENDLABELEDIT message when the user finishes selecting an item - It should destroy itself when the edit is complete - The edit should be terminated when the user presses the Escape or the Enter key or when the user selects an item or when the control loses the input focus. // InPlaceList.h : header file // ///////////////////////////////////////////////////////////////////////////// // CInPlaceList window class CInPlaceList : public CComboBox { // Construction public: CInPlaceList(int iItem, int iSubItem, CStringList *plstItems, int nSel); // Attributes public: // Operations public: // Overrides // ClassWizard generated virtual function overrides //{{AFX_VIRTUAL(CInPlaceList) public: virtual BOOL PreTranslateMessage(MSG* pMsg); //}}AFX_VIRTUAL // Implementation public: virtual ~CInPlaceList(); // Generated message map functions protected: //{{AFX_MSG(CInPlaceList) afx_msg int OnCreate(LPCREATESTRUCT lpCreateStruct); afx_msg void OnKillFocus(CWnd* pNewWnd); afx_msg void OnChar(UINT nChar, UINT nRepCnt, UINT nFlags); afx_msg void OnNcDestroy(); afx_msg void OnCloseup(); //}}AFX_MSG DECLARE_MESSAGE_MAP() private: int m_iItem; int m_iSubItem; CStringList m_lstItems; int m_nSel; BOOL m_bESC; // To indicate whether ESC key was pressed }; /////////////////////////////////////////////////////////////////////////////The listing of the implementation file now follows. The CInPlaceList constructor simply saves the values passed through its arguments and initializes m_bESC to false. The OnCreate() function creates the drop down list and initializes it with the proper values. The overridden PreTranslateMessage() is to ascertain that the escape and the enter key strokes do make it to the combobox control. The escape key and the enter key are normally pre-translated by the CDialog or the CFormView object, we therefore specifically check for these and pass it on to the combobox. The OnNcDestroy() function is the appropriate place to destroy the C++ object. The OnChar() function ends the selection if the escape or the enter key is pressed. It does this by setting focus to the list view control which force the OnKillFocus() of the combobox control to be called. For any other character, the OnChar() function lets the base class function take care of it. The OnCloseup() function is called when the user has made a selection from the drop down list. This function sets the input focus to its parent thus terminating the item selection. // InPlaceList.cpp : implementation file // #include "stdafx.h" #include "InPlaceList.h" #ifdef _DEBUG #define new DEBUG_NEW #undef THIS_FILE static char THIS_FILE[] = __FILE__; #endif ///////////////////////////////////////////////////////////////////////////// // CInPlaceList CInPlaceList::CInPlaceList(int iItem, int iSubItem, CStringList *plstItems, int nSel) { m_iItem = iItem; m_iSubItem = iSubItem; m_lstItems.AddTail( plstItems ); m_nSel = nSel; m_bESC = FALSE; } CInPlaceList::~CInPlaceList() { } BEGIN_MESSAGE_MAP(CInPlaceList, CComboBox) //{{AFX_MSG_MAP(CInPlaceList) ON_WM_CREATE() ON_WM_KILLFOCUS() ON_WM_CHAR() ON_WM_NCDESTROY() ON_CONTROL_REFLECT(CBN_CLOSEUP, OnCloseup) //}}AFX_MSG_MAP END_MESSAGE_MAP() ///////////////////////////////////////////////////////////////////////////// // CInPlaceList message handlers int CInPlaceList::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CComboBox::OnCreate(lpCreateStruct) == -1) return -1; // Set the proper font CFont* font = GetParent()->GetFont(); SetFont(font); for( POSITION pos = m_lstItems.GetHeadPosition(); pos != NULL; ) { AddString( (LPCTSTR) (m_lstItems.GetNext( pos )) ); } SetCurSel( m_nSel ); SetFocus(); return 0; } BOOL CInPlaceList::PreTranslateMessage(MSG* pMsg) { if( pMsg->message == WM_KEYDOWN ) { if(pMsg->wParam == VK_RETURN || pMsg->wParam == VK_ESCAPE ) { ::TranslateMessage(pMsg); ::DispatchMessage(pMsg); return TRUE; // DO NOT process further } } return CComboBox::PreTranslateMessage(pMsg); } void CInPlaceList::OnKillFocus(CWnd* pNewWnd) { CComboBox::OnKillFocus(pNewWnd); CString str; GetWindowText(str); // Send Notification to parent of ListView ctrl LV_DISPINFO dispinfo; dispinfo.hdr.hwndFrom = GetParent()->m_hWnd; dispinfo.hdr.idFrom = GetDlgCtrlID(); dispinfo.hdr.code = LVN_ENDLABELEDIT; dispinfo.item.mask = LVIF_TEXT; dispinfo.item.iItem = m_iItem; dispinfo.item.iSubItem = m_iSubItem; dispinfo.item.pszText = m_bESC ? NULL : LPTSTR((LPCTSTR)str); dispinfo.item.cchTextMax = str.GetLength(); GetParent()->GetParent()->SendMessage( WM_NOTIFY, GetParent()->GetDlgCtrlID(), (LPARAM)&dispinfo ); PostMessage( WM_CLOSE ); } void CInPlaceList::OnChar(UINT nChar, UINT nRepCnt, UINT nFlags) { if( nChar == VK_ESCAPE || nChar == VK_RETURN) { if( nChar == VK_ESCAPE ) m_bESC = TRUE; GetParent()->SetFocus(); return; } CComboBox::OnChar(nChar, nRepCnt, nFlags); } void CInPlaceList::OnNcDestroy() { CComboBox::OnNcDestroy(); delete this; } void CInPlaceList::OnCloseup() { GetParent()->SetFocus(); } Where can I download fully code of this topic?Posted by lwk on 09/19/2010 05:28am This article is very useful for me, How can I get all code? Using a drop down list to change a subitemPosted by KOOK Yan on 09/25/2012 02:16am I need it,thank you! listctrl and comboxPosted by ujsmk on 10/16/2012 08:07pm Thank you for help,I need a copy of this reference .Thanks!Reply Plz. attach SamplePosted by Legacy on 02/09/2004 12:00am Originally posted by: NiceGuy Can anyone please attach the sample project. I really need it!!. Thanks, Reply Problems with PropertySheetPosted by Legacy on 10/22/2003 12:00am. hope you are doing the followingPosted by vijaymohite on 08/26/2004 11:44am Thank youPosted by Legacy on 08/18/2003 12:00am Originally posted by: Ruediger Schindler great article! Easy to understand and to implement. It helped me very much. BrgdsReply Ruediger Tabbing Between CellsPosted by Legacy on 06/18/2003 12:00am Great Article!Posted by Legacy on 05/15/2003 12:00am Originally posted by: Ryan Steckler Thank you for writing this. You saved a bunch of my time. It ported well over to the PocketPC as well. -rReply But the Combobox didn't show,why?Posted by Legacy on 04/24/2003 12:00am Originally posted by: Tina Chu I did as the above,but I found the combobox did't show though the related functions was executed.who know whyReply List box clippedPosted by Legacy on 02/03/2003 12:00am Try this articlePosted by jjwalters on 09/22/2004 04:16am OnChar in dropdown comboPosted by Legacy on 12/14/2002 12:00am Originally posted by: Stefan I need to get into the OnChar method. But i never get there when i use a dropdown instead of a dropdown list. What to do?Reply SOMEONE COULD HELP PLEASE?Posted by Legacy on 09/30/2002 12:00am Originally posted by: Michael C. I have been searching some information about listctrl and how could include combobox in its cells. Now i find this article but i dont know execute.Reply Could someone pass me complete code? Thanks.
http://www.codeguru.com/cpp/controls/listview/editingitemsandsubitem/article.php/c979/Using-a-drop-down-list-to-change-a-subitem.htm
CC-MAIN-2014-52
refinedweb
1,722
53.71
Im a bit new to programming and im stuck with a few questions I have to do. In the questions I have to create a program which turns change (money) into "pounds:" and "pence:". How would I do this with a single double such as 154p = Pound: 1 Pence: 54. I also have to do the same with converting feet & Inches into Meters: and Cm:. i.e. 5 foot 11 inches = 1m 80cm. I believe there is a simple way of doing this, I just cant figure it out. This is what I have so far for feet and inches program. package measure; import java.util.Scanner; public class Measure { static Scanner input = new Scanner(System.in); public static void main(String[] args) { System.out.println("Enter Feet: "); int feet = input.nextInt(); System.out.println("Enter Inches: "); int inches = input.nextInt(); double result = ((feet * 12) + inches); double cm = result*2.54; System.out.println((int)cm); You just need to convert the cm you obtained to m. You need to us the division operator / and the remainder operator % As an example: 38 = 9 x 4 + 2 38 / 9 = 4 and 38 % 9 = 2 For more information about these, take a look at this link: Therefore, adding this code to the end will do the work: int meter = (int)cm / 100; int remainingCM = (int)(cm-meter*100); System.out.println((int)meter + "m and " + remainingCM + " cm!" );
https://codedump.io/share/JisObIqr91YD/1/spliting-a-doubleint-into-2-different-values
CC-MAIN-2017-09
refinedweb
235
66.84
Here are two screenshots from Stellaris where images are seamlessly integrated into the text. How can I achieve this in unity? 100% Private Proxies – Fast, Anonymous, Quality, Unlimited USA Private Proxy! Get your private proxies now! Here are two screenshots from Stellaris where images are seamlessly integrated into the text. How can I achieve this in unity? I can only see one in unity: But there are three in blender: I thank thee answers. I have few columns which has values either 1 or 0. Final result i have to store in one column(F), Which should perform OR operation. I had asked this question on Software Recommendation but did not get any answer. I am working on an e-commerce website, which is developed in ASP.NET MVC… I want to build a blog for the website and add some content to the blog to improve website’s ranking in the Search result. I want my blog to be reached using: my-site.com/blog Is it possible to integrate an existing blog platform such as Medium or Blogger into my website? I would like to pull 3 attributes (Open, Close, Volume) for a variety of stock symbols. I would like to pull the stock data within a specific range and export it to an excel spreadsheet. I would also like to append the next stock ticker and its attributes to the bottom of the former. So far I have the following code: filename = "Data.xls"; data = FinancialData[ "GE", {"Open", "Close"}, {{2000, 1, 1}, {2021, 1, 1}}]; V = data // Normal; Export[filename, V]; Several problems are easy to see: Data for Open and Close are saving in separate tabs (I do not know why), adding additional ticker symbols forces all ticker data for a symbol into the same row in the same tab (again I do not know why), I am getting a string e.g. Quantity[50.29999923706055, "USDollars"] when all I want is the numerical piece. Finally, I am not getting the headers for each attribute as I am looking for. I would really appreciate some assistance here. I have looked at other solutions as Creating a Stock Dataset but it doesn’t quite give me what I am looking for and my attempts to change it to meet my needs have failed. Some spells can push the target X feet away (like thunderwave, which pushes 10 feet away on a failed save). I was wondering what happens if the target, while being pushed, encounters a wall or other rigid object. Does it take (bludgeoning) damage? To me it seems logical that it would: it is like falling, sudden force applied to a creature due to encountering resistance from an object. I couldn’t find any rules about this in the Player’s Handbook nor online. I am displaying multiple images in listview from database and is working fine, the problem am having is when ever I wan’t to download multiple selected images into any folder on my desktop computer, only one image is downloaded not multiple selected images : check my codes Code for retrieving rom database::: listView1.View = View.LargeIcon; listView1.LargeImageList = largeImage; { connect.Open(); SqlCommand cmd = new SqlCommand("SELECT name,data FROM gallery", connect); SqlDataReader reader = cmd.ExecuteReader(); listView1.Items.Clear(); largeImage.Images.Clear(); while (reader.Read()) { if (!reader.IsDBNull(1)) { Bitmap bm = BytesToImage((byte[])reader.GetValue(1)); float source_aspect = bm.Width / (float)bm.Height; AddImageToImageList(largeImage, bm, reader[0].ToString(), largeImage.ImageSize.Width, largeImage.ImageSize.Height); } listView1.AddRow(reader[0].ToString(), reader[0].ToString()); } connect.Close(); } Code for downloading (My Problem) FolderBrowserDialog folderBrowserDialog = new FolderBrowserDialog(); if (folderBrowserDialog.ShowDialog() == DialogResult.OK) { if (listView1.Items.Count > 0) { listView1.FocusedItem = listView1.Items[0]; listView1.Items[0].Selected = true; listView1.Select(); SqlCommand cmd = new SqlCommand("SELECT Name,Data FROM ", connect); cmd.CommandType = CommandType.Text; connect.Open(); SqlDataReader sdr = cmd.ExecuteReader(); if (sdr.Read()) { byte[] bytes = (byte[]) sdr["Data"]; string fileName = sdr["Name"].ToString(); string path = Path.Combine(folderBrowserDialog.SelectedPath, fileName); File.WriteAllBytes(path, bytes); } connect.Close(); } } what i have tried if (sdr.HasRows) { while (sdr.Read()) { byte[] bytes = (byte[]) sdr["Data"]; string fileName = sdr["Name"].ToString(); string path = Path.Combine(folderBrowserDialog.SelectedPath, fileName); File.WriteAllBytes(path, bytes); } } for (int I=0; I<listView1.Items.Count; I++) { listView1.FocusedItem = listView1.Items[I]; listView1.Items[I].Selected = true; } Exactly? I need to put the result of a query into a variable. Just a query, works as successful DECLARE @count INT = ( SELECT count (*) FROM [AdventureWorks].[Person].[Address] ); select @count; But if I need to use the WITH statement in a query, then I get a syntax error DECLARE @count INT = ( WITH person_address (id) as ( SELECT AddressID FROM [AdventureWorks].[Person].[Address] ) SELECT count (*) FROM person_address ); select @count; Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword ‘WITH’. Msg 319, Level 15, State 1, Line 2 Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. Msg 102, Level 15, State 1, Line 9 Incorrect syntax near ‘)’. How do I put the query value into a variable if the WITH clause is used in the SQL statement? The Homonculus Servant infusion requires a specific component: Item: A gem or crystal worth at least 100 gp Later, it goes on to say The item you infuse serves as the creature’s heart, around which the creature’s body instantly forms. Okay, while gemstones are assumed to be just valuables, what is a crystal? The PHB lists the following forms for an arcane focus: Arcane Focus: Crystal, Orb, Rod, Staff, Wand Admittedly, the crystal has a listed cost of 10gp, but it’s reasonable a fancier one could be made to fulfill the requirement. Moving on, the Artificer’s Spell-Storing Item feature lists as its possible targets one simple or martial weapon or one item that you can use as a spellcasting focus, An artificer must use a tool they are proficient with or an item they have infused as a spellcasting focus – this means that a finished homonculus can be turned into a spell-storing item, but it’s rather unclear if it can then touch itself to cast the spells stored within itself. … but, if you pick up one level of wizard (or the magic initiate: wizard feat, perhaps? Actually, no, that deserves a question of its own), you can use the crystal as a spellcasting focus, since the feature does not specify it must be an artificer focus. Thus the plan. At the end of a long rest, touch your expensive crystal focus, imbue it with your desired spell, then infuse the focus to be the heart of your homonculus, which it is rather difficult to argue isn’t touching its own heart at all times, letting it cast the spells within the item even by the strictest of RAW readings. Is there a reason this wouldn’t work?
https://proxieslive.com/tag/into/
CC-MAIN-2021-21
refinedweb
1,165
64.91
The XmlWriter class writes XML data to a stream, file, TextReader, or string. The XmlWriter class writes XML data to a stream, file, TextReader, or string. It provides a means of creating well-formed XML data in a forward-only, write-only, non-cached manner. The XmlWriter class supports the W3C XML 1.0 and Namespaces in XML recommendations. This section discusses how to create an XmlWriter instance with a specified set of features, data conformance checking, writing typed data, and so on. Discusses how to create writers using the XmlWriter..::.Create method. Describes data conformance checks that can be set on the XmlWriter class. Discusses the namespace features on the XmlWriter class. Discusses how to write typed data. Describes the methods available for writing attributes. Describes the methods available for writing elements. Discusses how to use the XmlTextWriter class. In the Microsoft .NET Framework version 2.0, we recommend creating XmlWriter objects using the Create method. Provides an overview to a comprehensive and integrated set of classes that work with XML documents and data in the .NET Framework.
http://msdn.microsoft.com/en-us/library/tx3wa6ka.aspx
crawl-002
refinedweb
179
61.22
Escape sequences Escape sequences are used to represent certain special characters within string literals and character literals. The following escape sequences are available, extra escape sequences may be provided with implementation-defined semantics: [edit] Notes Among all octal escape sequences, \0 is most useful, for it represents the terminating null character in null-terminated strings. or a 16-bit string literal may map to more than one character, e.g. \U0001f34c is 4 char code units in UTF-8 (\xF0\x9F\x8D\x8C) and 2 char16_t code units in UTF-16 (\uD83C\uDF4C)) The question mark escape sequence \? is used to prevent trigraphs from being interpreted inside string literals: a string such as "??/" is compiled as "\", but if the second question mark is escaped, as in "?\?/", it becomes "??/" [edit] Example #include <cstdio> int main() { std::printf("This\nis\na\ntest\n\nShe said, \"How are you?\"\n"); } Output: This is a test She said, "How are you?"
http://en.cppreference.com/w/cpp/language/escape
CC-MAIN-2015-22
refinedweb
157
53.81
.NET Framework Class Library in Visual Studio The .NET Framework class library is comprised of namespaces. Each namespace contains types that you can use in your program: classes, structures, enumerations, delegates, and interfaces. When you create a Visual Basic or Visual C# project in Visual Studio, the most common base class DLLs (assemblies) are already referenced. However, if you need to use a type that is in a DLL not already referenced, you will need to add a reference to the DLL. For more information, see Adding and Removing References. The topics below provide the following information: Lists of the most important namespaces for each feature area. Links to reference topics in the .NET Framework about each major namespace. Links to procedural and conceptual topics about how to use those namespaces in your Visual Basic .NET and Visual C# .NET allow .NET applications.
https://msdn.microsoft.com/en-US/library/f1yh62ef(v=vs.80).aspx
CC-MAIN-2016-44
refinedweb
143
58.28
0 Hey everyone. I am just learning assembly and I am understanding everything, but I keep having a problem a with this program. The main is in C and is supposed to receive a string from a user. Then, in assembly, I am supposed to count the number of words. But whenever I run the program, it crashes. Obviously there's something wrong. Here is my program: #include <stdio.h> #include <stdlib.h> extern int countwords(char string); int main(int argc, char *argv[]) { char mystring[256]; int count; printf("Give me a string: "); scanf("%s", mystring); count = countwords(mystring); printf("Your string has %d letters \n", count); system("PAUSE"); return 0; } .global _countwords _countwords: pushl %ebp movl %esp, %ebp movl 8(%ebp), %ebx xor %esi, %esi xor %eax, %eax movl 0x20, %ecx cmpb $0, (%ebx, %esi, 1) je done loop: cmp %ecx, (%ebx, %esi, 1) je word cmpb $0, (%ebx, %esi, 1) je done inc %esi jmp loop word: inc %eax inc %esi cmpb $0, (%ebx, %esi, 1) je done jmp loop done: movl %ebp, %esp popl %ebp ret I was wondering if any of you more experienced programmers can help an amateur out? Any assistance would be greatly appreciated! Thanks
https://www.daniweb.com/programming/software-development/threads/351893/counting-words-in-a-string
CC-MAIN-2016-50
refinedweb
202
70.84
Having Bluetooth connectivity and enabling Raspberry Pi to behave as A2DP source is nothing new (see a general tutorial on Instructables), but the problem I had thus far was making that work on my media center Pi which was running Raspbmc. As I found out through numerous attempts to make it work, the problem was with Raspbmc suffering a bit too much from "weight loss" it forced upon itself, so the modules that work as expected on Raspbian were failing on Raspbmc: $ pactl list sources short Assertion 'l' failed at pulsecore/flist.c:168, function pa_flist_pop(). Aborting.Aborted When I learned that Raspbmc was not going to be developed any longer and that its successor -- OSMC -- will be based on Debian distro that was not butchered to fit on 2 GB SD card, I immediately planned to give that a try :-) What I found out when I started setting up A2DP on OSMC on my Raspberry Pi 2 is that OSMC is no longer based (as its predecessor was and as Raspbian is) on Debian Wheezy, but that it is based on Debian Jessie. That means bye-bye SysVinit and hello Systemd. And that change of service manager (making all "A2DP on Rasberry Pi" available tutorials obsolete or at least not easily replicable by novice users) is the real reason for this tutorial. Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Requirements - Raspberry Pi - 4GB+ SD card with installed OSMC (Release Candidate or newer) - Bluetooth v4.0+ USB adapter - a headphones or speaker system connected to 3.5 mm phones port - an Internet connection (for package updates) - either LAN connection for SSH or an USB keyboard A note regarding the Bluetooth v4.0+ USB adapter: Even though I've used IOGEAR GBU521, any similar adapter should sufice, but best check the verified/supported hardware list before buying! Step 2: Log in to a Terminal Boot the Pi. 1) Variant: Remote terminal (SSH) If you've a LAN connection to the Pi, the easiest way to go through this tutorial is to use SSH -- they you can copy/paste commands instead of typing them :-) Find what your Pis IP address is and SSH into it with your favorite telnet client. 2) Variant: Local terminal In this case, connect your keyboard, go to Power menu in XBMC and choose Exit. XMBC will close in a few seconds, but will restart immediately again. To enter the terminal, make sure you press the ESC key after you've choosen the Exit and while the OSMC screen is still showing. That will stop XBMC from staring again ad you'll have a terminal prompt with login in front of you. Either way, the default credentials for OSMC are: Username: osmc Password: osmc Step 3: Install the Necessary Packages To get the state of repositories up-to-date, first execute: sudo apt-get update Optional: if you want, you can upgrade the outdated packages with: sudo apt-get upgrade Note: OSMC uses its own apt-get repository () for updating the system, which is a welcome change from Raspbmc, so not only does 'upgrade' upgrades the linux distro and installed 3rd party packages, it also upgrades the OSMC (media center and its packages). After it finishes, execute this to install the necessary packages: sudo apt-get install alsa-utils bluez bluez-tools pulseaudio-module-bluetooth python-gobject python-gobject-2 Image courtesy of: Keep Calm Network Ltd. Step 4: Audio Configuration Enable and load the sound module: echo 'snd_bcm2835' | sudo tee -a /etc/modules echo 'snd-bcm2835' | sudo tee -a /etc/modules sudo modprobe snd_bcm2835 snd-bcm2835 Add users to pulse audio groups: sudo usermod -a -G lp osmc sudo usermod -a -G pulse-access,audio root sudo adduser osmc pulse-access Heads-up: from this point forward, the tutorial uses GNU nano as terminal text editor, but you can easily use the one you prefer. Open the configuration file for the PulseAudio daemon: sudo nano /etc/pulse/daemon.conf And find (press Ctrl+W, then type 'float') the line that reads: resample-method = speex-float-1 and change it so: ; resample-method = speex-float-1 resample-method = trivial Because we'll be running PulseAudio deamon in system mode, loading additional modules will not be possible once the deamon is started, so we need to configure their inclusion in the PulseAudio startup script for system mode: sudo nano /etc/pulse/system.pa and add the following lines (which check for existance and enable various modules): We also need to create the service starting script: sudo nano /etc/systemd/system/pulseaudio.service and edit it like so: [Unit] Description=Pulse Audio [Service] Type=simple ExecStart=/usr/bin/pulseaudio --system --disallow-exit --disallow-module-loading --disable-shm --daemonize [Install] WantedBy=multi-user.target Execute these two commands to scan for new/changed units and to restart the PulseAudio service: sudo systemctl daemon-reload sudo systemctl enable pulseaudio.service sudo systemctl start pulseaudio.service All that is left to do for audio is to enable output to a prefered connector (0=auto,1=headphones,2=hdmi): amixer cset numid=3 1 and, as volume will be controlled by the client, to set the volume level on the server side to 100 percent: amixer set Master 100% pacmd set-sink-volume 0 65535 Step 5: Bluetooth Configuration To make sure A2DP audio sinks are allowed, open the config file: sudo nano /etc/bluetooth/audio.conf and place these three lines under [General] heading: [General] Enable=Source,Sink,Media,Socket HFP=true Class=0x20041C Next, open another config file: sudo nano /etc/bluetooth/main.conf and place these lines under [General] heading: [General] Name = osmc Class = 0x20041C Next, list the BT adapter config: sudo hciconfig -a and note the MAC address (in form of XX:XX:XX:XX:XX:XX). You will need to specify it here: sudo nano /var/lib/bluetooth/XX:XX:XX:XX:XX:XX/settings where you should paste these lines: [General] Discoverable=true Alias=osmc Class=0x20041C Bring up the BT: connmanctl enable bluetooth sudo hciconfig hci0 up and in the BT control shell: bluetoothctl Then, execute the following commands to enable the agent and start the scan: agent on default-agent discoverable on scan on That should display a list of visible BT clients (if not, execute 'devices' command). Next, execute the following to pair/trust your client device(s): pair XX:XX:XX:XX:XX:XX Request confirmation [agent] Confirm passkey 503116 (yes/no): yes trust XX:XX:XX:XX:XX:XX Note: repeat the pair & trust commands for each device you want to allow to connect. With this the Bluetooth part of the configuration is done. Step 6: Fix for Automatic Audio After Reboot When the past steps in the tutorial are all done, we can immediately see the effect -- the client and the Pi are connected and the client can stream audio via BT. Woohoo! That's it!...Or, is it? The problem comes when the Pi is restarted. Then, until someone is logged into console, PulseAudio will not work. One solution is to enable auto-login into console. sudo cp /lib/systemd/system/getty@.service /etc/systemd/system/autologin@.service Next, we symlink that autologin@.service to the getty service for the tty on which we want to autologin, for example for tty1: sudo mkdir /etc/systemd/system/getty.target.wants sudo ln -s /etc/systemd/system/autologin@.service /etc/systemd/system/getty.target.wants/getty@tty1.service Up to now, this is still the same as the usual getty@.service file, but the most important part is to modify the autologin@.service to actually log you in automatically. To do that, you only need to change the ExecStart line to read: ExecStart=-/sbin/agetty -a osmc %I 38400 That's it. After this you'll briefly see the login prompt when the OSMC boots up, but it will allow PulseAudio to run even without you logged in. reboot Source: Step 7: Conclusion and Tips My main goal was to reuse the always-on devices (media centar Raspberry Pi and Android phones/tablets) to stream online subscription music (namely Google Play Music) without needing to have PC turned on or the client device jacked in. Naturally, the setup can be used to stream all sorts of audio through Pi. For example, gaming on tablet sounds pretty sweet when played on home stereo :) Tip: - What I encountered occasionally on the phone was that sometimes it doesn't automatically reconnect to the Pi and that, when Pi (osmc) is pressed in BT phone settings, it immediately disconnects. The solution for this seems to be to turn BT on the phone off and then back on. After that (BT "reset") the phone again automatically connects to the Pi. Nota bene: - I was writing down each step I applied while I was investigating how goal in this tutorial can be achieved, but when I started to verify the tutorial (on a clean OSMC install) some weeks later, the version of the OSMC was incremented and I didn't manage to reproduce it exactly step by step, so if you find some parts are off, feel free to suggest improvements in the comments (and I will hopefully update the tutorial). Principal source of information: ArchLinux Wiki. Honorable mention: inspired by a tutorial from dantheman_213. 85 Discussions 4 years ago on Introduction Hello, I'd like to thank you for this guide. It is really straight forward and it worked great for me. Now I want to share something I added, which will allow your friends to be able to pair and play music too, without having to configure anything on your PI. First you have to download the bluez source code and untar it. Then you have to modify bluez/test/simple-agent file replacing the following section: nano /usr/src/bluez-5.23/test/simple-agent def RequestPinCode(self, device): print("RequestPinCode (%s)" % (device)) set_trusted(device) return “1234" Then I created a script to switch to discoverable mode, disable secure simple pairing and finally start this agent. nano /usr/bin/btscript.sh #!/bin/sh result=`ps aux | grep -i "simple-agent" | grep -v "grep" | wc -l` if [ $result -ge 0 ]; then sudo hciconfig hci0 piscan sudo hciconfig hci0 sspmode 0 sudo /usr/bin/python /usr/src/bluez-5.23/test/simple-agent & else echo "BT Agent already started" fi Finally to start the script on autologon: chmod +x /usr/bin/btscript.sh/btscript.sh exit 0 Now just reboot and that’s it! You will be able to pair and play music from any bluetooth devices using the code PIN you defined in your agent (here “1234”). Source: Reply 3 years ago Hello, I have tried to do this, but I can't find /usr/src/bluez-5.23/test/simple-agent Reply 3 years ago @anmorfe: possibly you skipped this step: So, something like this should work for you: wget tar xf bluez-5.35.tar.xz cd bluez-5.35 Reply 3 years ago I had somewhat mixed feelings about mixing versions, so preferred to download the sources of the version that was also installed via apt-get: 1. If the line "deb-src ..." starts with a # (is a comment) remove the #. If it's not commented out, continue with 3 2. sudo apt-get update 3. cd /usr/src 4. sudo apt-get source bluez This was the only missing part of pubbb's addition. Thank you Robert and pubbb for your description, it's literally the only one out there for jessie-based distros. Reply 3 years ago Forgot to mention: Step 0: Open apt sources file: sudo nano /etc/apt/sources.list Reply 4 years ago on Introduction Nice enhancement :) Thanks! 4 years ago on Introduction Thank you for sharing this informations. And made it and ... it works!!! As I use a Hifi Digi+ sound card, I did not make the first 2 parts of step 4, also did not type the command # amixer cset numid=3 1 # amixer set Master 100% # pacmd set-sink-volume 0 6553 Also I do not need the 6th step (OSMC RC2). Thank you again. Reply 4 years ago on Introduction Thanks for confirming :) Much appreciated! 3 years ago Okay, I've tried my best to blend together both the instructions above and the suggestions below and still cannot get any audio output. I'm using a R Pi 1 Model B while I wait for a dedicated model 3 to arrive with the OSCM July 1st image. I'm starting to wonder if something has changed in the bluetooth architecture because the folder '/etc/bluetooth' is not there and the file 'main.conf' does not exist either. I went ahead and did a 'mkdir /etc/bluetooth' and created the main.conf file with the contents suggested but that doesn't seem to have fixed the issue. I've also tried changing the Audio Out settings in OSMC and none of them will produce the bluetooth audio either... Does anyone have any ideas? Reply 2 years ago Hi together, I have exact the same problem with my Pi 1 Model B. I use a fresh OSMC installation and I followed the instructions step by step, till I have to add something to the folder /etc/bluethooth. This folder does not exist. Is this only a problem of Pi 1? Torsten 2 years ago Hello Robert, Thanks for the great instructions. Went through them step by step and everything works! Now.. I'm interested in another addition which I think might be very much welcome by other users as well - have the BT connect event trigger an HDMI CEC message to the connected AVR / TV and get it to turn on... My existing RPi2 + OSMC does exactly that when turned on (including AVR input change) but when I connect to its BT from a client and stream audio it doesn't.. Any thoughts on how to achieve that? Thanks again! O, Reply 2 years ago Thanks. No idea how to do that, but seems it could be interresting for others, too. Maybe when you succeed you can post a followup. 3 years ago Hi, I tried to install my bluetooth adapter according to the intstruction but failed. Now my oscm installation is not starting anymore. I have still access to console. hope someone can help me in checking the logs below what is happening, thus I am not that experienced with unix... greets ingo 3 years ago what FANTASTIC work! well done, sir. I hope this gets pulled upstream into OSMC or OpenELEC for native support 3 years ago I have installed but sound from my bluetooth speaker last only for 10 sec then no sound. My speakers still connected but no sound after 5-10 sec. How do I get continues sound? Thanks 3 years ago I have successfully installed on my OSMC and I can hear the sound from my Bluetooth speaker but it is only for 10 sec and then no sound. Also, I noticed that when I play You tube video, I can hear the sound with video. Any idea? 3 years ago I thought that enabling Raspberry Pi as A2DP source means to send data to be played on an A2DP sink such a bluetooth headset or speakers. But step 1 says I must connect headphones or speakers to the 3.5mm port. What is this tutorial for? 3 years ago Thank you very much for your guide. It work great for me, I just need to set pulse audio in OSMC audio parameter instead of HDMI. Thank you very much one more time. Bye. 4 years ago on Introduction Thanks for the tutorial! For me it worked when reverting from RC3 to RC, but after restarting the device(pi) is not detected as a ad2p speaker but rather a normal Bluetooth device(with no label). When tryign to connect to it the connection would drop after less than a second. I believe it may be the ad2p configs are not taking effect, but am unsure as I followed the guide exactly. Reply 3 years ago Hi, I am having the same problem. It used to work fine for a couple of days, then when the pi rebooted, my bluetooth device is now Class 0x00041C and cannot be connected to anymore. The file /var/lib/bluetooth/XX:XX:XX:XX:XX/settings seems to get overwritten at boottime where the line Class = 0x20041C gets erased. I can manually set the device class back to 0x20041C using "hciconfig hci0 class 0x20041C" but I have to do this after every reboot. Any suggestion on where to fix this?
https://www.instructables.com/id/Enhance-your-Raspberry-Pi-media-center-with-Blueto/
CC-MAIN-2019-43
refinedweb
2,803
59.94
Calling Native Windows DLL API with JNA - Include the JNA library in your project - Create an interface that extents com.sun.jna.Library with the prototypes of the functions your need to call - Copy the DLL file to your project. - Load the library (in this case dll) with com.sun.jna.Native - Call the native functions with the interface create in 2. Include JNA in your project You can add the jna.jar download from their official site in your project directly, or using gradle to add the dependencies like this. . . dependencies { ... compile group: 'net.java.dev.jna', name: 'jna', version: '5.6.0' } . . Create an Interface, Load it, and use it These code is shamelessly copy from with little modification. Note that the function prototypes need to MATCH with the one defined in the DLL file! We load kernel32, which is the file name of the dll in your project. import com.sun.jna.Library; import com.sun.jna.Native; public class TestJNA { public interface Kernel32 extends Library { boolean Beep(int FREQUENCY, int DURATION); void Sleep(int DURATION); } public static void main(String[] args) { Kernel32 lib = (Kernel32) Native.load("kernel32", Kernel32.class); lib.Beep(698, 500); lib.Sleep(500); lib.Beep(698, 500); } } Additional Information Details of Kernel32 can be found in MSDN. We use Beep, and Sleep functions here. Note that we match out interfaces methods to those functions we called. Noted that Java, and C++ Type are different, and here are the mapping. JNA also support Linux platform. BOOL Beep( DWORD dwFreq, DWORD dwDuration ); void Sleep( DWORD dwMilliseconds );
https://wiki.chongtin.com/java/calling_native_windows_dll_api_with_jna
CC-MAIN-2021-17
refinedweb
261
59.7
44395/transfer-files-from-amazon-untrusted-server-intermediary I have two servers that have an encrypted line of communication between them. For one server I do not trust with any aws credentials so normally it cannot access my files. Is there a way to get a file from S3 to the untrusted server? I don't want to use CDN or make my files public since I don't want anyone but the two servers to access them. So any suggestions? There is one way. If the other services you have is trusted one then You can generate pre-signed URLs on the trusted servers and send those to the untrusted server that can then use those URLs to safely download the file. The URLs don't require the untrusted server to hold any keys. They also have a limited time-to-live so you can limit your exposure if those leak for some reason. This way you can allow the untrusted server to access only the files you want for the period of time you want Something like aws s3 presign s3://mybucket/myfile --expires-in 60 This article pretty much explains the entire ...READ MORE Hey, 3 ways you can do this: To ...READ MORE Boto3 is the library to use for ...READ MORE You can take a look at not a error but more ...READ MORE The code would be something like this: import ...READ MORE Hai, I was thinking along the same ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/44395/transfer-files-from-amazon-untrusted-server-intermediary
CC-MAIN-2020-16
refinedweb
255
73.27
Asked by: VS2012 SQL Server 2012 CustomReportItem unable to choose from dll I have a custom report item which works fine with VS2010 and MSSQL2012. But, when i created new report project (Business Intelligence Wizard), im unable to add my custom report item to toolbox. I, as usually, selected Choose Items, selected my dll, and in the grid i dont see my report item. My attributes for CustomReportItem are: [CustomReportItem("MyName"), DefaultProperty("CodeText"), LocalizedName("MyCompany MyName"), Editor(typeof (CustomEditor), typeof (ComponentEditor)), Description("MyName description")] [ToolboxBitmap(typeof (MyName ReportItemDesigner), "MyName Designer.bmp")] [System.CLSCompliant(false)] public class MyNameReportItemDesigner : CustomReportItemDesigner, IReportPropertyProvide I just replaced some text with NAME wording. So why my report item is absent ? Maybe need another attributes in VS2012? ---- Need to say this custom report item already works for SQL2000 / SQL2005 / SQL2008 / SQL2008R2 / SQL2012 (VS2010). So its a multiversion project (for sure i use some #if statements when required). And this custom report item works fine in VS2010 and MSSQL2012 Reporting Services. But only in VS2012 i'm unable to add report item to toolbox from my dll (any other VSs versions allow). Moreover i even can open VS2010 test report project in VS2012, and able to see my report item (inside report) and even it works fine. But unable to add my item to toolbox. So its seems like a problem with toolbox in VS2012. Or (for VS2012) i need to set more / another attributes for my report designer class ? Question All replies - BTW, i didnt find any sample for SQL2012 at MS's site. All samples are for the prior versions, but nothing for SQL2012. I have nowhere to look for samples. - Edited by Alexander11111 Friday, September 27, 2013 10:01 AM Hi Alexander11111, Thank you for your question. I am trying to involve someone more familiar with this topic for a further look at this issue. Sometime delay might be expected from the job transferring. Your patience is greatly appreciated. Thank you for your understanding and support. Regards, Mike Yin TechNet Community Support Hello Alexander, One of the primary requirement for adding the Custom Report Item designer assemblies in the VS tool box is to have them compiled in .NET framework 4.0. But the Custom Report Item runtime should still be under .NET Framework 3.5. If issue still persists, then this may require a more in-depth level of support for which this may fall into a paid support category.. Regards Durai Murugan
https://social.msdn.microsoft.com/Forums/en-US/9796efb1-433c-4dd6-ba64-8962ba67754f/vs2012-sql-server-2012-customreportitem-unable-to-choose-from-dll
CC-MAIN-2015-40
refinedweb
409
57.67
Summary In this chapter, we first reviewed the way DNS works. We talked about the DNS namespace and the process a DNS server goes through to resolve hostnames. We then reviewed the difference between forward- and reverse-lookup zones. Then we discussed implementing master and slave DNS servers to increase fault tolerance. We finally implemented BIND on our SLES 9 server and reviewed the various files that are used to configure the named services, including named.conf, root.hint, and zone files. We then shifted gears and looked at some more advanced DNS configuration issues. First we looked at using forwarding to speed up the response time of your DNS server. Next, we looked at a variety of DNS security issues. We reviewed how to run named in a chroot jail as a nonroot user. We also talked about restricting access to the DNS server using DNS ACLs. Finally, you learned how to integrate your DNS server configuration into an LDAP directory service. We're ready now to move on to Chapter 4, where you will learn how to implement a DCHP server on SLES 9. However, we're not through with DNS yet. In Chapter 4, you will learn how to tie your DNS service and your DHCP service together so that your DNS database is automatically updated whenever your DHCP server assigns an IP address.
https://www.informit.com/articles/article.aspx?p=413711&seqNum=6
CC-MAIN-2020-24
refinedweb
227
64.91
08 February 2013 19:14 [Source: ICIS news] HOUSTON (ICIS)--WBI Energy and Calumet have agreed to a joint venture to build and operate a refinery in southwestern ?xml:namespace> Shares of Calumet were at $34.55 midday on Friday, up 3.85%. The joint venture will be called Dakota Prairie Refining, the companies said. Construction could begin this spring on the facility, which the companies said will process 20,000 bbl/day of Bakken crude oil when fully operational. Construction is expected to take up to 20 months. The plant, to be located on a 318-acre (129ha) site west of WBI Energy, an energy producer as well as pipeline and energy services company, is a subsidiary of MDU Resources Group, whose headquarters are in Calumet,
http://www.icis.com/Articles/2013/02/08/9639552/us-calumet-wbi-energy-team-up-to-build-midwest-refinery.html
CC-MAIN-2015-22
refinedweb
127
62.68
Quoting Greg KH (gregkh@suse.de):> On Tue, Apr 29, 2008 at 02:34:17PM -0500, Serge E. Hallyn wrote:> > Finally, to give an idea about how the trees end up looking, here is> > what I just did on my test box;> > > > /usr/sbin/ip link add type veth> > mount --bind /mnt /mnt> > mkdir /mnt/sys> > mount --make-shared /mnt> > ns_exec -cmn /bin/sh # unshare netns and mounts ns> > # At this point, I still see eth0 and friends under /sys/class/net etc> > mount -t sysfs none /sys> > # At this point, /sys/class/net has only lo0 and sit0, and> > # /sys/devices/pci0000:00/0000:00:03.0/net:eth0 is a dead link> > mount --bind /sys /mnt/sys> > echo $$> > 3050> > > > (back in another shell):> > /usr/sbin/ip link set veth1 netns 3050> > > > (back in container shell):> > /usr/sbin/ip link set veth1 name eth0> > # Now /sys/devices/pci0000:00/0000:00:03.0/net:eth0 is a live link to> > # the /sys/class/net/eth0 which is really the original veth1> > exit> > > > ls /mnt/sys/class/net> > # empty directory> > What does this all look like without CONFIG_SYSFS_DEPRECATED enabled,> which is what all sane distros do these days. That's going to change> the look of the tree for stuff like this a lot I think...> > thanks,> > greg k-hNow before moving veth1 to the new netns, we have in the container:/sys/class/net:lo sit0/sys/devices/virtual/net:lo sit0and after moving veth1, we have in the container:/sys/class/net:lo sit0 veth1/sys/devices/virtual/net:lo sit0In the parent network namespace, veth1 is removed from /sys/class/netbut remains in /sys/devices/virtual/net.I'm not sure whether this is the renaming bug that Daniel Lezcano'spatch addresses. If not (as I suspect) then that clearly needs to befixed.Benjamin can you play around with this and test it with Daniel'spatch?thanks,-serge
https://lkml.org/lkml/2008/5/1/158
CC-MAIN-2018-30
refinedweb
321
63.32
Visual programming and Beans Posted on March 1st, 2001 and Beans So far in this book you’ve’re putting together an application, what you really want is components that do exactly what you need. You’d like to drop these parts into your design like the electronic engineer puts together chips on a circuit board (or even, in the case of Java, onto a Web page). It seems, too, that there should be some way to accelerate this “modular assembly” style of programming. “Visual programming” first became successful – very successful – with Microsoft’s Visual Basic (VB), followed by a second-generation design in Borland’s Delphi (the primary inspiration for the Java Beans pallet what color it is, what text is on it, what database it’s’re probably used to the idea that an object is more than characteristics; it’s also a set of behaviors. At design-time, the behaviors of a visual component are partially represented by events, meaning “Here’s something that can happen to the component.” Ordinarily, you decide what you want to happen when an event occurs by tying code to that event. Here’s the critical part: the application builder tool is able to dynamically interrogate (using reflection) the component to find out which properties and events the component supports. Once it knows what they are, it can display the properties and allow you to change those that’s – certainly the user interface, but often other portions of the application as well. What is a Bean? After the dust settles, then, a component is really just a block of code, typically embodied in a class. The key issue is the ability for the application builder tool to discover the properties and events for that component. To create a VB component, the programmer had to write a fairly complicated piece of code following certain conventions to expose the properties and events. Delphi was a second-generation visual programming tool and the language was actively designed around visual programming so it is much easier to create a visual component. However, Java has brought the creation of visual components to its most advanced state with Java Beans, because a Bean is just a class. You don’t have to write any extra code or use special language extensions in order to make something a Bean. The only thing you need to do, in fact, is slightly modify the way that you name your methods. It is the method name that tells the application builder tool whether this is a property, an event, or just an ordinary method. In the Java documentation, this naming convention is mistakenly termed a “design pattern.” This is unfortunate since design patterns (see Chapter 16) are challenging enough without this sort of confusion. It’s not a design pattern, it’s just a naming convention and it’s fairly simple: - For a property named xxx, you typically create two methods: getXxx( ) and setXxx( ). Note that the first letter after get or set is automatically lowercased to produce the property name. The type produced by the “get” method is the same as the type of the argument to the “set” method. The name of the property and the type for the “get” and “set” are not related. - For a boolean property, you can use the “get” and “set” approach above, but you can also use “is” instead of “get.” - Ordinary methods of the Bean don’t conform to the above naming convention, but they’re public. - For events, you use the “listener” approach. It’s exactly the same as you’ve been seeing: addFooBarListener(FooBarListener) and removeFooBarListener(FooBarListener) to handle a FooBarEvent. Most of the time the built-in events and listeners will satisfy your needs, but you can also create your own events and listener interfaces. Point 1 above answers a question about something you might have noticed in the change from Java 1.0 to Java 1.1: a number of method names have had small, apparently meaningless name changes. Now you can see that most of those changes had to do with adapting to the “get” and “set” naming conventions in order to make that particular component into a Bean. We can use these guidelines to create a simple Bean: //: Frog.java // A trivial Java Bean package frogbean; import java.awt.*; import java.awt.event.*; class Spots {} public class Frog { private int jumps; private Color color; private Spots spots; private boolean jmpr; public int getJumps() { return jumps; } public void setJumps(int newJumps) { jumps = newJumps; } public Color getColor() { return color; } public void setColor(Color newColor) { color = newColor; } public Spots getSpots() { return spots; } public void setSpots(Spots newSpots) { spots = newSpots; } public boolean isJumper() { return jmpr; } public void setJumper(boolean j) { jmpr = j; } public void addActionListener( ActionListener l) { //... } public void removeActionListener( ActionListener l) { // ... } public void addKeyListener(KeyListener l) { // ... } public void removeKeyListener(KeyListener l) { // ... } // An "ordinary" public method: public void croak() { System.out.println("Ribbet!"); } } ///:~ First, you can see that it’s just a class. Usually, all your fields will be private, and accessible only through methods. Following the naming convention, the properties are jumps, color, spots, and jumper (notice the change in case of the first letter in the property name). Although the name of the internal identifier is the same as the name of the property in the first three cases, in jumper you can see that the property name does not force you to use any particular name for internal variables (or, indeed, to even have any internal variable for that property). The events this Bean handles are ActionEvent and KeyEvent, based on the naming of the “add” and “remove” methods for the associated listener. Finally, you can see that the ordinary method croak( ) is still part of the Bean simply because it’s a public method, not because it conforms to any naming scheme. Extracting BeanInfo with the Introspector One of the most critical parts of the Bean scheme occurs when you drag a Bean off a palette and plop it down on a form. The application builder tool must be able to create the Bean (which it can do if there’s a default constructor) and then, without access to the Bean’s source code, extract all the necessary information to create the property sheet and event handlers. Part of the solution is already evident from the end of Chapter 11: Java 1.1 reflection allows all the methods of an anonymous class to be discovered. This is perfect for solving the Bean problem without requiring you to use any extra language keywords like those required in other visual programming languages. In fact, one of the prime reasons that reflection was added to Java 1.1 was to support Beans interface for everyone to use, not only to make Beans simpler to use but also to provide a standard gateway to the creation of more complex Beans. This interface is the Introspector class, and the most important method in this class is the static getBeanInfo( ). You pass a Class handle to this method and it fully interrogates that class and returns a BeanInfo object that you can then dissect to find properties, methods, and events. Usually you won’t care about any of this – you’ll probably get most of your Beans off the shelf from vendors, and you don’t need to know all the magic that’s going on underneath. You’ll simply drag your Beans onto your form, then configure their properties and write handlers for the events you’re interested in. However, it’s an interesting and educational exercise to use the Introspector to display information about a Bean, so here’s a tool that does it (you’ll find it in the frogbean subdirectory): //: BeanDumper.java // A method to introspect a Bean import java.beans.*; import java.lang.reflect.*; public class BeanDumper { public static void dump(Class bean){ BeanInfo bi = null; try { bi = Introspector.getBeanInfo( bean, java.lang.Object.class); } catch(IntrospectionException ex) { System.out.println("Couldn't introspect " + bean.getName()); System.exit(1); } PropertyDescriptor[] properties = bi.getPropertyDescriptors(); for(int i = 0; i < properties.length; i++) { Class p = properties[i].getPropertyType(); System.out.println( "Property type:\n " + p.getName()); System.out.println( "Property name:\n " + properties[i].getName()); Method readMethod = properties[i].getReadMethod(); if(readMethod != null) System.out.println( "Read method:\n " + readMethod.toString()); Method writeMethod = properties[i].getWriteMethod(); if(writeMethod != null) System.out.println( "Write method:\n " + writeMethod.toString()); System.out.println("===================="); } System.out.println("Public methods:"); MethodDescriptor[] methods = bi.getMethodDescriptors(); for(int i = 0; i < methods.length; i++) System.out.println( methods[i].getMethod().toString()); System.out.println("======================"); System.out.println("Event support:"); EventSetDescriptor[] events = bi.getEventSetDescriptors(); for(int i = 0; i < events.length; i++) { System.out.println("Listener type:\n " + events[i].getListenerType().getName()); Method[] lm = events[i].getListenerMethods(); for(int j = 0; j < lm.length; j++) System.out.println( "Listener method:\n " + lm[j].getName()); MethodDescriptor[] lmd = events[i].getListenerMethodDescriptors(); for(int j = 0; j < lmd.length; j++) System.out.println( "Method descriptor:\n " + lmd[j].getMethod().toString()); Method addListener = events[i].getAddListenerMethod(); System.out.println( "Add Listener Method:\n " + addListener.toString()); Method removeListener = events[i].getRemoveListenerMethod(); System.out.println( "Remove Listener Method:\n " + removeListener.toString()); System.out.println("===================="); } } // Dump the class of your choice: public static void main(String[] args) { if(args.length < 1) { System.err.println("usage: \n" + "BeanDumper fully.qualified.class"); System.exit(0); } Class c = null; try { c = Class.forName(args[0]); } catch(ClassNotFoundException ex) { System.err.println( "Couldn't find " + args[0]); System.exit(0); } dump(c); } } ///:~ BeanDumper.dump( ) is the method that does all the work. First it tries to create a BeanInfo object, and if successful calls the methods of BeanInfo that produce information about properties, methods, and events. In Introspector.getBeanInfo( ), you’ll see there is a second argument. This tells the Introspector where to stop in the inheritance hierarchy. Here, it stops before it parses all the methods from Object, since we’re out. If you invoke BeanDumper on the Frog class like this: java BeanDumper doesn’t occur is when the property name begins with more than one capital letter in a row.) And remember that the method names you don’t have any other information except the object (again, a feature of reflection). A more sophisticated Bean This next example is slightly more sophisticated, albeit frivolous. It’s a canvas that draws a little circle around the mouse whenever the mouse is moved. When you press the mouse, the word “Bang!” appears in the middle of the screen, and an action listener is fired. The properties you can change are the size of the circle as well as the color, size, and text of the word that is displayed when you press the mouse. A BangBean also has its own addActionListener( ) and removeActionListener( ) so you can attach your own listener that will be fired when the user clicks on the BangBean. You should be able to recognize the property and event support: //: BangBean.java // A graphical Bean package bangbean; import java.awt.*; import java.awt.event.*; import java.io.*; import java.util.*; public class BangBean extends Canvas implements Serializable { protected int xm, ym; protected int cSize = 20; // Circle size protected String text = "Bang!"; protected int fontSize = 48; protected Color tColor = Color.red; protected ActionListener actionListener; public BangBean() { addMouseListener(new ML()); addMouseMotionListener(new MML()); } public int getCircleSize() { return cSize; } public void setCircleSize(int newSize) { cSize = newSize; } public String getBangText() { return text; } public void setBangText(String newText) { text = newText; } public int getFontSize() { return fontSize; } public void setFontSize(int newSize) { fontSize = newSize; } public Color getTextColor() { return tColor; } public void setTextColor(Color newColor) { tColor = newColor; } public void paint(Graphics g) { g.setColor(Color.black); g.drawOval(xm - cSize/2, ym - cSize/2, cSize, cSize); } // This is a unicast listener, which is // the simplest form of listener management: public void addActionListener ( ActionListener l) throws TooManyListenersException { if(actionListener != null) throw new TooManyListenersException(); actionListener = l; } public void removeActionListener( ActionListener l) { actionListener = null; }(); // Call the listener's method: if(actionListener != null) actionListener.actionPerformed( new ActionEvent(BangBean.this, ActionEvent.ACTION_PERFORMED, null)); } } class MML extends MouseMotionAdapter { public void mouseMoved(MouseEvent e) { xm = e.getX(); ym = e.getY(); repaint(); } } public Dimension getPreferredSize() { return new Dimension(200, 200); } // Testing the BangBean: public static void main(String[] args) { BangBean bb = new BangBean(); try { bb.addActionListener(new BBL()); } catch(TooManyListenersException e) {} Frame aFrame = new Frame("BangBean Test"); aFrame.addWindowListener( new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); aFrame.add(bb, BorderLayout.CENTER); aFrame.setSize(300,300); aFrame.setVisible(true); } // During testing, send action information // to the console: static class BBL implements ActionListener { public void actionPerformed(ActionEvent e) { System.out.println("BangBean action"); } } } ///:~ The first thing you’ll notice is that BangBean implements the Serializable interface. This means that the application builder tool can “pickle” all the information for the BangBean using serialization after the program designer has adjusted the values of the properties. When the Bean is created as part of the running application, these “pickled” properties are restored so that you get exactly what you designed. You can see that all the fields are private, which is what you’ll usually do with a Bean – allow access only through methods, usually using the “property” scheme. When you look at the signature for addActionListener( ), you’ll see that it can throw a TooManyListenersException. This indicates that it is unicast, which means it notifies only one listener when the event occurs. Ordinarily, you’ll use multicast events so that many listeners can be notified of an event. However, that runs into issues that you won’t be ready for until the next chapter, so it will be revisited there (under the heading “Java Beans revisited”). A unicast event sidesteps the problem. When you press the mouse, the text is put in the middle of the BangBean, and if the actionListener field is not null, its actionPerformed( ) is called, creating a new ActionEvent object in the process. Whenever the mouse is moved, its new coordinates are captured and the canvas is repainted (erasing any text that’s on the canvas, as you’ll see). The main( ) is added to allow you to test the program from the command line. When a Bean is in a development environment, main( ) will not be used, but it’s helpful to have a main( ) in each of your Beans because it provides for rapid testing. main( ) creates a Frame and places a BangBean within it, attaching a simple ActionListener to the BangBean to print to the console whenever an ActionEvent occurs. Usually, of course, the application builder tool would create most of the code that uses the Bean. When you run the BangBean through BeanDumper or put the BangBean inside a Bean-enabled development environment, you’ll notice that there are many more properties and actions than are evident from the above code. That’s because BangBean is inherited from Canvas, and Canvas is a Bean, so you’re seeing its properties and events as well. Packaging a Bean Before you can bring a Bean into a Bean-enabled visual builder tool, it must be put into the standard Bean container, which is a JAR (Java ARchive), “It’s a Bean.” Without the third line, the program builder tool will not recognize the class as a Bean. The only tricky part is that you must make sure that you get the proper path in the “Name:” field. If you look back at BangBean.java, you’ll see it’s in package bangbean (and thus in a subdirectory called “bangbean” that’s’ve put the manifest in a file called BangBean.mf. You might wonder “What about all the other classes that were generated when I compiled BangBean.java?” Well, they all ended up inside the bangbean subdirectory, and you’ll see that the last argument for the above jar command line is the bangbean subdirectory. When you give jar the name of a subdirectory, it packages that entire subdirectory into the jar file (including, in this case, the original BangBean.java source-code file – you might not choose to include the source with your own Beans). In addition, if you turn around and unpack the JAR file you’ve just created, you’ll discover that your manifest file isn’t inside, but that jar has created its own manifest file (based partly on yours) called MANIFEST.MF and placed it inside the subdirectory META-INF (for “meta-information”). If you open this manifest file you’ll don’t need to worry about any of this, and if you make changes you can just modify your original manifest file and re-invoke jar to create a new JAR file for your Bean. You can also add other Beans to the JAR file simply by adding their information to your manifest. One thing to notice is that you’ll. Once you have your Bean properly inside a JAR file you can bring it into a Beans-enabled program-builder environment. The way you do this varies from one tool to the next, but Sun provides a freely-available test bed for Java Beans in their “Beans Development Kit” (BDK) called the “beanbox.” (Download the BDK from.) To place your Bean in the beanbox, copy the JAR file into the BDK’s “jars” subdirectory before you start up the beanbox. More complex Bean support You can see how remarkably simple it is to make a Bean. But you aren’t limited to what you’ve seen here. The Java Bean design provides a simple point of entry but can also scale to more complex situations. These situations are beyond the scope of this book but they will be briefly introduced here. You can find more details at. One place where you can add sophistication is with properties. The examples above have shown only single properties, but it’s also possible to represent multiple properties in an array. This is called an indexed property . You simply provide the appropriate methods (again following a naming convention for the method names) and the Introspector recognizes an indexed property so using a PropertyChangeEvent, and they can throw a ProptertyVetoException to prevent the change from happening and to restore the old values. You can also change the way your Bean is represented at design time: - You can provide a custom property sheet for your particular Bean. The ordinary property sheet will be used for all other Beans, but yours is automatically invoked when your Bean is selected. - You can create a custom editor for a particular property, so the ordinary property sheet is used, but when your special property is being edited, your editor will automatically be invoked. - You can provide a custom BeanInfo class for your Bean that produces information that’s different from the default created by the Introspector. - It’s also possible to turn “expert” mode on and off in all FeatureDescriptors to distinguish between basic features and more complicated ones. More to Beans There’s another issue that couldn’t be addressed here. Whenever you create a Bean, you should expect that it will be run in a multithreaded environment. This means that you must understand the issues of threading, which will be introduced in the next chapter. You’ll find a section there called “Java Beans revisited” that will look at the problem and its solution. There are no comments yet. Be the first to comment!
http://www.codeguru.com/java/tij/tij0150.shtml
CC-MAIN-2017-34
refinedweb
3,237
54.73
!). After suffering from coder's block (if that's the programmer's equivalent of writer's block) for weeks, I finally got started the past week on the new object model remapping infrastructure. I spent most of the week going in circles. Writing code, deleting it, writing more code, deleting it again. However, I think I figured it out now. To test that theory I decided to write a blog entry about it. Explaining something is usually a good way to find holes in your own understanding. I'm going to start out by describing the problem. Then I'll look at the existing solution and it's limitations. Finally I'll explain the new approach I came up with. If everything goes well, by end the of the entry you'll be wondering "why did it take him so long to come up with this, it's obvious!". That means I came up with the right solution and I explained it well. Note that I'm ignoring ghost interfaces in this entry. The current model will stay the same. For details, see the previous entries on ghost interfaces. What are the goals? The .NET model (relevant types only): For comparison, here is the Java model that we want to map to the .NET model: There are several possible ways to go (I made up some random names): What's wrong with equivalance? Both J# and the current version of IKVM use equivalence (although many of the details differ and J# doesn't consider Throwable and System.Exception to be equivalent) and it works well. So why change it? There are four advantages to the mixed model: class ThrowableToString { public static void main(String[] args) throws Exception { String s = "cli.System.NotImplementedException"; Object o = Class.forName(s).newInstance(); Throwable t = (Throwable)o; System.out.println(o.toString()); System.out.println(t.toString()); } } Obvously, both lines should be the same. Another (at the moment theoretical) problem is that it is legal for code in the java.lang package to call Object.clone or Object.finalize (both methods are protected, but in Java, protected also implies package access), currently that wouldn't work. Here is the mixed model I ended up with: I called it mixed because it combines some features of equivalence and extension. For example, references of type java.lang.Object are still compiled as System.Object (like in the equivalence model), but the non-remapped Java classes extend java.lang.Object (like in the extension model). java.lang.Object will contain all methods of the real java.lang.Object and in addition to those also a bunch of static helper methods that allow you to call java.lang.Object instance methods on System.Object references. The helper methods will test if the passed object is either a java.lang.Object or a java.lang.Throwable (for virtual methods) and if so, it will downcast and call the appropriate method on those classes, if not, it will perform an alternative action (that was specified in map.xml when this classpath.dll was compiled). Object.finalize requires some special treatment since we don't want java.lang.Object.finalize to override System.Object.Finalize because that would cause all Java objects to end up on the finalizer queue and that's very inefficient. So the compiler will contain a rule to override System.Object.Finalize when a Java class overrides java.lang.Object.finalize. I glossed over a lot of details, but those will have to wait for next time. FOSDEM 2004 Finally a short note on FOSDEM (Free and Open Source Software Developer's Meeting). Last weekend I visisted FOSDEM in Brussels. I enjoyed seeing Dalibor, Chris, Mark, Sascha and Patrik again and I also enjoyed meeting gjc hackers Tom Tromey and Andrew Haley for the first time. Mark wrote up a nice report about it. If you haven't read it yet, go read it now. All in all a very good and productive get-together. Stuart pointed out the F.A.Q. was out of date, so I updated it a little bit. He also asked: Speaking of which, I noticed while perusing the FAQ that the JIT compiler is included in IK.VM.NET.dll which means it's required for running even statically-compiled code. For apps that don't use clever classloading tricks, the JIT isn't needed at all when everything's been statically compiled. Would it be possible to separate the JIT out into a different DLL to reduce the necessary dependencies for a statically-compiled Java app?Sure, the 275K of IK.VM.NET.dll is miniscule compared to the 3Mb of classpath.dll, but it's the principle of the thing ;) This is definitely something I want to do. In fact, I would also like to have the option to only include parts of the Classpath code when statically compiling a library. So instead of having a dependency on classpath.dll, you'd suck in only the required classes. Jikes I upgraded to Jikes 1.19 that was released recently. It didn't like the netexp generated stub jars (which is good, because it turns out they were invalid), so I fixed netexp to be better behaved in what it emits. Jikes didn't like the fact that the inner interfaces that I created had the ACC_STATIC modifier set at the class level, rightly so, but the error message it came up with was very confusing. Along the way I also discovered that it is illegal for constructors to be marked native (kind of odd, I don't really see why you couldn't have a native constructor). So I made them non-native and have a simple method body that only contains a return. That isn't correct either (from the verifier's point of view) and I guess I should change it to throw an UnsatifiedLinkError. That would also be more clear in case anyone tries to run the stubs on a real JVM. Jikes 1.19 has a bunch of new pedantic warnings (some enabled by default). I don't think this is a helpful feature at the moment. Warnings are only useful if you can make sure you don't get any (by keeping your code clean), but when you already have an existing codebase, this is very hard and in the case of Classpath, where you have to implement a spec, you often don't have the option to do the right thing. So I would like to have to option to have lint like comment switches to disable specific warnings in a specific part of the code. Bytecode Bug I also did some work to reduce the number of failing Mauve testcases on IKVM and that caused me to discover that the bit shifting instructions were broken (oops!). On the JVM the shift count is always masked by the number of bits (-1) in the integral type you're shifting. So for example: int i = 3; System.out.println(i << 33); This prints out 6 ( 3 << (33 & 31)). On the CLI, if the shift count is greater than the number of bits in the integral type, the result is undefined. I had to fix the bytecode compiler to explicitly do the mask operation. Serialization Brian J. Sletten reported on the mailing list that deserialization was extremely slow. That was caused by the fact that reflection didn't cache the member information for statically compiled Java classes or .NET types. I fixed that and after that I also made some improvements to GNU Classpath's ObjectInputStream to speed it up even more. It's still marginally slower than the Sun JRE, but the difference shouldn't cause any problems. Snapshot I made a new snapshot. Here's what's new: I didn't get around yet to removing the "virtual helpers" and introducing base classes for non-final remapped types (java.lang.Object and java.lang.Throwable). Most of this is clean up and restructuring to facilitate the next major change, removing the "virtual helpers" and introducing base classes for non-final remapped types (java.lang.Object and java.lang.Throwable). I switched from blogX to dasBlog. Mainly because Chris is no longer maintaining blogX, but also because I wanted some new functionality. Hopefully the transition will be smooth, but if something is not working please let me know. namespace System { class Object { } class ValueType { } struct Int32 { } class String { } class A : string { public static void Main() { string s = new A(); } } } If you compile this and then run peverify on the resulting executable, it will complain about every single type. Here is what's going on: An obvious explanation for this (broken) behavior of the C# compiler is that Microsoft use the C# compiler to build mscorlib.dll. When compiling mscorlib.dll, the above behavior makes perfect sense. Of course, when you're not compiling mscorlib.dll, it doesn't make much sense and, in fact, it violates the language specification. For example, section 4.2.3 says: The keyword string is simply an alias for the predefined class System.String. string System.String "predefined" being the important word here. I guess it would be relatively easy to fix the compiler, but I think that there is actually an underlying problem: The C# language doesn't have support for assembly identities. Besides the above corner cases, this also causes problems in more real world situations.
http://weblog.ikvm.net/default.aspx?date=2004-03-10
CC-MAIN-2019-39
refinedweb
1,578
66.23
10 May 2011 11:56 [Source: ICIS news] SINGAPORE (ICIS)--Polyethylene (PE) and polypropylene (PP) prices in ?xml:namespace> Prices of PP raffia grade, PP film grade, high density PE (HDPE) film grade and linear low density PE (LLDPE) film grade for Gulf Cooperation Council (GCC) were quoted on Tuesday at Pakistan Rupee (PRs) 85.50/lbs ($1.01/lbs) DEL (delivered), PRs90.25/lbs DEL and PRs72.25/lbs DEL, and PRs72.20/lbs DEL, respectively, they said. “Poor crude prices” pulled prices down, said a local trader said, adding that “PE demand is already pretty bad”. Converters are taking in less imported PE and PP, preferring to consume current inventories as they expect prices to fall, said an industry source. Meanwhile, low density PE (LDPE) prices were stable at PRs91-95/lbs US crude futures had a record sell-off last week, plunging by more than $17/bbl on worries over demand. At 18:00 hours “Strong rumours of an Indian major dropping PP prices locally also dampened the sentiment here,” said another trader. The price outlook remained bleak as converters are more cautious on fresh purchases. “It is safer for us to remain at the sidelines,” said a converter. ($1 = PRs84
http://www.icis.com/Articles/2011/05/10/9458369/pakistan-pe-pp-prices-slump-5-on-weak-crude.html
CC-MAIN-2015-18
refinedweb
204
72.66
On 05/15/2018 10:36 AM, Konstantin Khlebnikov wrote: On 15.05.2018 20:19, Nagarathnam Muthusamy wrote: On 04/24/2018 10:36 PM, Konstantin Khlebnikov wrote: On 23.04.2018 20:37, Nagarathnam Muthusamy wrote: On 04/05/2018 12:02 AM, Konstantin Khlebnikov wrote: On 05.04.2018 01:29, Eric W. Biederman wrote: Nagarathnam Muthusamy <nagarathnam.muthusamy@xxxxxxxxxx> writes: On 04/04/2018 12:11 PM, Konstantin Khlebnikov wrote: Each process have different pids, one for each pid namespace it belongs. When interaction happens within single pid-ns translation isn't required. More complicated scenarios needs special handling. For example: - reading pid-files or logs written inside container with pid namespace - attaching with ptrace to tasks from different pid namespace - passing pids across pid namespaces in any kind of API Currently there are several interfaces that could be used here: Pid namespaces are identified by inode number of /proc/[pid]/ns/pid. Using the inode number in interfaces is not an option. Especially not withou referencing the device number for the filesystem as well. This is supposed to be single-instance fs, not part of proc but referenced but its magic "symlinks". Device numbers are not mentioned in "man namespaces". Pids for nested Pid namespaces are shown in file /proc/[pid]/status. In some cases conversion pid -> vpid could be easily done using this information, but backward translation requires scanning all tasks. Unix socket automatically translates pid attached to SCM_CREDENTIALS. This requires CAP_SYS_ADMIN for sending arbitrary pids and entering into pid namespace, this expose process and could be insecure. This patch adds new syscall for converting pids between pid namespaces: pid_t translate_pid(pid_t pid, int source_type, int source,  int target_type, int target); @source_type and @target_type defines type of following arguments: TRANSLATE_PID_CURRENT_PIDNS - current pid namespace, argument is unused TRANSLATE_PID_TASK_PIDNS - task pid-ns, argument is task pid I believe using pid to represent the namespace has been already discussed in V1 of this patch in after which we moved on to fd based version of this interface. Or in short why is the case of pids important? You Konstantin you almost said why they were important in your message saying you were going to send this one. However you don't explain in your description why you want to identify pid namespaces by pid. Open of /proc/[pid]/ns/pid requires same permissions as ptrace, pid based variant doesn't have such restrictions. Can you provide more information on usecase requiring PID translation but not used for tracing related purposes? Any introspection for [nested] containers. It's easier to work when you have all information when you don't have any. For example our CMS allows to start nested sub-container (or even deeper) by request from any container and have to tell back which pid task is have. And it could translate any pid inside into accessible by client and vice versa. I still dont get the exact reason why PID based approach to identify the namespace during pid translation process is absolutely required compared to fd based approach. As I told open(/proc/%d/ns/pid) have security restrictions - same uid/CAP_SYS_PTRACE/whatever Pidns-fd holds pid-namespace and without restrictions could be abused. Pid based API is racy but always available without any restrictions. I get that Pid based API is available without any restrictions but do we have any existing usecase which requires Pid based API but cannot use Pidns-fd based API? Most of the usecases discussed in this thread deals with introspection of a process by another process and I believe that security requirement for opening (/proc/%d/ns/pid) is required for all such usecases. In other words, Why would a process which does not belong to same uid of the process observed or have CAP_SYS_PTRACE be allowed to translate PID? Thanks, Nagarathnam. From your version of TranslatePid in I see that you are going through the trouble of forking a process and sending SMC_CREDENTIALS for pid translation. Even your existing API could be extremely simplified if translate_pid based on file descriptors make it to the gate and I believe from the last discussion it was almost there On a side note, can we have the types TRANSLATE_PID_CURRENT_PIDNS and TRANSLATE_PID_FD_PIDNS integrated first and then possibly extend the interface to include TRANSLATE_PID_TASK_PIDNS in future? I don't see reason for this separation. Pids and pid namespaces are part of the API for a long time. If you are talking about the translate_pid API proposed, I believe the V4 proposed under had only fd based API before a mix of PID and fd based is proposed in V5. Again, I was just wondering if we can get the FD based approach in first and then extend the API to include PID based approach later as fd based approach could provide a lot of immediate benefits? Thanks, Nagarathnam. Thanks, Nagarathnam. Most pid-based syscalls are racy in some cases but they are here for decades and everybody knowns how to deal with it. So, I've decided to merge both worlds in one interface which clearly tells what to expect.
http://lkml.iu.edu/hypermail/linux/kernel/1805.1/06637.html
CC-MAIN-2022-05
refinedweb
859
60.14
In the previous article we installed Python and set up our virtual environment. We then used pandas-datareader directly in the python terminal in order to import some equities OHLC data and plot five years of the adjusted close price. This was accomplished in a few lines of code. However, once we closed the terminal we lost all the data. In this tutorial we will be setting up a prototyping environment using Jupyter notebook to analyse our data in a reproducible manner. The libraries we are using for this tutorial are: - Matplotlib v3.5 - Pandas v1.4 - pandas-datareader v0.10 - jupyter v1.0 - plotly v5.6 - ipykernel v6.4 For the purpose of this tutorial we created a Python environment using Python v3.8 Anaconda comes with Jupyter notebook installed and ready to go. Once you are inside the base environment you can simply type jupyter notebook from within any directory and a window will open in your browser showing you all the files and folders located in that directory. From this page you can create a notebook by clicking "new". This will run from your base anaconda environment and therefore have access to all the python packages installed in that environment. However, you will recall in the previous article that we created a virtual environment into which we installed Matplotlib, Pandas and pandas-datareader. If you were to try and import pandas-datareader into a notebook from the base anaconda environment you would get the following error ModuleNotFoundError. This is because, by design, the base anaconda environment doesn't have access to the libraries installed in the virtual environment. Setting up Jupyter Notebook with ipykernel One of the simplest ways to access the libraries in your virtual environment through Jupyter notebook is to use ipykernel. This package is installed directly into your virtual environment and will enable you to choose the kernel approriate to your virtual environment from within the Jupyter notebook interface. Let's have a look at one of the easiest ways to accomplish this. We start by creating a virtual environment, activating it and installing our required libraries. If you have been following along with this article series you could simply activate the py3.8 environment we created last time. If not you will need to create a virtual environment an install Matplotlib, Pandas and pandas-datareader. (base)$ conda activate py3.8 Once inside the virtual environment you need to install ipykernel (py3.8)$ conda install ipykernel Now all you need to do is register the kernel specification with Jupyter. You can name your kernel anything you like after the --name= parameter. (py3.8)$ ipython kernel install --user --name=py3.8 Now that we have a working kernel for our virtual environment all we need to do is open Jupyter notebook from our anaconda base environment. Then we can create a new notebook specifying the kernel you have just created. Now when you call import pandas_datareader.data as web you won't get an error. Open a new terminal window in the base anaconda environment (Hint: Check that (base) is displayed in the terminal window before the user information.) Now type jupyter notebook to open jupyter in your browser. Once open click new and select your new kernel from the dropdown. Importing and plotting data We will begin by recreating our quick analysis from the previously article, except this time all our work will be incorported into a Jupyter notebook which we can access repeatedly and build upon over time. A full overview of the Jupyter commands is outside the scope of this article as we will mainly be focusing on creating financial visualizations. There are some great tutorials available to discover the full potential of the software dataquest and datacamp are among them. Briefly, the most frequently used commands are: - Enter edit mode with enter - Enter command mode with esc - Once in command mode (after pressing esc) Mwill transform a cell to Markdown so you can add text Ywill transform a cell to code Aand Bwill add a new cell above or below - Move up and down cells with Upand Downarrow keys DDwill delete the cell Hwill open the keyboard shortcuts menu We start by importing our libraries: type the following into the first cell and press shift+enter to run the cell and create a new one underneath import matplotlib.pyplot as plt import pandas as pd import pandas_datareader.data as web from datetime import datetime as dt We now define our date range and import our data: start = dt(2016, 11,1) end = dt(2021, 11, 1) aapl = web.DataReader("AAPl", "yahoo", start, end) To view the first few lines of our DataFrame type aapl.head(). Now we simply plot as before by typing aapl.plot(y="Adj Close") into a new cell and pressing shift+enter. Plotting Candlesticks in Jupyter Candlestick charts are thought to date back to Japan in the late 1800s. They are composed of the real body, the area between the open and close price and the wicks or shadows, the excursions above and below the real body that illustrate the highest and lowest prices of an asset in the time period represented. If the asset closes at a price higher than the opening price the body is usually unfilled or hollow. If the asset closes lower than it opened the body of the candlestick may be filled or solid. The colour of the candlestick is representaive of the price movement for the period of time represented. Black or red candlesticks indicate that the closing price is lower than the previous time point and a green or white candle means that the closing price is higher than the previous time point. In practice the color and fill of the candlestick can be designated by the user. There are several different ways to create candlestick plots in Python. You can create your own script in Matplotlib using boxplots but there are also a number of different open source libraries such as mplfinance, bqplot, Bokeh and Plotly. There is an excellent article offering an overview of each here. In this article we will be using Plotly to create our candlestick plots. The Plotly Python graphing library offers over 40 different chart types for statistical, financial, scientific and 3-dimensional uses. Early versions of this charting library required users to sign up for access with an API key each time they created a plot. There was also the option to publish the images to an online repository of charts by choosing between online and offline modes of operation. Since version four of the software it is no longer necessary to have an account or internet connection and no payment is required to use Plotly.py. The latest version of the software at time of writing is version 5.6.0. In order to use Plotly we will need to install it into our virtual environment. In the terminal inside the same virtual environment that you used to create your kernel install Plotly through conda. (py3.8)$ conda install -c plotly plotly Once the install is complete you need to restart the kernel in your Jupyter notebook. In the menu select Kernel then Restart and Run all from the dropdown. This will re run all the cells in your notebook and refresh the Kernel to include access to Plotly. To simplify the images we will begin by looking at a month of OHLC data. In a new cell in your notebook enter the following code to create a new DataFrame "goog" with a month of OHLC Google data from Yahoo. start = dt(2021, 10,1) end = dt(2021, 11, 1) goog = web.DataReader("GOOG", "yahoo", start, end) You can check your DataFrame by typing goog.head(). We now need to import Plotly into our notebook. Best practice is to place all imports at the top of your code, in alphabetical order. This ensures that when anyone is reviewing your code they can see what libraries have been added and when a method is called from any of the libraries it is easy to determine where that method originated. You can also alias your imports as we have here. This means that whenever you want to use a method from the libraries you don't need to type plotly.graph_objects you can simply type the alias, in this case, "go". Add import plotly.graph_objects as go to the first cell in your notebook underneath the pandas_datareader import. Plotly comes with an interactive candlestick figure prebuilt. All you need to do is define your data and call the figure. The Candlestick method requires you to specify your x axis data, the open, high, low and close price in order to generate the figure. We will also add a name to our plot so that if we choose to add additional trendlines to our figure this name will appear in the legend. # define the data candlestick = go.Candlestick( x=goog.index, open=goog['Open'], high=goog['High'], low=goog['Low'], close=goog['Close'], name="OHLC" ) # create the figure fig = go.Figure(data=[candlestick]) # plot the figure fig.show() The figure generated is interactive. You can hover over any of the candlesticks and see the data that created it. There is also a range slider at the bottom of the figure allowing you to zoom into a specific data range. This can be disabled by adding the following line above fig.show(): fig.update_layout(xaxis_rangeslider_visible=False) Adding Trendlines to Plotly Candlestick Charts The Figure() method takes a keyword data which accepts a list. This allows us to add trendlines to our graph by defining additional variables containing the Scatter method. Let's see this in action by overlaying a five day moving average to our candlestick plot. To begin we first need to calculate the moving average, this can be done by chaining the Pandas rolling() and mean() methods. In a new cell we will add a Series to our "goog" DataFrame containing the value of the five day moving average at each time point. goog['MA5'] = goog.Close.rolling(5).mean() The commmand goog.head() shows the new column with the value appearing in the fifth row as expected. Now we can create our scatter object by defining our x and y data points. We can create a trendline rather than markers by using the keyword line and control the color and width of the line. scatter = go.Scatter( x=goog.index, y=goog.MA5, line=dict(color='black', width=1), name="5 day MA" ) Once we have defined our data we can add it to our plot by adding the scatter variable to the list in the data keyword. fig = go.Figure(data=[candlestick, scatter]) fig.show() Adding Volume to Plotly Candlestick Charts A common modification to a candlestick chart is the addition of volume. This can be accomplished in two ways; as an overlay on the existing chart with a secondary axis or underneath the original candlestick chart. Both options require us to make use of the Plotly make_subpplots class. The first thing we need to do is to import the class at the top of our notebook. To keep with best practice, underneath the plotly.graph_objects import add the following line: from plotly.subplots import make_subplots. Now we can create a figure with secondary axis for the volume. First we create the secondary axis then we add a trace for the candlestick OHLC data and a scatter for the five day moving average. This can be done by invoking the candlestick and scatter variables we created earlier. Then we add a trace for a bar chart containing the volume data. We can control the transparency of the bar chart with the opacity keyword and the colour using the marker_colour keyword. Finally we turn off the gridlines for the secondary axis to avoid confusion in the final chart and display the figure. # create a figure with a secondary axis fig = make_subplots(specs=[[{"secondary_y": True}]]) fig.add_trace( candlestick, secondary_y=False ) fig.add_trace( scatter, secondary_y=False ) fig.add_trace( go.Bar(x=goog.index, y=goog['Volume'], opacity=0.1, marker_color='blue', name="volume"), secondary_y=True ) fig.layout.yaxis2.showgrid=False fig.show() If you would prefer to keep the volume as a seperate chart this can also be very easily created using Plotly. All we need to do is add an additional plot in the call to make_subplots and define the position of each trace within the subplots. We will begin by defining our subplot using rows and columns, in this case we want two figures on top of each other so we have one column and two rows. We share the x axis as the time series dates are the common factor between the two plots. We give an additional vertical space of 0.1 to accommodate our subplot titles, which we then define. Finally we specify how high we would like our rows to be in relation to each other. The total value of the row heights will be normalised so that they sum to 1. After defining our subplots we simply add each trace as we did previously specifying the position of each trace within the subplots. We then disable the range slider to avoid confusion and display the final figure. # Create subplots fig = make_subplots( rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=('OHLC', 'Volume'), row_heights=[0.7, 0.3] ) # Plot candlestick on 1st row fig.add_trace( candlestick, row=1, col=1 ) # Plot scatter on 1st row fig.add_trace( scatter, row=1, col=1 ) # Plot Bar trace for volumes on 2nd row without legend fig.add_trace( go.Bar( x=goog.index, y=goog['Volume'], opacity=0.1, marker_color='blue', showlegend=False ), row=2, col=1 ) # Do not show OHLC's rangeslider plot fig.update(layout_xaxis_rangeslider_visible=False) fig.show() Next steps Following on from our article series on setting up a trading environment with Python, this quick tutorial has provided a example of working with Jupyter notebooks and producing candlesticks charts. In later articles we will continue looking at data providers and how to access their data through Jupyter notebooks to create a research prototyping environment.
https://quantstart.com/articles/creating-an-algorithmic-trading-prototyping-environment-with-jupyter-notebooks-and-plotly/
CC-MAIN-2022-33
refinedweb
2,365
63.8
This post originally appeared on my personal blog. As the pandemic rages on, I need to distract myself. What better way than by diving into some Ruby on Rails? I focused entirely on front-end early in my career, but have come to enjoy Ruby and Ruby on Rails for the back-end. Ruby's clear syntax coupled with Rails' convention over configuration makes it a joy to code with. I still have much to learn, but all this makes the learning fun! Imagine, fun learning and I don't even need a magic school bus. Rails itself has many parts to understand, so I wanted to focus on one piece at a time. So this is about the part of a Rails app that's been the toughest for me to understand, but also one of the most important - the Rails model! This is a high-level overview of what models Models are, how they work, and their basic functions. I couldn't cover everything, so I chose the most important details based on my own experiences using Rails for the last three years. I hope it helps others build a solid foundation of understanding that lets them more easily learn the nitty-gritty details of more complex models. With that, let's begin! What is a Rails Model? A Model is part of the classic Model-View-Controller design pattern and represents the data being stored on the database. Databases will hold all kinds of different records with their own info and rules to follow. In Rails, each record type has a model to keep all that info and logic tidy and organized. Let's say you were making a library app that lets people check out books. One record type you'd want to keep track of is, obviously, the books. So your book model would help you manage important things like: - What info are we storing in the database about each book? It'd likely be the title, author, publication date, ISBN, page length, and others. Some of this data may need to be validated, like making sure it's present or is a certain type of data. - What other record types are connected to the book? We may have separate database records (and therefore models) to tracking different book genres or authors. Or we'll have different sections of the library that contain many books. The model will tell us one book record will contain one or more genres and belongs to one or more sections. - Does this model have any special methods? Maybe there's a method that calculates if the book is checked out of not. Or if its genres are appropriate for younger readers. Info like this relies on database info, but can't be stored in one since they some extra business logic. - Are there any extra tasks linked to this model? When a book is available, the model may want to send a notification to whoever wants to read it next. The model won't have the notification code itself but will say when it's triggered and how. This post will go into more specific ways models do all these things, but not too much detail. I hope to build a basic understanding of models for readers so further research goes smoother. ActiveRecord, Models, and Migrations Before I go on, there are a few terms I should clarify. Models are more of an abstract term. In a general context, models refer to the parts of a program that structure and store data in a database. Models outline the design and functionality of the data but aren't data themselves. Each bit of data is a record, and each record follows the rules set up by the model. Imagine a model as the blueprint for making a basic building. Each actual building you build based on that blueprint is a record. Each building you make will be different in some way, like the people and businesses inside of it. But each one follows the same rules from the blueprint, such as how the foundation is set up and the approaches for adding and removing floors. Ruby on Rails is an application framework with setup for everything already in mind, including their models. So the specific rules around Rails models won't always apply to models you see in other frameworks or software. Most of the content in this article is specific to Rails models, although the principles behind them may carry over to other model setups. The main way Rails carries out models is using a gem called ActiveRecord. All model files build off the code already built into this gem. It has many built-in rules based on Rails conventions that developers must follow, for things like names and database fields. But following these conventions makes it easier to do what's often needed, like associations and validations. So it's important to know Rails' emphasis on convention over configuration. Lastly, models need help from another part of the program to add data to databases. Migrations are what set up the database to properly store the expected data. Whenever you add or change a model, you need migrations to prepare the database for that new or changed data. It's like carving a circle-shaped hole in a plank before you can start putting a circle-shaped block inside it. Explaining migrations is a whole other blog post, but the Rails guides explain them well. But know migrations and models go hand-in-hand when you start writing them yourself. It's also important to know what exactly goes into a well-written model. The Logic a Model Holds If you search for articles on good Rails application architecture, you'll likely find the "fat model, skinny controller" rule. It argues virtually all logic not related to network or router responses, but still related to that model's data, should be in the model. Let's say our library app had a page to show a single book. Part of that page could show other books by the same author. To make this section, we need to query the database for books by the same author. This logic could technically go in either the controller or the model. When I first started with Rails, I put lots of logic like this in the controller. But this isn't what a controller is supposed to do. Controllers find and update data based on user inputs. Finding books with the same author has nothing to do with user input, since it's based on data records. So it should be moved to the model instead. Later in this article, I'll give an example of a model method like this. Just remember that any model logic should directly relate to the data. Some functionality is only indirectly related to it, like sending notifications when a book is ready. Models will call functions like this when the data requires it, but the code for the notification itself is elsewhere. That's all the context taken care of. Let's start writing an actual model! Let's Start a Simple Model If we were writing a model for books in our supposed library app, it'd start like this. # app/models/book.rb class Book < ApplicationRecord end This is the simplest possible starting point. We can see some of the terms from before already at work. - The file is in the modelsdirectory. Knowing what models are based on the Model-View-Controller pattern, we know right away these files are about managing data records. - The file is named book.rb, but the class name is Bookwhich is capitalized. This follows a basic Ruby and Ruby on Rails naming convention - the class name is the file name but capitalized and camel case. - The Bookclass inherits from ApplicationRecordsubclass, which is from the ActiveRecordgem. So we know all the rules and functionality being pulled in to define our models. The Model's Database Schema For me, one of the most important yet easy to overlook parts of a model is the database schema. These are the actual values being stored in the database, making them vital to using your model correctly. However, as of this writing, creating a model in rails doesn't automatically document the database schema. Our Book model so far works and can use its database values, but they're not documented anywhere. That means anyone else reading our code, or ourselves when we inevitably forget, will have a real hard time figuring it out. So before adding anything else to our model, I recommend adding some comments at the top with the database schema. Or even better, use a gem like annotate_models to generate one automatically from your database migration. The result would give you something like this. # app/models/book.rb # == Schema Info # # Table name: books # # id :integer not null, primary key # created_at :datetime not null # updated_at :datetime not null # title :string not null # isbn :integer not null # available :boolean default(TRUE), not null # print_version :boolean default(TRUE), not null # ebook_version :boolean default(FALSE), not null # shelf_position :string # class Book < ApplicationRecord end Now we can see the data given to each model by the database, as well as any validations or defaults built into the schema. For example, we could call something like book.ebook_version and know it will have a value that defaults to false for new entries. Having these values is great, but the data being stored is on the simpler side since it's limited to strings and booleans and the like. Now we can start defining the more complex logic. Associations Associations are a big part of what lets all the different models work together. In this library app, they're how users can check out different books and have late fees. Or how different libraries can have different books. Associations are how we can make data easy to navigate for ourselves and users. Let's look at an example association in our Book model. Let's say we wanted to add a model for authors, and we needed to create a relationship between authors and their books. We know each book only has one author, but each author has potentially many books. So each model needs to define their relationship to the other. Rails make this easy to do on both sides, and in one line each. class Book < ApplicationRecord belongs_to :author end class Author < ApplicationRecord has_many :books end It's that easy. Now for each record, we can run book.author to see the book's author, and author.books for an array of all their books. The records are connected but updated separately. So if you change an author, you'll still see that change in the data when viewing it through its book record. Let's look at another relationship. Our library could have different custom library categories for books such as "featured," "archived," "mature," and others. Each book could potentially have many custom categories at once, and each custom category will have many books. So we'd have has_many being used on both sides. Calling each association on either type of record would give us an array of the other records. class Book < ApplicationRecord belongs_to :author has_many :custom_categories end class CustomCategory < ApplicationRecord has_many :books end Associations can get more complex along with the data. You can define associations through other models. Some may need more database schemas to link them together. There are also polymorphic associations, which I still don't quite understand myself. But the Rails guides explain the details of associations better too. Model Methods If you have a basic understanding of Ruby, you'll have noticed each model is still a class. Classes are built on their methods, yet so far our book model has none. So let's add some! Model methods work off data in the database. If there's any common logic that only need to pull and organize some related data, it should go in the model. Most models will have plenty of these, especially if it's being moved to the model from controllers or views. Methods to Get More Data We may need to know if each book is classified as "new" if it was bought in the last month. This can be added as a method called is_new? with some built-in Rails magic. def is_new? created_by > Date.now - 1.month end Now any book record can call book.is_new?. It will check if that book record's created_by date falls within the last month. I ended the method name with a question mark to show it returns a simple boolean without changing anything. The example of finding books with the same author is perfect for another method. We can query all the books in the database that have the same author while excluding our own. def by_same_author Books .where(author: author) .where.not(id: id) end Methods to Trigger Other Services Suppose we had an entirely separate service that alerts users when their book is available. The notification code should be somewhere else, but we can reference it in a method here. def tell_user_is_available(user) UserNotification(user, "#{book.title} is available to check out!") end This method references code in another folder, like app/notifications/user_notification.rb. I don't know or care how it would alert the user, and neither does the model. It simple passes in the needed info and that class does the work. But our book records can now alert users on their own without coupling the code too tightly. Methods not linked to Specific Records You may have seen all these examples are for specific book records. You need to know exactly what book you're talking about before seeing what else the author has written. But what if we wanted info related to all the books, not just specific ones? That's where class methods come in handy. Say we wanted a method to give us all the available books. The query itself is pretty simple, but we'd need to write it slightly differently. def self.available Book.where(available: true) end Adding self identifies this as a class method, meaning it can be called on the Book class and not just specific book records. So we can call Book.available without needing to get a specific record first. Scopes That last method for finding available books works, but can be fine-tuned. Rails already has a tool for limiting queries in a cleaner way less prone to bugs. That tool is, fittingly enough, scope. Any time you run a where query in a class method, it's a good idea to use scope instead. Adding this scope to our Book class is as simple as this: scope :available -> { where(available: true) } The result is the same and lets us call Book.available to find available books. But we can get fancier with it too. What if each book had an array of related genres, and we wanted available books with at least one genre in common? has_many :genres scope :available_related_books -> (genres) { where(available: true) .select { |book| common_genres = book.genres & genres common_genres.length > 0 } } After taking a group of genre objects as an argument, it only picks available books that have at least one genre in common. We'd pass the needed argument when calling it, like with Book.available_related_books([array, of, genres]). But you'll notice we're duplicating code from the available scope. They both even have the word "available" in their names, which is a red flag. A rule of thumb with any code is each method or function doing one thing and doing it well. So let's split these scopes apart. scope :available -> { where(available: true) } scope :same_genre -> (genres) { joins(:genres).where(genres: {id: genres.ids}) } Now we can use these separately or together if we want to. To get all available books of the same genre, we would use Book.same_genre([array, of, genres]).available. For some extra fun, let's get fancier. Say each book's web page has an "also consider checking out" section. It has a list of five random, available books similar to the featured one. We can use these scopes in a method to give us exactly what we need. scope :related_items -> (id, num) { where.not(id) .available .order("random()") .limit(num) } end This method makes use of our custom methods, our scopes, and some basic Ruby to keep all this logic readable, separated, and in a convenient place. Now we can call Book.related_items(id, 5) for a random list of five related books. Validations A final but no less important part of models are validations. They came up when defining database schemas, such as making sure important fields aren't empty. But this only works for simpler validations and often won't be enough. If our library app allowed books with duplicated ISBNs or no genres, things would fall apart fast. That's why good validation keeping out bad data is essential anywhere. So ActiveRecord makes it easy to add simple or complex validations. Validations can get quite complex depending on the data, and you could use a method like validates_with to separate your validations logic into another class. I'm going to stick with simpler ones here. Let's go back to our two examples, as ActiveRecord has some built-in validation helpers for these cases. For our ISBNs, we can use the uniqueness helper. validates :isbn, uniqueness: true For ensuring our books have good genres, we can check that each one has genres passed in when they're created. We can also run an extra validation check on each associated genre object to be safe. has_many :genres, presence: true validates_associated :genres Let's say we try to make a book record that uses a duplicated ISBN. If we use book = Book.create with that invalid data, three things will happen: - The data won't be persisted into the database, sparing our app future headaches. - We could call book.valid?and it would return false, letting us confirm it's not a valid book. - We could see the specific error messages with book.errors.messages. In this case, we'd get something like {isbn:["must be unique"]}. These objects can tell us, and the user, what went wrong so we can fix it. You'll often see messages from this method appear over a form in red after it fails to submit. There are many more helpers and nuances with validations, but that's too much for this post. And to be honest, I still don't know all the details myself. Again, the Ruby on Rails guides are best for taking a deeper validations dive. Wrapping Up Experienced developers will likely see other parts of models I excluded and maybe shouldn't have, like callbacks or accessors. Those are fair points, but I chose these topics based on my own experiences of what's more essential to understanding models That and this article was already long enough. I highly recommend the Rails guides for even more details on what models can do in each of these areas. Because believe me, my examples only scratched the surface. If this article helped you understand the essence of a model's role and functionality, then I encourage you to go forth and multiply your knowledge of them! Go forth and write Rails for the world, my subjects! Cover Image courtesy of SafeBooru.org Posted on by: Max Antonucci Journalist turned full-time coder, part-time ponderer. Read Next Ever wondered what happens when you type in a URL in an address bar in a browser? Wassim Chegham - Why I Stopped Using Redux Gabriel Abud - Minimizing Keystrokes, Maximizing Productivity - Bash Scripting Akshansh Bhanjana - Discussion Hi Max, I have some tips for you. In your by_same_author example, you run reject against a collection of results. This is not very efficient because you're running a query and then enumerating through would could be a large collection. Instead, you could have Postgres do it all for you: You also pluralized Book to Books, which I suspect isn't what you want. Your add/remove ebook status example is technically correct - it runs and does what it says - but is not idiomatic Ruby in the sense that in the wild, we don't really create methods to update the function and mark them with bangs, which connotes "danger". If we did, we'd end up with extremely bloated model classes. Instead, it's far more common to just set attributes until you're ready to commit by calling save and handle any validation errors. By wrapping individual purpose-defined updates in methods, you're actually just creating a maintenance headache and making it harder to see validation errors or wrap these updates in a transaction. Even if validations and transactions don't matter to you, consider that there's an even-shorter mechanism to do what you're doing: There are lots of good reasons to use class methods in a model, but running a where operation is actually not one of them! I know that you followed up this example with a scoped version of the same, but I would argue that you shouldn't even suggest the class method version of this where, because it is so much less useful and more likely to cause bugs in the future. Your same_genre scope is also taking a result from Postgres and running it through a Ruby enumerator when you could just get Postgres to run it all as a single query, and it will be much faster, cleaner and more composable in the future. In your example, you could do: and instead of calling Ruby sample: Finally, you mentioned keeping validations in a separate class, and I just wanted to say that in practice this is not super common because if your validations are that sophisticated, you might have other refactoring to do, first. At any rate, you might want to check out my Optimism gem for a powerful way to display validation errors in your forms via websockets. Thank you feedback! I did some edits to the post that worked in the points you made. Right on. :) I guess my question is... a few of the things I suggested were all fixing the same fundamental concern, which is running a SQL query (efficient!) and then taking the result and iterating over it in Ruby (not efficient!). Do you feel confident that you've internalized how to properly kick ass moving forward? I like this idea of Rails being made of Mega Man characters. Either that or Pokemon. But I won't be too picky. This is awesome, thanks!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/maxwell_dev/the-rails-model-introduction-i-wish-i-had-5h2d
CC-MAIN-2020-34
refinedweb
3,804
64.71
. Installing To install, run: pub global activate grinder Getting Started Your grinder build file should reside at tool/grind.dart. You can use grinder to create a simple, starting build script. To do this, run: pub global run grinder:init This will create a starting script in tool/grind.dart. In general, your build script will look something like this: import 'package:grinder/grinder.dart'; main(args) => grind(args); @Task('Initialize stuff.') init() { log("Initializing stuff..."); } @Task('Compile stuff.') @Depends(init) compile() { log("Compiling stuff..."); } @DefaultTask('Deploy stuff.') @Depends(compile) deploy() { log("Deploying stuff..."); } Tasks to run are specified on the command line. If a task has dependencies, those dependent tasks are run before the specified task. Specifying no tasks on the command-line will run the default task if one is configured. Command-line usage usage: grind <options> target1 target2 ... valid options: -h, --help show targets but don't build -d, --deps display the dependencies of targets or: pub global run grind <args> will run the tool/grind.dart script with the supplied arguments. API documentation Documentation is available here. Disclaimer This is not an official Google product. Libraries - grinder A task based, dependency aware build system. - grinder.files General file system routines, useful in the context of running builds. This includes the FileSet class, which is used for reasoning about sets of files. - grinder.tools Commonly used tools for build scripts, including for tasks like running the pubcommands.
https://www.dartdocs.org/documentation/grinder/0.7.0-dev.2/index.html
CC-MAIN-2017-09
refinedweb
240
60.51
Are you sure? This action might not be possible to undo. Are you sure you want to continue? version: 29 April 2004 Copyright 2004 Greg L 1 of 138 1.8 Acronyms and Terms...................................9 2 Storage.................................................11 2.1 Scalars.............................................11 2.1.1 Scalar Strings..................................12 2.1.1.1 String Literals.............................12 2.1.1.2 Single quotes versus Double quotes..........12.........................................14 2.1.2 Scalar Numbers..................................3.1 Stringify...................................18 2.1.3.1.1 sprintf.................................19 2.1.3.2 Numify......................................19 2.1.3.2.1 oct.....................................19 2.1.3.2.2 hex.....................................20 2.1.4 Undefined and Uninitialized Scalars.............20 2.1.5 Booleans........................................21 2.1.5.1 FALSE.......................................22 2.1.5.2 TRUE........................................22 2.1.5.3 Comparators.................................23 2.1.5.4 Logical Operators...........................23 2.1.5.4.1 Default Values..........................24 2.1.5.4.2 Flow Control............................25 2.1.5.4.3 Precedence..............................25 2.1.5.4.4 Assignment Precedence...................25 2 of 138 2.1.5.4.5 Flow Control Precedence.................25 2.1.6 References......................................26 2.1.7 Filehandles.....................................27 2.1.8 Scalar Review...................................27 2.2 Arrays..............................................27 2.2.1 scalar (@array) ................................29 2.2.2 push(@array, LIST)..............................29 2.2.3 pop(@array).....................................30 2.2.4 shift(@array)...................................30 2.2.5 unshift( @array, LIST)..........................30 2.2.6 foreach (@array)................................31 2.2.7 sort(@array)....................................32 2.2.8 reverse(@array).................................33 2.2.9 splice(@array)..................................33 2.2.10 Undefined and Uninitialized Arrays.............33 2.3 Hashes..............................................34 2.3.1 exists ( $hash{$key} )..........................35 2.3.2 delete ( $hash{key} )...........................36 2.3.3 keys( %hash )...................................37 2.3.4 values( %hash ).................................38 2.3.5 each( %hash )...................................38 2.4 List Context........................................43 2.5 References..........................................45 2.5.1 Named Referents.................................46 2.5.2 References to Named Referents...................46 2.5.3 Dereferencing.................................. Control Flow............................................53 3.1 Labels..............................................54 3.2 last LABEL;.........................................55 3.3 next LABEL;.........................................55 3.4 redo LABEL;.........................................55 4 Packages and Namespaces and Lexical Scoping ............55 4.1 Package Declaration.................................55 4.2 Declaring Package Variables With our................56 4.3 Package Variables inside a Lexical Scope............57 4.4 Lexical Scope.......................................58 4.5 Lexical Variables...................................58 4.6 Garbage Collection..................................60 4.6.1 Reference Count Garbage Collection..............60 4.6.2 Garbage Collection and Subroutines..............60 4.7 Package Variables Revisited.........................61 4.8 Calling local() on Package Variables................62 5 Subroutines.............................................64 5.1 Subroutine Sigil....................................64 5.2 Named Subroutines...................................64 5.3 Anonymous Subroutines...............................65 5.4 Data::Dumper and subroutines........................65 3 of 138 ........................................92 15...............1 Modules............................5 Inheritance.........71 6 Compiling and Interpreting..............................................88 13.....................3 Plain Old Documentation (POD) and perldoc............................76 9.13 Using wantarray to Create Context Sensitive Subroutines.........................4 INVOCANT->can(METHODNAME)..............5 Passing Arguments to/from a Subroutine...................................................68 5...................9 Subroutine Return Value.....75 9.68 5..79 11.72 8 The use Statement.......97 17.......95 15..............................92 15 CPAN........................74 9.96 17................8 Implied Arguments.......1 Class....................................................................66 5.......................................2 close.........2 use Module.......75 9...........................90 14 Object Oriented Review.66 5..........81 11...................................73 9 The use Statement............7 Dereferencing Code References.74 9.....................85 13 Object Oriented Perl...................99 18 File Input and Output.............................................. The Perl Module.................11 Using the caller() Function in Subroutines...4 Creating Modules for CPAN with h2xs............83 11..........................93 15.................91 14...........6 The use Execution Timeline.12 The caller() function and $wantarray..........92 14..............................................3 The PERL5LIB and PERLLIB Environment Variables........6 Overriding Methods and SUPER..............70 5......82 11........10 Returning False......................5........................................1 CPAN.........102 18.................85 13................................................93 15................4 The require Statement.............91 14.............................................................4 Methods..... Formally.102 18.........87 13...............................1 @ARGV..................5 MODULENAME -> import (LISTOFARGS)..............95 16 The Next Level...90 14..............67 5.5 Interesting Invocants...2 CPAN...................................69 5...............1 open............................................................... The Web Site......75 9.......................................4 Object Destruction...........6 Accessing Arguments inside Subroutines via @_...........87 13.................3 bless / constructors............77 11 Method Calls...... Perl Modules............................3 SUPER.........................................83 11........2 The use lib Statement.........71 7 Code Reuse..................102 18.........2 Polymorphism.......77 10 bless().3 INVOCANT->isa(BASEPACKAGE).................................................1 The @INC Array.....90 14.........................................84 12 Procedural Perl.....3 read....................91 14..................................1 Inheritance.....2 Getopt::Declare....95 17 Command Line Arguments....2 use base..............67 5.............102 4 of 138 ........................... ..16 Common Patterns................1 Global Modifiers...........11.......................................................1 The system() function.4 write...121 20...................4 Metacharacters.121 20............................7 Shortcut Character Classes..120 20........................115 20.......122 22 Perl.....15 The qr{} function............104 18..........2 The m And s Modifiers.......................109 20...........114 20...........................5 File Tests..113 20................121 20.........18..................................109 20........ GUI.....................118 20.......................17 Regexp::Common...............7 File Tree Searching.................11 Modifiers..9 Thrifty (Minimal) Quantifiers............103 18........2 The Backtick Operator...................................3 The x Modifier.3 Operating System Commands in a GUI......121 20..............10 Position Assertions / Position Anchors....14 Modifiers for tr{}{} Operator.........1 The \b Anchor..........................................106 20 Regular Expressions..5 Capturing and Clustering Parenthesis.....................121 20.116 20........2 Wildcard Example........117 20........110 20..1 Metacharacters Within Character Classes...................125 23 GNU Free Documentation License....106 20..13 Modifiers for s{}{} Operator.................................12 Modifiers For m{} Operator.............................................1 Variable Interpolation..............11...........2 The \G Anchor......... and Tk.............................127 5 of 138 ...................6 Character Classes.10............116 20.......111 20.....122 21 Parsing with Parse::RecDescent................118 20.........6.................................................108 20.....105 19.....................113 20.114 20.......104 18..3 Defining a Pattern....11......................113 20...10....8 Greedy (Maximal) Quantifiers...........................104 19 Operating System Commands............106 19..............105 19.................6 File Globbing..................... 1 The Impatient Introduction to Perl This document is for people who either want to learn perl or are already programming in perl and just do not have the patience to scrounge for information to learn and use perl. so he invented Perl. This document should also find use as a handy desk reference for some of the more common perl related questions. This sentence is part of a text section. which represents code to type into a script. And he did not like any of the scripting languages that were around at the time.2 Basic Formatting for this Document This document is formatted into text sections.8. 1. and is executed via a shell command. If the code section covers multiple files. the code is contained in one file. ###filename:MyFile.0 and is a highly recommended upgrade. and probably will not be available for some time. This is a code section. You will need to use a TEXT EDITOR.pl This code will be placed in a file called myscript. each file will be labeled.pm This code will be placed in a file called MyFile. and shell sections. Larry Wall was working as a sys-admin and found that he needed to do a number of common. not a WORD PROCESSOR to create these files.pl The first line of myscript. They also use a monospaced font. Perl 6 is on the drawing board as a fundamental rewrite of the language.pm #!/usr/local/bin/perl ###filename:myscript. code sections. Version 1 was released circa 1987.pl will be the line with #!/usr/local/bin/perl 6 of 138 .1 The history of perl in 100 words or less In the mid 1980s. It is not available yet. A few changes have occurred between then and now. The current version of Perl has exceeded 5. Text sections contain descriptive text. Generally. yet oddball functions over and over again. Code sections are indented. Text sections will extend to the far left margin and will use a non-monospaced font. 1. 1.org CPAN is an acronym for Comprehensive Perl Archive Network. as was as a TON of perl modules for your use. shell sections differ from code sections in that shell sections start with a '>' character which represents a shell prompt.cpan. the code is important. In this example. The name of the file is not important. immediately followed by the output from running the script. Even if you are not a beginner. get your sys-admin to install perl for you. In simple examples.4 Your First Perl Script. As an example. shell sections show commands to type on the command line. the code is shown in a code section.> > > > > > > > > > > > > shell sections are indented like code sections shell sections also use monospaced fonts. If you are a beginner. if any exists. > Hello World THIS DOCUMENT REFERS TO (LI/U)NIX PERL ONLY. 1. print "Hello World\n". the code for a simple "Hello World" script is shown here. EVER Find out where perl is installed: > which perl /usr/local/bin/perl 7 of 138 . It can be typed into a file of any name. get your sys-admin to install perl for you. Much of this will translate to Mac Perl and Windows Perl.004. and the output is important. The CPAN site contains the latest perl for download and installation. so they are they only things shown.3 Do You Have Perl Installed To find out if you have perl installed and its version: > perl -v You should have at least version 5. shell sections also show the output of a script. The command to run the script is dropped to save space. If you have an older version or if you have no perl installed at all. you can download it for free from. but the exact translation will be left as an exercise to the reader. The command to execute the script is not important either. Anything from a # character to the end of the line is a comment.pl extension is simply a standard accepted extension for perl scripts." is not in your PATH variable. 1.5 Default Script Header All the code examples are assumed to have the following script header. use strict.pl Hello World If ". You can save yourself a little typing if you make the file executable: > chmod +x hello.pl And then run the script directly. > hello. print "Hello World \n". 8 of 138 . use Data::Dumper. Type in the following: #!/usr/bin/env perl use warnings.) Run the script: > perl hello. use strict. unless otherwise stated: #!/usr/local/bin/perl use warnings.pl using your favorite text editor. EVERY perl script should have this line: use warnings. # comment use Data::Dumper. use Data::Dumper.pl HOORAY! Now go update your resume. use strict. (The #! on the first line is sometimes pronounced "shebang") (The .Create a file called hello. you will have to run the script by typing: > ./hello.pl Hello World This calls perl and passes it the name of the script to execute. use strict.org 1. you can control which version of perl they will run by changing your PATH variable without having to change your script. "Pearl" shortened to "Perl" to gain status as a 4-letter word. I will slap it on the front cover and give this document a picture name as well. It is sometimes referred to as the "Camel Book" by way of the camel drawing on its cover. Anyways. and each one has a different animal on its cover. > > > > perl -h perldoc perldoc -h perldoc perldoc FAQs on CPAN:. Highly recommended book to have handy at all times. unless its the "Dragon Book". has printed enough computer books to choke a. If you need to have different versions of perl installed on your system.com Still more free documentation on the web: Cheap Reference Material "Programming Perl" by Larry Wall. when I get a graphic that I like. 9 of 138 .org for more. as well as Petty Ecclectic Rubbish Lister. Well. O'Reilly. 1. See. and Jon Orwant. camel.org/cpan-faq. use Data::Dumper. well. #!/usr/bin/env perl use warnings.perldoc. The name was invented first.cpan.cpan. It uses your PATH environment variable to determine which perl executable to run.6 Free Reference Material You can get quick help from the standard perl installation. and Ullman.html Mailing Lists on CPAN: More free documentation on the web:. it is probably an O'Reilly book.perl. because that refers to a book called "Compilers" by Aho. Sethi.Optionally. CPAN: Comprehensive Perl Archive Network. Therefore if you hear reference to some animal book. Now considered an acronym for Practical Extraction and Report Language.8 Acronyms and Terms PERL: Originally.cpan. 1. Tom Christiansen. The acronyms followed. The publisher. you may try the header shown below. This has nothing to do with perl either. autovivification wins. "vivify" meaning "alive". Well. Generally applies to perl variables that can grant themselves into being without an explicit declaration from the programmer. especially if the programmer wishes to hide the fact that he did not understand his code well enough to come up with better names. "Autovivify" is a verb. the standard mantra for computer inflexibility was this: "I really hate this darn machine.DWIM: Do What I Mean. a Haiku: Do What I Mean and Autovivification sometimes unwanted TMTOWTDI: There is More Than One Way To Do It. This has nothing to do with perl. Part of perl's DWIM-ness. alright. and perl is often awash in variables named "foo" and "bar"." DWIM-iness is an attempt to embed perl with telepathic powers such that it can understand what you wanted to write in your code even though you forgot to actually type it. it gives you all the tools and lets you choose. except that "foo" is a common variable name used in perl. but only what I tell it. If you use a $foo variable in your code. and for some reason. (They had to write perl in C. Sometimes. Once upon a time. DWIM is just a way of saying the language was designed by some really lazy programmers so that you could be even lazier than they were. And now. To bring oneself to life. you deserve to maintain it. An acronym for Fouled Up Beyond All Recognition and similar interpretations. This allows a programmer to select the tool that will let him get his job done. except that fubar somehow got mangled into foobar. Later became known as a UFO. when "do what I mean" meets autovivification in perl. Fubar: Another WWII phrase used to indicate that a mission had gone seriously awry or that a piece of equipment was inoperative. 10 of 138 . An acknowledgement that any programming problem has more than one solution. Foo Fighters: A phrase used around the time of WWII by radar operators to describe a signal that could not be explained. The noun form is "autovivification". It never does what I want. so they could not be TOO lazy. autovivification is not what you meant your code to do. it gives a perl newbie just enough rope to hang himself. Rather than have perl decide which solution is best. Sometimes. I wish that they would sell it.) AUTOVIVIFY: "auto" meaning "self". In most common situations. $name = 'John Doe'. A "$" is a stylized "S". Numbers (integers and floats). Autovivify : to bring oneself to life. The most basic storage type is a Scalar. Perl is smart enough to know which type you are putting into a scalar and handle it. in certain situations. # therefore $circumference is zero. use strict. my my my my $pi = 3. Autovivified to zero. Arrays and Hashes use Scalars to build more complex data types.2 Storage Perl has three basic storage types: Scalars. Without use warnings. my $diameter = 42. $pie doesn't exist. initialize it to zero.1415. autovivification can be an unholy monster. Scalars can store Strings. References. However.1 Scalars Scalars are preceded with a dollar sign sigil. $ref_to_name = \$name Without "use strict. autovivication is handy. There is no reason that warnings and strictness should not be turned on in your scripts. 2. # oops. my $circumference = $pie * $diameter. Arrays. and assume that is what you meant to do. using a variable causes perl to create one and initialize it to "" or 0." and without declaring a variable with a "my". and Filehandles. 11 of 138 . This is called autovivication. Hashes. $initial = 'g'. perl will autovivify a new variable called "pie". > first is 'John' > last is 'Doe' 2. such as a hash lookup cannot be put in double quoted strings and get interpolated properly. print hello. Error: Unquoted string "hello" may clash with reserved word You can use single quotes or double quotes to set off a string literal: my $name = 'mud'. print 'hello $name'. Complex variables. my $greeting = "hello. my $name = 'mud'.1. 2. print $greeting. print "hello $name \n".2 Single quotes versus Double quotes Single quoted strings are a "what you see is what you get" kind of thing. perl just handles it for you automatically.1.1 String Literals String literals must be in single or double quotes or you will get an error. $name\n". my ($first. my $name = 'mud'.2.1 Scalar Strings Scalars can store strings. print "first is '$first'\n".1. You do not have to declare the length of the string. 12 of 138 .1. > hello. > hello $name Double quotes means that you get SOME variable interpolation during string evaluation. mud You can also create a list of string literals using the qw() function.$last)=qw( John Doe ). print "last is '$last'\n".1. > hello mud Note: a double-quoted "\n" is a new-line character. chomp($string)..1. > string is 'the rain beyond spain' . rather than using a constant literal in the example above. 2.1. 9.4 concatenation String concatenation uses the period character ". my $len = length($line). 2). You can quickly get a chunk of LENGTH characters starting at OFFSET from the beginning or end of the string (negative offsets go from the end). If there are no newlines.1. replacing the chunk as well.2..1. my $chunk = substr('the rain in spain'.7 substr substr ( STRING_EXPRESSION. You need a string contained in a variable that can be modified. fold..1.1. 2) = 'beyond'. > chunk is 'in' . The substr function then returns the chunk. my $string = 'the rain in spain'.1. warn "string is '$string' \n" > string is 'hello world' . # $len is 80 # $line is eighty hypens 13 of 138 . and mutilate strings using substr(). My $string = "hello world\n". The substr function can also be assigned to. 2. warn "chunk is '$chunk'". Spin.1. substr($string. OFFSET. LENGTH). chomp leaves the string alone. The return value of chomp is what was chomped (seldom used)." my $fullname = 'mud' .1..3 chomp You may get rid of a newline character at the end of a string by chomp-ing the string.. The chomp function removes one new line from the end of the string even if there are multiple newlines at the end. 2.1. The substr function gives you fast access to get and modify chunks of a string. 9. 2.5 repetition Repeat a string with the "x" operator. "bath". my $line = '-' x 80.6 length Find out how many characters are in a string with length(). warn "string is '$string'".. 9 join join('SEPARATOR STRING'.8 split split(/PATTERN/. ... $tab_sep_data).).1. 2.$age) = split(/\t/. 'bananas'. 2. Use the split function to break a string expression into components when the components are separated by a common substring pattern.10 qw The qw() function takes a list of barewords and quotes them for you. my $tab_sep_data = "John\tDoe\tmale\t42".. qw(apples bananas peaches)).. my $string = join(" and ". warn "string is '$string'". Use join to stitch a list of strings into a single string. The /PATTERN/ in split() is a Regular Expression.. > string is 'apples and bananas and peaches'.$gender. which is complicated enough to get its own chapter. 'peaches'). For example. You can break a string into individual characters by calling split with an empty string pattern "". my ($first.1. tab separated data in a single string can be split into separate strings. STRING1. > string is 'apples and bananas and peaches'.2. warn "string is '$string'". STRING2. STRING_EXPRESSION.1. 'apples'.$last.1.1.1. 14 of 138 . my $string = join(" and ".LIMIT).. 2. and hexadecimal.6.9). $price). Truncating means that positive numbers always get smaller and negative numbers always get bigger.9) If you want to round a float to the nearest integer.9.1. which means you do NOT get rounding. you simply use it as a scalar. If you specify something that is obviously an integer. 01234567.4 # var2 is 5. as well as decimal. my $price = 9. you will need to write a bit of code. # dollars is 10 15 of 138 # var1 is 3.95. 27_000_000.1.1. 0b100101. my $dollars = int ($price).0f". Either way. One way to accomplish it is to use sprintf: my $price = 9. my $temperature = 98. Note that this truncates everything after the decimal point. it will use an integer. Binary numbers begin with "0b" hexadecimal numbers begin with "0x" Octal number begin with a "0" All other numeric literals are assumbed to be base 10.1.2.2.1.0. 0xfa94. including integer. # dollars is 9. and scientific notation. not 10! false advertising! my $y_pos = -5.2. octal.2 Numeric Functions 2.9 . my $y_int = int($y_pos).2. my my my my my $solar_temp_c $solar_temp_f $base_address $high_address $low_address = = = = = 1. # y_int is -5 (-5 is "bigger" than -5. # # # # # centigrade fahrenheit octal hexadecimal binary # scalar => integer # scalar => float 2. floating point. 2.2 Scalar Numbers Perl generally uses floats internally to store numbers.5e7.3 abs Use abs to get the absolute value of a number.4).4 int Use "int" to convert a floating point number to an integer.1 Numeric Literals Perl allows several different formats for numeric literals. my $dollars = sprintf("%. my $var2 = abs(5.95. my $var1 = abs(-3. my $days_in_week = 7.2. 7 sqrt Use sqrt to take the square root of a positive number. Use the Math::Complex module on CPAN. If you have a value in DEGREES. my $seven_squared my $five_cubed my $three_to_the_fourth Use fractional powers to take a root of a number: my $square_root_of_49 my $cube_root_of_125 my $fourth_root_of_81 = 49 ** (1/2).1. multiply it by (pi/180) first.5 trigonometry (sin.0905 16 of 138 .2. = 81 ** (1/4). my $square_root_of_123 = sqrt(123). 2.cos.2. then use the Math::Trig module on CPAN. #125 = 3 ** 4. and tangent of a value given in RADIANS.785 rad $sine_deg = sin($angle).1. cosine.707 If you need inverse sine. and tan functions return the sine.2.14 / 180 ). = 125 ** (1/3). # 11. # 81 Standard perl cannot handle imaginary numbers. 2.2. # 45 deg $radians = $angle * ( 3.707 $sine_rad = sin($radians).6 exponentiation Use the "**" operator to raise a number to some power. or tangent. cos. # .tan) The sin. # 49 = 5 ** 3. cosine. # 7 # 5 # 3 = 7 ** 2. # 0. # 0.1. my my my my $angle = 45. 7183 ** (1/1.004.718281828). you should not need to call it at all.7183 ** 42 = 1.10).7e18 my $big_num = $value_of_e ** 42. my $big_num= exp(42).2.1 Note that inverse natural logs can be done with exponentiation.7e18) = 42 my $inv_exp = $value_of_e ** (1/$big_num).7183 ** 42 = 1. then use this subroutine: sub log_x_base_b {return log($_[0])/log($_[1]). log returns the number to which you would have to raise e to get the value passed in.e. # answer = 4. my $value_of_e = exp(1). 1/value) with exponentiation. If you have a version of perl greater than or equal to 5. If no value is passed in.7183 # 2. rand returns a number in the range ( 0 <= return < 1 ) The srand function will seed the PRNG with the value passed in. srand will seed the PRNG with something from the system that will give it decent randomness.1. Natural logarithms simply use the inverse of the value (i. to what power do we need to raise the # number 10 to get the value of 12345? my $answer = log_x_base_b(12345. If a value is passed in. If you want another base.8 natural logarithms(exp. # 2. my $inv_exp = log($big_num). srand) The rand function is a pseudorandom number generator (PRNG).9 random numbers (rand. rand returns a number that satisfies ( 0 <= return <=input ) If no value is passed in. 2.log) The exp function returns e to the power of the value given. To get e. you just need to know the value of the magic number e (~ 2. call exp(1). # inv_exp = 2. because perl will call srand at startup.e. # inv_exp = 42 17 of 138 .2.} # want the log base 10 of 12345 # i. The exp function is straightforward exponentiation: # big_num = 2. which is to say.7e18 The log function returns the inverse exp() function.1. You can pass in a fixed value to guarantee the values returned by rand will always follow the same sequence (and therefore are predictable). You should only need to seed the PRNG once.2. 3' 18 of 138 . warn "mass is '$mass'\n".3. If you do not want the default format. Perl will attempt to convert the numbers into the appropriate string representation..3' . Perl will attempt to apply Do What I Mean to your code and just Do The Right Thing.2. my $mass = 7. warn "volume is '$volume'\n".1. > volumn is '4' .1.3 # '7. # 7.= ''. > mass is '7.. Perl is not one of these languages.3. the code did not have to explicitely convert these numbers to string format before printing them out. Perl will automatically convert a number (integer or floating point) to a string format before printing it out. my $volume = 4. my $string_mass = $mass . Even though $mass is stored internally as a floating point number and $volume is stored internally as an integer.. simply concatenate a null string onto the end of the value. use sprintf.3.1 Stringify Stringify: Converting something other than a string to a string form.3 Converting Between Strings and Numbers Many languages require the programmer to explicitely convert numbers to strings before printing them out and to convert strings to numbers before performing arithemetic on them. There are two basic conversions that can occur: stringification and numification.. 2. my $mass = 7. If you want to force stringification. 1 oct The oct function can take a string that fits the octal.1.$pi). or binary format and convert it to an integer. my $user_input = '19. If the string is NOT in base ten format.3.2. For example. You can force numification of a value by adding integer zero to it. binary formatted strings start with "0b" hexadecimal formatted strings start with "0x" # '19.2.1.1.95 "%lx" "%lo" "%lb" "%ld" "%f" "%e" The letter 'l' (L) indicates the input is an integer.95' # 19. my $price = $user_input+0.3.2 f => => => => => format fill leading spaces with zero total length. Sometimes you have string information that actually represents a number. sprintf ( FORMAT_STRING. then use oct() or hex() 2. or decimal formated string.1415. a user might enter the string "19.. LIST_OF_VALUES ). octal.2 Numify Numify: Converting something other than a number to a numeric form. including decimal point put two places after the decimal point floating point notation To convert a number to a hexadecimal.1.. warn "str is '$str'". binary.95" which must be converted to a float before perl can perform any arithemetic on it. possibly a Long integer. For example: my $pi = 3. hexadecimal. 19 of 138 . Decoding the above format string: % 0 6 . > str is '003.1 sprintf Use sprintf to control exactly how perl will convert a number into string format. use the following FORMAT_STRINGS: hexadecimal octal binary decimal integer decimal float scientific 2.3.2f".95'.14' . my $str = sprintf("%06. binary. 2.4 Undefined and Uninitialized Scalars All the examples above initialized the scalars to some known value before using them. and it does not require a "0x" prefix. The hex() function is like oct() except that hex() only handles hex base strings. Since perl automatically performs this conversion no matter what.1. oct will assume the string is octal.2 hex The hex() function takes a string in hex format and converts it to integer. This example uses regular expressions and a tertiary operator. To handle a string that could contain octal.2. Use the defined() function to test whether a scalar is defined or not.3.1. call oct on it. but a warning is emitted. in which case. there is no string or arithematic operation that will tell you if the scalar is undefined or not. Note: even though the string might not start with a zero (as required by octal literals). OR decimal strings. perl will stringify or numify it based on how you are using the variable. the conversion still takes place. you could assume that octal strings must start with "0". else assume its decimal. the variable is undefined. this conversion is silent. 20 of 138 . An undefined scalar stringifies to an empty string: "" An undefined scalar numifies to zero: 0 Without warnings or strict turned on.all other numbers are assumed to be octal strings. Which means calling oct() on a decimal number would be a bad thing. 2. If the scalar is defined. my $num = ($str=~m{^0}) ? oct($str) : $str + 0. hexadecimal. If you use a scalar that is undefined. if the string starts with zero. the function returns a boolean "true" (1) If the scalar is NOT defined. the function returns a boolean "false" (""). You can declare a variable but not initialize it. Then. which are explained later. With warnings/strict on. Instead. Any variable that is not a SCALAR is first evaluated in scalar context. An array with one undef value has a scalar() value of 1 and is therefore evaluated as TRUE. # undef as if never initialized print "test 3 :". # undef print "test 1 :". This will be exactly as if you declared the variable with no initial value. # defined print "test 2 :". if(defined($var)) {print "defined\n". The scalar context of an ARRAY is its size. if(defined($var)) {print "defined\n". any other string TRUE any other number is TRUE TRUE Note that these are SCALARS.If you have a scalar with a defined value in it.} else {print "undefined\n". if(defined($var)) {print "defined\n".} $var = undef. perl interprets scalar strings and numbers as "true" or "false" based on some rules: 1) or 2) 3) 4) Strings "" and "0" stringification is Number 0 is FALSE.} else {print "undefined\n".} $var = 42.5 Booleans Perl does not have a boolean "type" per se. and then treated as a string or number by the above rules. 21 of 138 .} > test 1 :undefined > test 2 :defined > test 3 :undefined 2. assign undef to it.} else {print "undefined\n". my $var.1. all references are undef is FALSE are FALSE. and you want to return it to its uninitialized state. string string string float integer string '0. which means the following scalars are considered TRUE.0 and get some seriously hard to find bugs.0" instead of a float 0.0' is TRUE.1 FALSE The following scalars are interpreted as FALSE: integer float string string undef 2. which is FALSE. even though you might not have expected them to be false.1415 11 'yowser' # # # # # # true true true true true true 0 0. # FALSE This is sufficiently troublesome to type for such a common thing that an empty return statement within a subroutine will do the same thing: return.5. make sure you explicitely NUMIFY it before testing its BOOLEANNESS.5.0 '0' '' # # # # # false false false false false #FALSE If you are doing a lot of work with numbers on a variable.1.1. you may wish to force numification on that variable ($var+0) before it gets boolean tested.0' '00' 'false' 3.0'+0) will get numified to 0.A subroutine returns a scalar or a list depending on the context in which it is called. If you are processing a number as a string and want to evaluate it as a BOOLEAN. 22 of 138 .2 TRUE ALL other values are interpreted as TRUE. but ('0. use this: return wantarray() ? () : 0. Built in Perl functions that return a boolean will return an integer one (1) for TRUE and an empty string ("") for FALSE. just in case you end up with a string "0. Note that the string '0. 2. To explicitely return FALSE in a subroutine. 4 Logical Operators Perl has two sets of operators to perform logical AND. This will occur silently. you will get their alphabetical string comparison. if warnings/strict is not on.1. perl will attempt to numify the strings and then compare them numerically. Number 9 is less than (<=) number 100. it will fail miserably and SILENTLY. NOT functions. 0. indicating the compared values are less than.3 Comparators Comparison operators return booleans. then you will get the wrong result when you compare the strings ("9" lt "100"). assigning the numification of "John" to integer zero. 2. Function equal to not equal to less than greater than less than or equal to greater than or equal to Comparison (<-1. The difference between the two is that one set has a higher precedence than the other set.5. String "9" is greater than (gt) string "100". OR. perl will emit no warning. Distinct comparison operators exist for comparing strings and for comparing numbers.1. Perl will stringify the numbers and then perform the compare. However.2. If you use a numeric operator to compare two strings.>1) String eq ne lt gt le ge cmp Numeric == != < > <= >= <=> Note that if you use a string compare to compare two numbers. or greater than. specifically an integer 1 for true and a null string "" for false. Comparing "John" <= "Jacob" will cause perl to convert "John" into a number and fail miserably. The "Comparison" operator ("<=>" and "cmp") return a -1. And if you wanted the numbers compared numerically but used string comparison. 23 of 138 . equal to. The numberic comparison operator '<=>' is sometimes called the "spaceship operator".5. or +1. ==0. and '!' operators. $right )=@_. '||'. But first.4. 'or'.0). and 'xor' operators. If the user calls the subroutine with missing arguments. $right ||= 2. 2.0 and 2. some examples of how to use them. 24 of 138 .0. $left ||= 1.5.0.0. is exactly the same as this: $left = $left || 1.1 Default Values This subroutine has two input parameters ($left and $right) with default values (1. This: $left ||= 1. function operator usage AND OR NOT && || ! return value $one && $two if ($one is false) $one else $two $one || $two if ($one is true) $one else $two ! $one if ($one is false) true else false The lower precedence logical operators are the 'and'.1.0. so it is useful to learn how precedence affects their behavior. # deal with $left and $right here. 'not'. sub mysub { my( $left. function operator usage AND OR NOT XOR and or not xor $one and $two $one or $two not $one $one xor $two return value if ($one is false) $one else $two if ($one is true) $one else $two if ($one is false) true else false if ( ($one true and $two false) or ($one false and $two true) ) then return true else false Both sets of operators are very common in perl code.The higher precedence logical operators are the '&&'. } The '||=' operator is a fancy shorthand. the undefined parameters will instead receive their default values. 2.5. The 'or' and 'and' operators have a precedence that is LOWER than an assignment. $filename) or die "cant open". but the end result is code that is actually quite readable.3 Precedence The reason we used '||' in the first example and 'or' in the second example is because the operators have different precedence. and we used the one with the precedence we needed. because they have lower precedence than functions and other statements that form the boolean inputs to the 'or' or 'and' operator. 2.5.1. which will ALWAYS assign $default to the first value and discard the second value. 2.1.1. # default is 0 The second example is equivalent to this: (my $default = 0) or 1. open (my $filehandle. use 'or' and 'and' operators. The '||' and '&&' have higher precedence than functions and may execute before the first function call.5. 25 of 138 . If open() fails in any way. and FALSE OR'ed with die () means that perl will evaluate the die() function to finish the logical evalutation.2 Flow Control The open() function here will attempt to open $filename for reading and attach $filehandle to it. because they have a higher precedence than (and are evaluated before) the assignement '='.5.4.4.4 Assignment Precedence When working with an assignment.4.1. It won't complete because execution will die.2. Right: my $default = 0 || 1. # default is 1 Wrong: my $default = 0 or 1. followed by any remaining 'and' and 'or' operators. it returns FALSE. use '||' and '&&'. meaning the assignment would occur first.5 Flow Control Precedence When using logical operators to perform flow control.4. > John . my $deref_name = $$ref_to_name. warn "age_ref is '$age_ref'". but it is probably better to get in the habit of using the right operator for the right job. you cannot manually alter the address of a perl reference.6 References A reference points to the variable to which it refers. Unlike C.. which will ALWAYS evaluate $fh as true. warn $deref_name. This tells you that $age_ref is a reference to a SCALAR (which we know is called $age). Create a reference by placing a "\" in front of the variable: my my my my $name = 'John'. the return value is discarded.E. The second example is equivalent to this: close ($fh || die "Error"). $age = 42. It also tells you the address of the variable to which we are referring is 0x812e6ec. NEVER die. Wrong: close $fh || die "Error: could not close". I.Right: close $fh or die "Error:could not close". It is kind of like a pointer in C. $name_ref = \$name.. my $name = 'John'. such as "SCALAR (0x83938949)" and have perl give you a reference to whatever is at that address. > age_ref is 'SCALAR(0x812e6ec)' . which says "the data I want is at this address". 26 of 138 . and the program continues on its merry way. You cannot referencify a string. Perl will stringify a reference so that you can print it and see what it is. You can only create a reference to variables that are visible in your source code. and close $fh. You can dereference a reference by putting an extra sigil (of the appropriate type) in front of the reference variable. It is always possible to override precedence with parentheses.. Perl is pretty loosy goosey about what it will let you do.. If close() fails. 2. but not even perl is so crazy as to give people complete access to the system memory. my $ref_to_name = \$name.1. $age_ref = \$age. you cannot give perl a string. float 0.References are interesting enough that they get their own section. There is some magic going on there that I have not explained. But I introduce them here so that I can introduce a really cool module that uses references: Data::Dumper. string "". File IO gets its own section.1. 2.txt".1. Given a scalar that is undefined (uninitialized). '>out. close($fh). and store the handle to that file in the scalar.0.7 Filehandles Scalars can store a filehandle. 2. 2. print $fh "hello world\n". string "0". print $fh "this is simple file writing\n". This does not seem very impressive with a reference to a scalar: my $name = 'John'. and FILEHANDLES. but I introduce it here to give a complete picture of what scalars can hold. Stringify: to convert something to a string format Numify: to convert something to a numeric format The following scalars are interpreted as boolean FALSE: (integer 0. but that is a quick intro to scalar filehandles. Printing to the filehandle actually outputs the string to the file. warn Dumper \$ref_to_name. But this will be absolutely essential when working with Arrays and Hashes. undef) All other scalar values are interpreted as boolean TRUE. REFERENCES.8 Scalar Review Scalars can store STRINGS. The scalar $fh in the example above holds the filehandle to "out. NUMBERS (floats and ints). my $ref_to_name = \$name. 27 of 138 . calling open() on that scalar and a string filename will tell perl to open the file specified by the string. The "@" is a stylized "a". Data::Dumper will take a reference to ANYTHING and print out the thing to which it refers in a human readable form.2 Arrays Arrays are preceded with an ampersand sigil. open(my $fh.txt'). > $VAR1 = \'John'. my @numbers = qw ( zero one two three ). Perl arrays are ONE-DIMENSIONAL ONLY. which is initialized to 1 Arrays can store ANYTHING that can be stored in a scalar my @junk_drawer = ( 'pliers'. 28 of 138 .4] are autovivified # and initialized to undef print Dumper \@months.An array stores a bunch of scalars that are accessed via an integer index. When you refer to an entire array.14.. my $string = $number[2]. $months[1]='January'. 9*11. > $VAR1 = [ > undef. '*'.1.. $months[5]='May'. # # # # # # index 0 is undefined $months[1] this is same as undef undef undef $months[5] If you want to see if you can blow your memory. > 'January'. # the array is filled with undefs # except the last entry. use the "@" sigil. $mem_hog[10000000000000000000000]=1. > ${\$VAR1->[0]}. (Do Not Panic. Perl autovivifies whatever space it needs. warn $string. '//'. 'yaba'. > ${\$VAR1->[0]}. 1. > two . my @months. The length of an array is not pre-declared. When you index into the array. > 'May' > ]. 'daba' ). # $months[0] and $months[2. try running this piece of code: my @mem_hog.) The first element of an array always starts at ZERO (0). the "@" character changes to a "$" my @numbers = qw ( zero one two three ). > ${\$VAR1->[0]}.1. 3.. my $quant = @phonetic. > 4 . you will get the same thing.. > $VAR1 = [ > 'milk'. > 'cheese' > ]. print Dumper \@groceries.. > 3 . use "scalar" my @phonetic = qw ( alpha bravo charlie delta ).1 scalar (@array) To get how many elements are in the array. LIST) Use push() to add elements onto the end of the array (the highest index).Negative indexes start from the end of the array and work backwards. my @phonetic = qw ( alpha bravo charlie ). > last is 'blue' .. my $quantity = scalar(@phonetic).2. but calling scalar() is much more clear.2 push(@array. 2.. This will increase the length of the array by the number of items added.2. > 'bacon'. my @groceries = qw ( milk bread ). > 'eggs'. 29 of 138 . push(@groceries. > 'bread'. qw ( eggs bacon cheese )).. warn $quant. warn $quantity. 2. This is explained later in the "list context" section.. my $last=$colors[-1]. my @colors = qw ( red green blue ). warn "last is '$last'". When you assign an entire array into a scalar variable. 2.2. at index zero). > $VAR1 = [ > 'alice'. > $VAR1 = [ > 'birch'. warn "popped = $last_name". my @names = qw ( alice bob charlie ). warn Dumper \@trees. > 'fum' > ]. All elements will be shifted DOWN one index. 2. > fee > $VAR1 = [ > 'fie'.. > 'pine'. warn Dumper \@curses.2.4 shift(@array) Use shift() to remove one element from the beginning/bottom of an array (i. > 'oak' > ]. 2. > popped = charlie . warn $start. my @trees = qw ( pine maple oak ). 'birch'). at index ZERO).5 unshift( @array. my $last_name = pop(@names).e. All the other elements in the array will be shifted up to make room. # # # # index 0 old index 0.e. my $start = shift(@curses). The array will be shorted by one. print Dumper \@names. LIST) use unshift() to add elements to the BEGINNING/BOTTOM of an array (i. The return value of pop() is the value popped off of the array. my @curses = qw ( fee fie foe fum ). This will length the array by the number of elements in LIST. > 'foe'. unshift(@trees.. > 'bob' > ]. This will shorten the array by one. The return value is the value removed from the array.2. > 'maple'. now 1 2 3 30 of 138 .3 pop(@array) Use pop() to get the last element off of the end of the array (the highest index). my @numbers = qw (zero one two three). The foreach structure supports last. > 9484. 142. which is still part of array. 83948 ). foreach my $num (@numbers) { shift(@numbers) if($num eq 'one'). Its formal definition is: LABEL foreach VAR (LIST) BLOCK This is a control flow structure that is covered in more detail in the "control flow" section. } > num is 'zero' > num is 'one' > num is 'three' # note: I deleted 'zero'. > 242. Changes to VAR propagate to changing the array.2. > $VAR1 = [ > 123. but I failed to # print out 'two'. } print Dumper \@integers. print "num is '$num'\n".2. next. } > > > > fruit fruit fruit fruit is is is is 'apples' 'oranges' 'lemons' 'pears' DO NOT ADD OR DELETE ELEMENTS TO AN ARRAY BEING PROCESSED IN A FOREACH LOOP. > 84048 > ]. foreach my $num (@integers) { $num+=100. 31 of 138 . 9384. # BAD!! VAR acts as an alias to the element of the array itself. my @integers = ( 23. Use a simple foreach loop to do something to each element in an array: my @fruits = qw ( apples oranges lemons pears ). and redo statements. foreach my $fruit (@fruits) { print "fruit is '$fruit'\n".6 foreach (@array) Use foreach to iterate through all the elements of a list. print Dumper \@sorted_array . > 'pears' > ]. > 'oranges'. which probably is not what you want. 76. > 200. $a and $b. > 76. my @sorted_array = sort {$a<=>$b} (@scores). 27. 200. 200. my @sorted_array = sort(@scores). > 150. > 'bananas'. >$VAR1 = [ > 'apples'. > 13.7 sort(@array) Use sort() to sort an array alphabetically. The return value is the sorted version of the array.2. my @scores = ( 1000. The array passed in is left untouched. my @sorted_array = sort(@fruit). my @fruit = qw ( pears apples bananas oranges ). > $VAR1 = [ > 1000. 13. 13. > 200. > 27. 76. > 27. 27. 32 of 138 . print Dumper \@sorted_array . # 1's # 1's # 1's The sort() function can also take a code block ( any piece of code between curly braces ) which defines how to perform the sort if given any two elements from the array. The code block uses two global variables.2. my @scores = ( 1000. Sorting a list of numbers will sort them alphabetically as well. 150 ). print Dumper \@sorted_array . > 76 > ]. 150 ). > 150. and defines how to compare the two entries. > 1000 > ]. This is how you would sort an array numerically. > $VAR1 = [ > 13. If you want to uninitialize an array that contains data. warn join(" ". > hello out there . 'out'). 2. # WRONG To clear an array to its original uninitialized state. splice(@words. This is equivalent to calling defined() on a scalar variable. my @array = (). OFFSET . # RIGHT 33 of 138 . my @numbers = reverse (1000.150).13. The first element becomes the last element. and leave you with a completely empty array. > 76. then the array is uninitialized. This would fill the array with one element at index zero with a value of undefined. > 13.10 Undefined and Uninitialized Arrays An array is initialized as having no entries.8 reverse(@array) The reverse() function takes a list and returns an array in reverse order. Therefore you can test to see if an array is initialized by calling scalar() on it.2. @words). > 200. > $VAR1 = [ > 150.200.76. print Dumper \@numbers . splice ( ARRAY . integer 0). my @array = undef. LIST ). then you do NOT want to assign it undef like you would a scalar. my @words = qw ( hello there ). Any elements from LIST will be inserted at OFFSET into ARRAY. The elements in ARRAY starting at OFFSET and going for LENGTH indexes will be removed from ARRAY. 0. This will clear out any entries. > 27.. LENGTH .e. If scalar() returns false (i.. 2. assign an empty list to it. The last element becomes the first element.2.9 splice(@array) Use splice() to add or remove elements into or out of any index range of an array.2.2. 1.27. > 1000 > ]. my %inventory.) There is no order to the elements in a hash. but you should not use a hash with an assumption about what order the data will come out. my %info = qw ( name John age 42 ). $inventory{apples}=42.. use the "%" sigil. Perl will extract them in pairs. warn $data. > 'pears' => 17 > }. If the key does not exist during an ASSIGNMENT. The "%" is a stylized "key/value" pair. (Do Not Panic. A hash stores a bunch of scalars that are accessed via a string index called a "key" Perl hashes are ONE-DIMENSIONAL ONLY. $inventory{pears}=17. there is. >$VAR1 = { > 'bananas' => 5..2. the key is created and given the assigned value. the "%" character changes to a "$" my %info = qw ( name John age 42 ). 34 of 138 . The keys of a hash are not pre-declared. > John . $inventory{bananas}=5.3 Hashes Hashes are preceded with a percent sign sigil. print Dumper \%inventory. my $data = $info{name}. > 'apples' => 42.) You can assign any even number of scalars to a hash. When you refer to an entire hash. The first item will be treated as the key. and the second item will be treated as the value. When you look up a key in the hash. (Well. my %stateinfo. 35 of 138 . which has the side effect of creating the key {Maine} in the hash. > $VAR1 = { > 'Florida' => { > 'Abbreviation' => 'FL' > }. since a key might exist but store a value of FALSE my %pets = ( cats=>2.3. } print Dumper \%stateinfo. > Use of uninitialized value in concatenation > peaches is '' at ./test. > $VAR1 = { > 'apples' => 42 > }. if (exists($stateinfo{Maine}->{StateBird})) { warn "it exists". my %inventory. 2. print Dumper \%inventory. my $peaches = $inventory{peaches}. $stateinfo{Florida}->{Abbreviation}='FL'.If the key does not exist during a FETCH. warn "peaches is '$peaches'". dogs=>1 ). but this is a "feature" specific to exists() that can lead to very subtle bugs. all the lower level keys are autovivified. we explicitely create the key "Florida". } Warning: during multi-key lookup. $inventory{apples}=42. References are covered later. but we only test for the existence of {Maine}->{StateBird}. and only the last key has exists() tested on it. unless(exists($pets{fish})) { print "No fish here\n".1 exists ( $hash{$key} ) Use exists() to see if a key exists in a hash. the key is NOT created. > 'Maine' => {} > }. Note in the following example.pl line 13. This only happens if you have a hash of hash references. You cannot simply test the value of a key. and undef is returned. cats=>2. > $VAR1 = { > 'cats' => undef.2 delete ( $hash{key} ) Use delete to delete a key/value pair from a hash. ). $pets{cats}=undef. assigning undef to it will keep the key in the hash and will only assign the value to undef. Once a key is created in a hash. delete($pets{fish}).3. } } print Dumper \%stateinfo. and build your way up to the final key lookup if you do not want to autovivify the lower level keys. $stateinfo{Florida}->{Abbreviation}='FL'. print Dumper \%pets. if (exists($stateinfo{Maine})) { if (exists($stateinfo{Maine}->{StateBird})) { warn "it exists". > 'dogs' => 1 > }. The only way to remove a key/value pair from a hash is with delete().You must test each level of key individually. my %stateinfo. 36 of 138 . my %pets = ( fish=>3. dogs=>1. > $VAR1 = { > 'Florida' => { > 'Abbreviation' => 'FL' > } > }. 2. foreach my $pet (keys(%pets)) { print "pet is '$pet'\n". } > pet is 'cats' > pet is 'dogs' > pet is 'fish' If the hash is very large.3.3 keys( %hash ) Use keys() to return a list of all the keys in a hash. 37 of 138 . dogs=>1. cats=>2. then you may wish to use the each() function described below. and should not be something your program depends upon. Note in the example below that the order of assignment is different from the order printed out.2. The order of the keys will be based on the internal hashing algorithm used. my %pets = ( fish=>3. ). 2.3.4 values( %hash ). 2.3.5 each( %hash )' 38 of 138(). 39 of 138; one_time; keys(%pets); one_time; one_time; one_time; one_time; one_time; one_time; > > > > > > > > pet='cats', pet='dogs', pet='cats', pet='dogs', pet='fish', end of hash pet='cats', pet='dogs', qty='2' qty='1' qty='2' qty='1' qty='3' qty='2' qty='1' # # # # # # # # # cats dogs reset the hash iterator cats dogs fish end of hash cats dogs 40 of 138 "than $cmp_pet\n". and the process repeats itself indefinitely. The inside loop calls each() and gets "dogs". The code then enters the inside loop. The inside loop continues. and returns "cats".$orig_qty)=each(%pets)) { while(my($cmp_pet. are are are are are are are are more less more less more less more less cats cats cats cats cats cats cats cats than than than than than than than than dogs fish dogs fish dogs fish dogs fish The outside loop calls each() and gets "cats". calls each() again.$cmp_qty)=each(%pets)) { if($orig_qty>$cmp_qty) { print "there are more $orig_pet " . which means calling each() on a hash in a loop that then calls each() on the same hash another loop will cause problems.There is only one iterator variable connected with each hash. cats=>2. The example below goes through the %pets hash and attempts to compare the quantity of different pets and print out their comparison. The inside loop calls each() one more time and gets an empty list."than $cmp_pet\n".. } } } > > > > > > > > > there there there there there there there there . The inside loop exits.. } else { print "there are less $orig_pet " . The outside loop calls each() which continues where the inside loop left off. while(my($orig_pet. namely at the end of the list. and gets "fish". ). 41 of 138 . dogs=>1. my %pets = ( fish=>3. my %pets = ( fish=>3. dogs=>1. } } } > > > > > > there there there there there there are are are are are are more less less less more more cats cats dogs dogs fish fish than than than than than than dogs fish fish cats cats dogs If you do not know the outer loop key. ).One solution for this each() limitation is shown below. either because its in someone else's code and they do not pass it to you.$orig_qty)=each(%pets)) { while(1) { my($cmp_pet. The inner loop continues to call each() until it gets the key that matches the outer loop key. This also fixes a problem in the above example in that we probably do not want to compare a key to itself. The inner loop will then not rely on the internal hash iterator value. 42 of 138 ."than $cmp_pet\n". then the only other solution is to call keys () on the hash for all inner loops."than $cmp_pet\n". if($orig_qty>$cmp_qty) { print "there are more $orig_pet " . The inner loop must skip the end of the hash (an undefined key) and continue the inner loop. while(my($orig_pet. last if($cmp_pet eq $orig_pet). } else { print "there are less $orig_pet " . next unless(defined($cmp_pet)). cats=>2. store the keys in an array.$cmp_qty)=each(%pets). and loop through the array of keys using foreach. or some similar problem. my @cart1=qw( milk bread butter). The person behind the @checkout_counter has no idea whose groceries are whose.4 List Context List context is a concept built into the grammar of perl. my @cart1 = qw( milk bread eggs). > 'eggs'. The initial values for the hash get converted into an ordered list of scalars ( 'fish'. In fact. In the above example the order of scalars is retained: milk. but there is no way to know. > 'bacon'. bread. dogs=>1 ). using the first scalar as a key and the following scalar as its value.2. Basically. @cart1 might have been empty. 2. However. > 'bread'. > $VAR1 = [ > 'milk'. pulled up to the @checkout_counter and unloaded their carts without putting one of those separator bars in between them. 'cats'. Sometimes. 3. List context affects how perl executes your source code. > 'butter'. my @checkout_counter = ( @cart1. @cart1 and @cart2. print Dumper \@checkout_counter. my @cart2=qw( eggs bacon juice ). Everything in list context gets reduced to an ordered series of scalars. We have used list context repeatedly to initialize arrays and hashes and it worked as we would intuitively expect: my %pets = ( fish=>3. butter is the order of scalars in @cart1 and the order of the scalars at the beginning of @checkout_counter. list context can be extremely handy. 43 of 138 . 1 ) These scalars are then used in list context to initialize the hash. and so on throughout the list. The original container that held the scalars is forgotten. 'dogs'. and all the contents of @checkout_counter could belong to @cart2. You cannot declare a "list context" in perl the way you might declare an @array or %hash. looking at just @checkout_counter. there is no way to know where the contents of @cart1 end and the contents of @cart2 begin. cats=>2. two people with grocery carts. @cart2 ). > 'juice' > ]. Here is an example. > 'chair' > ]. Scalars. and hashes are all affected by list context. we can simply treat the %encrypt hash as a list. and then store that reversed list into a %decrypt hash. @house is intended to contain a list of all the items in the house. There are times when list context on a hash does make sense. This could take too long. > 1. Instead. print Dumper \@house. dogs=>1 ). > 2. call the array reverse() function on it. The %encrypt hash contains a hash look up to encrypt plaintext into cyphertext.1 are disassociated from their keys. because there is no overlap between keys and values. my @refridgerator=qw(milk bread eggs). print Dumper \%decrypt. > 'turtle' => 'tank' > }. > 'dogs'.bomber=>'eagle'). cats=>2. Anytime you receive the word "eagle" you need to translate that to the word "bomber". which flips the list around from end to end. 44 of 138 . Anytime you want to use the word "bomber". However. you actually send the word "eagle".List context applies anytime data is passed around in perl.'chair'). > $VAR1 = { > 'eagle' => 'bomber'. > 'milk'. > 'bread'.2. looping until it found the value that matched the word received over the radio. the values 3. The @house variable is not very useful. because the %pets hash was reduced to scalars in list context. > 3. (two different words dont encrypt to the same word). > 'cats'. > 'fish'. my %decrypt=reverse(%encrypt) . In the example below. Using the %encrypt hash to perform decryption would require a loop that called each() on the %encrypt hash. The decryption is the opposite.%pets. > 'eggs'.@refridgerator. arrays. my %pets = ( fish=>3. my @house=('couch'. my %encrypt=(tank=>'turtle'. >$VAR1 = [ > 'couch'. milk=>1. you can take your license and "dereferencing" it to get yourself home. my %home= ( fish=>3. delete($ {$license_for_bob} {milk}). Your license is a "reference".5 References References are a thing that refer (point) to something else. $ {$license_for_alice} {dogs} += 5. But there can only be one home per address.cats=>2. The "something else" is called the "referent". ).2. And if you have forgotten where you live. array. > $VAR1 = { > 'eggs' => 12. To do this. Alice and Bob need to dereference their licenses and get into the original %home hash. Alice and Bob are roommates and their licenses are references to the same %home. It is possible that you have roommates.dogs=>1. > 'dogs' => 6.eggs=>12. my $license_for_bob = \%home. A good real-world example is a driver's license.bread=>2. > 'fish' => 3 > }. which would mean multiple references exist to point to the same home. print Dumper \%home. The "referent" is your home. This means that Alice could bring in a bunch of new pets and Bob could eat the bread out of the refridgerator even though Alice might have been the one to put it there. the thing being pointed to. > 'bread' => 2. hash) and putting a "\" in front of it. In perl. 45 of 138 . Your license "points" to where you live because it lists your home address. my $license_for_alice = \%home. > 'cats' => 2. You can create a reference by creating some data (scalar. Taking a reference and using it to access the referent is called "dereferencing". references are stored in scalars. > age is '44' 46 of 138 .cats=>2. Below. > age is '43' If there is no ambiguity in dereferencing. $$ref_to_age ++.2 References to Named Referents A reference points to the referent. To take a reference to a named referent. my $r_pets = \%pets. print "age is '$age'\n". or hash.1 Named Referents A referent is any original data structure: a scalar. my %pets=(fish=>3.5. # another birthday print "age is '$age'\n".3 Dereferencing To dereference. my $r_2_colors = \@colors.5. my %copy_of_pets = %{$r_pets}. 2. we declare some named referents: age.2. my $age = 42. the curly braces are not needed. This will give access to the entire original referent. place the reference in curly braces and prefix it with the sigil of the appropriate type. my $ref_to_age = \$age.5. 2.dogs=>1). array. colors. # happy birthday pop(@{$r_2_colors}). ${$ref_to_age}++. and pets. put a "\" in front of the named referent. my @colors = qw( red green blue ). dogs=>1). follow it by "->". > $VAR1 = [ 'red'. $r_colors = \@colors.dogs=>1). $r_age = \$age. my %pets=(fish=>3. ${$r_colors}[1] = 'yellow'. 'fish' => 3 }.4 Anonymous Referents Here are some referents named age. An anonymous referent has no name for the underlying data structure and can only be accessed through the reference. and then follow that by either "[index]" or "{key}". It is also possible in perl to create an ANONYMOUS REFERENT. @colors = qw( red green blue ). my $r_colors = \@colors. 2. my $r_pets = \%pets.It is also possible to dereference into an array or hash with a specific index or key. my @colors = qw( red green blue ). perl has a shorthand notation for indexing into an array or looking up a key in a hash using a reference. is exactly the same as this: $r_pets->{dogs} += 5. %pets=(fish=>3.5. ${$r_pets} {dogs} += 5. print Dumper \%pets. ${$r_colors}[1] = 'yellow'. 'dogs' => 6.cats=>2. colors.cats=>2. and pets. Because array and hash referents are so common. This: ${$r_pets} {dogs} += 5. $VAR1 = { 'cats' => 2. my my my my my my $age = 42. Each named referent has a reference to it as well. print Dumper \@colors. $r_colors->[1] = 'yellow'. $r_pets = \%pets. 'yellow'. 47 of 138 . 'blue' ]. Take the reference. and return a reference to that unnamed array. my $pets_ref = { fish=>3. > 'green'. 48 of 138 . > 'fish' => 3 > }.cats=>2. The square brackets will create the underlying array with no name. To create an anonymous hash referent. > $VAR1 = { > 'cats' => 2. put the contents of the array in square brackets.dogs=>1 }.To create an anonymous array referent. and return a reference to that unnamed hash. print Dumper $colors_ref. > $VAR1 = [ > 'red'. 'blue' ]. put the contents of the hash in square brackets. > 'blue' > ]. > 'dogs' => 1. The square brackets will create the underlying hash with no name. 'green'. print Dumper $pets_ref. my $colors_ref = [ 'red'. my $house={ pets=>\%pets. but now using references. You must use $colors_ref to access the data in the array. 49 of 138 . my %pets = ( fish=>3. one a hash reference and the other an array reference. Using references is one way to avoid the problems associated with list context. Dereferencing a complex data structure can be done with the arrow notation or by enclosing the reference in curly braces and prefixing it with the appropriate sigil. # Alice added more canines $house->{pets}->{dogs}+=5. 'bread'. => 2. Likewise. => 3 [ 'milk'. You must use $pets_ref to access the data in the hash. => 1. > $VAR1 = { > 'pets' => { > 'cats' > 'dogs' > 'fish' > }. print Dumper $house. 2. my @refridgerator=qw(milk bread eggs). 'eggs' ] The $house variable is a reference to an anonymous hash. Here is another look at the house example. complex data structures are now possible. cats=>2. refridgerator=>\@refridgerator }. but that hash has no name to directly access its data. These keys are associated with values that are references as well. > 'refridgerator' => > > > > > }.5 Complex Data Structures Arrays and hashes can only store scalar values. # Bob drank all the milk shift(@{$house->{refridgerator}}). dogs=>1 ). $pets_ref is a reference to a hash. "pets" and "refridgerator".Note that $colors_ref is a reference to an array. But because scalars can hold references.5. which contains two keys. but that array has no name to directly access its data. > $VAR1 = [ > undef. my $val = $scal->[2]->{somekey}->[1]->{otherkey}->[1]. > ${\$VAR1->[0]}. my $scal. Perl will assume you know what you are doing with your structures. > { > 'somekey' => [ > ${\$VAR1->[0]}.2. 50 of 138 . > { > 'otherkey' => [] > } > ] > } > ]. print Dumper $scal.5. We then fetch from this undefined scalar. The autovivified referents are anonymous.5. Perl autovivifies everything under the assumption that that is what you wanted to do.1 Autovivification Perl autovivifies any structure needed when assigning or fetching from a reference. In the example below. we start out with an undefined scalar called $scal. as if it were a reference to an array of a hash of an array of a hash of an array. $i++) { for(my $j=0. depth=1' col=1. such as print it. > [ > 'row=0.2 Multidimensional Arrays Perl implements multidimensional arrays using one-dimensional arrays and references.6 Stringification of References Perl will stringify a reference if you try to do anything string-like with it. > 'row=0. > ].5. > reference is 'SCALAR(0x812e6ec)' 51 of 138 .5. depth=0'. my $referent = 42.$i<2. my $mda. col=0.5. > $VAR1 = [ > [ > [ > 'row=0. col=0. depth=1' col=1. > ].$k<2. col=$j. for(my $i=0. > 'row=0. my $reference = \$referent. warn "reference is '$reference'". > [ > [ > 'row=1. } } } print Dumper $mda.$j++) { for(my $k=0. depth=0'. depth=$k".$k++) { $mda->[$i]->[$j]->[$k] = "row=$i.2.$j<2. depth=0'. col=1. > [ > 'row=1. > ] > ]. col=1. depth=0'. depth=1' 2. col=0. > ] > ] > ]. depth=1' col=0. > 'row=1. > 'row=1. 2. it will evaluate true when treated as a boolean. warn "value not defined" unless(defined($value)). my $reference = 'SCALAR(0x812e6ec)'. no strict. > Can't use string ("SCALAR(0x812e6ec)") as > a SCALAR ref while "strict refs" in use Turning strict off only gives you undef. my $value = $$reference. what_is_it( [1. ref() returns false (an empty string). } what_is_it( \'hello' ). > > > > string string string string is is is is 'SCALAR' 'ARRAY' 'HASH' '' 52 of 138 . my $temp = \42. what_is_it( {cats=>2} ). my $reference = 'SCALAR(0x812e6ec)'. If the scalar is not a reference.2.7 The ref() function The ref() function takes a scalar and returns a string indicating what kind of referent the scalar is referencing. even if the value to which it points is false. what_is_it( 42 ).5.But perl will not allow you to create a string and attempt to turn it into a reference.3] ). warn "string is '$string'". my $string = ref($temp). my $string = ref($scalar). print "string is '$string'\n". > string is 'SCALAR' Here we call ref() on several types of variable: sub what_is_it { my ($scalar)=@_. warn "value is '$value'\n". > value not defined > Use of uninitialized value in concatenation Because a reference is always a string that looks something like "SCALAR(0x812e6ec)". my $value = $$reference. Note that this is like stringification of a reference except without the address being part of the string. print $greeting. 3 Control Flow Standard statements get executed in sequential order in perl. my $name = 'John Smith'. you get the scalar value. if( $price == 0 ) { print "Free Beer!\n". But if you call ref() on a nonreference. Also note that if you stringify a non-reference. you get an empty string. which is always false. Instead of SCALAR(0x812e6ec). its just SCALAR. } 53 of 138 . $name\n". my $greeting = "Hello. Control flow statements allow you to alter the order of execution as the program is running. TEST. example==> MY_NAME: # # BLOCK ==> zero or more statements contained # in curly braces { print "hi"..Perl supports the following control flow structures: # # LABEL is an optional name that identifies the # control flow structure. 54 of 138 . It is a bareword identifier # followed by a colon. TEST. A label is an identifier followed by a colon.... # see arrays and list context sections later in text LABEL foreach (LIST) BLOCK LABEL foreach VAR (LIST) BLOCK LABEL foreach VAR (LIST) BLOCK continue BLOCK 3. if (BOOL) BLOCK elsif (BOOL) BLOCK . } LABEL BLOCK LABEL BLOCK continue BLOCK # BOOL ==> boolean (see boolean section above) if (BOOL) BLOCK if (BOOL) BLOCK else BLOCK if (BOOL) BLOCK elsif (BOOL) BLOCK elsif (). BLOCK elsif (BOOL) BLOCK . else BLOCK LABEL while (BOOL) BLOCK LABEL while (BOOL) BLOCK continue BLOCK LABEL until (BOOL) BLOCK LABEL until (BOOL) BLOCK continue BLOCK # INIT. CONT ) BLOCK # LIST is a list of scalars..1 Labels Labels are always optional.... A label is used to give its associated control flow structure a name. else BLOCK unless unless unless unless (BOOL) (BOOL) (BOOL) (BOOL) BLOCK BLOCK else BLOCK BLOCK elsif (BOOL) BLOCK elsif ().. If the structure has a LABEL. 3. The next command skips the remaining BLOCK. or if no continue block exists. or redo. 55 of 138 .2 last LABEL. It does not execute any continue block if one exists. eval. execution starts the next iteration of the control construct if it is a loop construct.Inside a BLOCK of a control flow structure. redo LABEL. then the command will operate on the control structure given. if there is a continue block. The last command goes to the end of the entire control structure. subroutine.3 next LABEL. 3. It does not execute any continue block (even if it exists). 4 Packages and Namespaces and Lexical Scoping 4. This package declaration indicates that the rest of the enclosing block. The redo command skips the remaining BLOCK. If no label is given to next. If a label is given. After the continue block finishes. 3. last LABEL. Execution then resumes at the start of the control structure without evaluating the conditional again. or file belongs to the namespace given by NAMESPACE. redo. last. then the command will operate on the inner-most control structure.4 redo LABEL. you can call next LABEL. execution resumes there. you can call next. last.1 Package Declaration Perl has a package declaration statement that looks like this: package NAMESPACE. 0 or later for "our" declarations. followed by the variable name. perl assumes it is in the the most recently declared package namespace that was declared. use Data::Dumper. You can create variables in ANY namespace you want. there are two ways to create and use package variables: 1) Use the fully package qualified name everywhere in your code: # can use variable without declaring it with 'my' $some_package::answer=42. warn "The value is '$some_package::answer'\n". Anytime you declare a new package namespace. The difference between the two methods is that always using package qualified variable names means you do NOT have to declare the package you are in.The standard warnings. When you have strict-ness turned on. followed by a double colon. package SomeOtherPackage. @other_package::refridgerator." etc. 56 of 138 . %package_that::pets. You can access package variables with the appropriate sigil. use warnings. You can even declare variables in someone else's package namespace. All perl scripts start with an implied declaration of: package main. warn "name is '$name'". package this_package. $package_this::age. strictness. 4.2 Declaring Package Variables With our 2) Use "our" to declare the variable. This is called a "package QUALIFIED" variable where the package name is explicitely stated. and Data::Dumper are attached to the namespace in which they were turned on with "use warnings. use strict. You must have perl 5.6. If you use an UNQUALIFIED variable in your code. followed by the package name. without ever having to declare the namespace explicitely. you will want to "use" these again. Using "our" is the preferred method. our $name='John'. There is no restrictions in perl that prevent you from doing this. 57 of 138 . that package namespace declaration remains in effect until the end of the block. is 'oink' 4. the "our Heifers. } # END OF CODE BLOCK print "speak is '$speak'\n". as does all the variables that were declared in that namespace. print "Hogs::speak > Hogs::speak is '$Hogs::speak'\n". Once the package variable exists. > speak is 'oink' The Heifers namespace still exists. and you can refer to the variable just on its variable name. This "wearing off" is a function of the code block being a "lexical scope" and a package declaration only lasts to the end of the current lexical scope. We do not HAVE to use the "our" shortcut even if we used it to declare it. the "our" function was created. package Hogs. the fully package qualified name is NOT required. then you will need to declare the package namespace explicitely. once a package variable is declared with "our". we can access it any way we wish. our $speak = 'oink'. the package namespace reverts to the previous namespace. as example (2) above refers to the $name package variable. Declaring a variable with "our" will create the variable in the current namespace. Its just that outside the code block. We could also access the Hogs variables using a fully package qualified name.To encourage programmers to play nice with each other's namespaces. and we now have to use a fully package qualified name to get to the variables in Heifers package. If the namespace is other than "main"." declaration has worn off. our $speak = 'oink'.3 Package Variables inside a Lexical Scope When you declare a package inside a code block. { # START OF CODE BLOCK package Heifers. The "our" declaration is a shorthand for declaring a package variable. at which time. package Hogs. However. our $speak = 'moo'. Outside those curly braces. So "lexical scope" refers to anything that is visible or has an effect only withing a certain boundary of the source text or source code. 4. so it does not exist when we try to print it out after the block. the "package Heifers. no warnings. > Heifers::speak is 'moo' 4. so it will throw an exception and quit. things that have lexical limitations (such as a package declaration) are only "visible" inside that lexical space. 58 of 138 ." only exists inside the curly braces of the source code. > speak is '' The lexical variable "$speak" goes out of scope at the end of the code block (at the "}" character). { package Heifers. In the above examples. perl will know $speak does not exist when you attempt to print it. Within a lexical scope. the package declaration has gone out of scope. and to show how lexical variables differ from "our" variables. A lexical scope exists while execution takes place inside of a particular chunk of source code.5 Lexical Variables Lexical variables declared inside a lexical scope do not survive outside the lexical scope.4 Lexical Scope Lexical refers to words or text.The package variables declared inside the code block SURVIVE after the code block ends. as in telescope. Scope refers to vision. We had to turn warnings and strict off just to get it to compile because with warnings and strict on. our $speak = 'moo'. } warn "speak is '$speak'\n". { my $speak = 'moo'. } print "Heifers::speak is '$Heifers::speak'\n". no strict. which is a technical way of saying its "worn off". The easiest way to demonstrate lexical scoping is lexical variables. { my $some_lex = 'I am lex'. > some_lex is '' 3) Lexical variables are subject to "garbage collection" at the end of scope. or file. If nothing is using a lexical variable at the end of scope. $lex_ref). we create a new $lex_var each time through the loop. eval. They also keep your data more private than a package variable. > main::cnt is '' 2) Lexical variables are only directly accessible from the point where they are declared to the end of the nearest enclosing block. and are accessible from anyone's script. They generally keep you from stepping on someone else's toes. perl will remove it from its memory. package main. Every time a variable is declared with "my". my @cupboard.. } > > > > >. print "$lex_ref\n". no strict.Lexically scoped variables have three main features: 1) Lexical variables do not belong to any package namespace. it is created dynamically. 59 of 138 . subroutine. The location of the variable will change each time. 5) { my $lex_var = 'canned goods'. my $lex_ref = \$lex_var. and $lex_var is at a different address each time. never go out of scope. Package variables are permanent. never get garbage collected. warn "main::cnt is '$main::cnt'". Note in the example below. push(@cupboard. so you cannot prefix them with a package name: no warnings. } warn "some_lex is '$some_lex'". for (1 . during execution. my $cnt='I am just a lexical'. } 4. Perl will not garbage collect these variables even though they are completely inaccessible by the end of the code block. my $referring_var. It is rudimentary reference counting. 60 of 138 . If a lexical variable is a referent to another variable.6. warn "referring var refers to '$$referring_var'". perl will check to see if anyone is using that variable. This means that your program will never get smaller because of lexical variables going of of scope.$last)=(\$last. rather the freed up memory is used for possible declarations of new lexically scoped variables that could be declared later in the program. } warn "some_lex is '$some_lex'".1 Reference Count Garbage Collection Perl uses reference count based garbage collection. and it retained its value of "I am lex". so circular references will not get collected even if nothing points to the circle. The data in $some_lex was still accessible through referring_var. { my ($first. But since $referring_var is a reference to $some_lex.6. The example below shows two variables that refer to each other but nothing refers to the two variables.4. perl will delete that variable and free up memory. then $some_lex was never garbage collected. no strict. { my $ referring var refers to 'I am lex' When the lexical $some_lex went out of scope. $referring_var=\$some_lex. ($first. The freed up memory is not returned to the system. If a subroutine uses a lexical variable. Once the memory is allocated for perl.\$first).$last). then that variable will not be garbage collected as long as the subroutine exists.2 Garbage Collection and Subroutines Garbage collection does not rely strictly on references to a variable to determine if it should be garbage collected. print "cnt is '$cnt'\n". print "cnt is '$cnt'\n". dec. named subroutines are like package variables in that. Note that a reference to $cnt is never taken. so $cnt is not garbage collected. and they inherit all the possible problems associated with using global variables in your code. However.} } inc. In the example below. { my $cnt=0. two subroutines are declared in that same code block that use $cnt. 61 of 138 . > > > > > > cnt cnt cnt cnt cnt cnt is is is is is is '1' '2' '3' '2' '1' '2' Subroutine names are like names of package variables.} sub dec{$cnt--. 4. They are global.Subroutines that use a lexical variable declared outside of the subroutine declaration are called "CLOSURES". dec. once declared. inc. sub inc{$cnt++. however perl knows that $cnt is needed by the subroutines and therefore keeps it around. Therefore. which means they can be a convenient way for several different blocks of perl code to talk amongst themselves using an agreed upon global variable as their channel. the lexical variable. In the event you DO end up using a package variable in your code. Since $cnt goes out of scope. inc. they never go out of scope or get garbage collected.7 Package Variables Revisited Package variables are not evil. $cnt. the only things that can access it after the code block are the subroutines. they are just global variables. they do have some advantages. is declared inside a code block and would normally get garbage collected at the end of the block. inc. The subroutine gets placed in the current declared package namespace. } } Compile. } } sub Run { if ($Development::Verbose) { print "running\n". If it is false. > compiling > linking > running The three subroutines could be in different files. 62 of 138 .8 Calling local() on Package Variables When working with global variables. sub Compile { if ($Development::Verbose) { print "compiling\n".Imagine several subroutines across several files that all want to check a global variable: $Development::Verbose. and then set the global back to what it was. these subroutines print little or no information. package Development. in different package namespaces. Run. Link. set it to a new and temporary value. execute some foreign code that will access this global. there are times when you want to save the current value of the global variable. and they could all access the $Development::Verbose variable and act accordingly. } } sub Link { if ($Development::Verbose) { print "linking\n". 4. If this variable is true. these subroutines print detailed information. our $Verbose=1. calls the original Run routine. perl was given the local() function. Link. the original value for the variable is returned. $Development::Verbose=$temp. The RunSilent subroutine could be written like this: sub RunSilent { local($Development::Verbose)=0. So to deal with all the package variables. saves off the original value. } } sub RunSilent { my $temp = $Development::Verbose. sub Compile { if ($Development::Verbose) { print "compiling\n".Continuing the previous example. The local function takes a package variable. 63 of 138 . And at the end of the lexical scope in which local() was called. say we wish to create a RunSilent subroutine that stores $Development::Verbose in a temp variable. > compiling > linking This can also be accomplished with the "local()" function. Run. $Development::Verbose=0. Run. That new value is seen by anyone accessing the variable. Local is also a good way to create a temporary variable and make sure you dont step on someone else's variable of the same name. allows you to assign a temp value to it. } } sub Link { if ($Development::Verbose) { print "linking\n". } } sub Run { if ($Development::Verbose) { print "running\n". package Development. our $Verbose=1. and then sets $Development::Verbose back to its original value. } Compile. The "my" lexical variables were not introduced until perl version 4. } Perl originally started with nothing but package variables. RunSilent. and hashes are mandatory. in the same way "our" variables go into the current package namespace. package MyArea. sub Ping {print "ping\n".1 Subroutine Sigil Subroutines use the ampersand ( & ) as their sigil. you may access it with just NAME if you are in the correct package. the sigil for subroutines is optional. MyArea::Ping. &Ping.2 Named Subroutines Below is the named subroutine declaration syntax: sub NAME BLOCK NAME can be any valid perl identifier. or with a fully package qualified name if you are outside the package. &MyArea::Ping.5 Subroutines Perl allows you to declare named subroutines and anonymous subroutines. The NAME of the subroutine is placed in the current package namespace. And you can use the optional ampersand sigil in either case. But while the sigils for scalars. So once a named subroutine is declared. similar to the way you can declare named variables and anonymous variables. BLOCK is a code block enclosed in parenthesis. 5. > > > > ping ping ping ping 64 of 138 . arrays.} Ping. 5. } package YourArea. For this reason. sub Ping {print "ping\n". > ref returned 'CODE' 5. sub what_is_it { my ($scalar)=@_. my $string = ref($scalar). things like Data::Dumper cannot look inside the code block and show you the actual code. looking in current package YourArea > ping > ping > Undefined subroutine &YourArea::Ping 5. you MUST use a fully package qualified subroutine name to call the subroutine. package MyArea. and similar to how {} returns a hash reference. print Dumper $temp. &MyArea::Ping. Instead it does not even try and just gives you a place holder that returns a dummy string.Once the current package declaration changes.3 Anonymous Subroutines Below is the anonymous subroutine declaration syntax: sub BLOCK This will return a code reference. what_is_it($temp). } my $temp = sub {print "Hello\n". # error. print "ref returned '$string'\n". > $VAR1 = sub { "DUMMY" }. my $temp = sub {print "Hello\n". similar to how [] returns an array reference.}.4 Data::Dumper and subroutines The contents of the code block are invisible to anything outside the code block. 65 of 138 . MyArea::Ping. &Ping.}. a subroutine can be declared with a prototype. assigning a value to an element in @_ will change the value in the original variable that was passed into the subroutine call. The subroutine will not know if the list of scalars it recieves came from scalars.5. sub swap { (@_) = reverse(@_). warn "two is '$two'". } 66 of 138 . swap($one. these arguments fit nicely into an array. warn "one is '$one'". sub swap { my ($left. > one is 'I am two' > two is 'I am one' Assigning to the entire @_ array does not work.$right)=@_.$right)=@_. return $left<=>$right.6 Accessing Arguments inside Subroutines via @_ Inside the subroutine.$two). you have to assign to the individual elements. the arguments are accessed via a special array called @_. my $two = "I am two". } my $one = "I am one". arrays. The @_ array can be processed just like any other regular array. Subroutine parameters are effectively IN/OUT. which are discussed later. the preferred way to extract them is to assign @_ to a list of scalars with meaningful names. If swap were defined like this.$left). since all the arguments passed in were reduced to list context. Therefore. If the arguments are fixed and known.5 Passing Arguments to/from a Subroutine Any values you want to pass to a subroutine get put in the parenthesis at the subroutine call. 5. To avoid some of the list context crushing. the variables $one and $two would remain unchanged. } The @_ array is "magical" in that it is really a list of aliases for the original arguments passed in. all arguments go through the list context crushing machine and get reduced to a list of scalars. or hashes. @_ = ($right. For normal subroutines. The original containers are not known inside the subroutine. sub compare { my ($left. # explicitly pass @_ $code_ref->( 'one'. # pass new parameters 67 of 138 . > 2. This can cause subtly odd behaviour if you are not expecting it. dereferencing using the "&" may cause imlied arguments to be passed to the new subroutine. my $temp = sub {print "Hello\n". > 3 > ]. For this reason. 'two' ). &$temp. # preferred > Hello > Hello > Hello 5. sub second_level { print Dumper \@_. This generally is not a problem with named subroutines because you probably will not use the "&" sigil. &second_level.7 Dereferencing Code References Dereferencing a code reference causes the subroutine to be called. $temp->(). the current @_ array gets implicitely passed to the subroutine being called. $code_ref->().2. no implicit @_ $code_ref->(@_). > $VAR1 = [ > 1. &{$temp}.5. the arrow operator is the preferred way to dereference a code reference.}. However.3). The preferred way is to use the arrow operator with parens. # pass nothing. A code reference can be dereferenced by preceding it with an ampersand sigil or by using the arrow operator and parenthesis "->()". } sub first_level { # using '&' sigil and no parens. } first_level(1.8 Implied Arguments When calling a subroutine with the "&" sigil prefix and no parenthesis. # doesn't look like I'm passing any params # but perl will pass @_ implicitely. when using code referernces. my @arr_var = this_is_false.3). Returning a simple "undef" value (or 0 or 0. > 2. but in array context. # return a single scalar sub ret_scal { return "boo". > $VAR1 = \'boo'. } my $scal_var = ret_scal. This is the preferred way to return false in a subroutine. > 3 > ]. it will create an array with the first element set to undef. 5. print Dumper \$scal_var. an array with one or more elements is considered true.5. 68 of 138 . A return statement by itself will return undef in scalar context and an empty list in list context. > $VAR1 = [ > 1. } my @arr_var = ret_arr.2.0 or "") will work in scalar context. print Dumper \@arr_var. sub this_is_false { return.10 Returning False The return value of a subroutine is often used within a boolean test. # return a list of values sub ret_arr { return (1.9 Subroutine Return Value Subroutines can return a single value or a list of values. In boolean context. # undef or empty list } my $scal_var = this_is_false. or it can be implied to be the last statement of the subroutine. An explicit return statement is the preferred approach if any return value is desired. The problem is that the subroutine needs to know if it is called in scalar context or array context. The return value can be explicit. > 'UUUUUUUUUUUU' > ].pl'. print Dumper \@info. and it occurred at line 13 of the file. Caller takes one argument that indicates how far back in the call stack to get its information from. > 1. void=undef evaluated text if an eval block true if created by "require" or "use" internal use only. } HowWasICalled. > 2. The call occurred in package main. that information is hidden in lexical scope. disregard Note in the example above. > undef. 69 of 138 .pl. For information about the current subroutine. > 13. The package qualified name of the subroutine that was called was main::HowWasICalled.11 Using the caller() Function in Subroutines The caller() function can be used in a subroutine to find out information about where the subroutine was called from and how it was called. scalar=0. The caller() function returns a list of information in the following order 0 1 2 3 4 5 6 7 8 9 $package $filename $line $subroutine $hasargs $wantarray $evaltext $is_require $hints $bitmask package namespace at time of call filename where called occurred line number in file where call occurred name of subroutine called true if explicit arguments passed in list=1. > 'main::HowWasICalled'. I ran the code in a file called test. disregard internal use only. sub HowWasICalled { my @info = caller(0). The package qualified name must be given since you dont know what package is current where the subroutine was called from. > '. the default package namespace.5. >$VAR1 = [ > 'main'./test. > undef. > undef. use caller(0). Or it could have been called and the return value assigned to a scalar. } CheckMyWantArray. print "wantarray is '$wantarray'\n". my $wantarray = $info[5]. sub CheckMyWantArray { my @info = caller(0). $ wantarray is '0' > wantarray is '1' 70 of 138 . # undef my $scal = CheckMyWantArray. The subroutine could have been called in void context meaning the return value is thrown away. This indicates what return value is expected of the subroutine from where it was called.12 The caller() function and $wantarray The argument of interest is the $wantarray argument. > scal is '3' >$VAR1 = [ > 'alpha'. BEGIN -> execute block as soon as it is compiled. and END. Interpreting: executing the machine usable. print "scal is '$scal'\n". The BEGIN block is immediate. They are code blocks that are prefixed with BEGIN.5. CHECK. my $scal = ArrayProcessor(@arr). > 'charlie' > ]. INIT. Compiling: translating the source text into machine usable internal format. it will always be in one of two modes: compiling or interpreting. even before compiling anything else. 71 of 138 . internal format.. # 3 my @ret_arr = ArrayProcessor(@arr).. > 'bravo'. Perl has some hooks to allow access into these different cycles. } else { return scalar(@_). } } my @arr=qw(alpha bravo charlie). return unless(defined($wantarray)). if($wantarray) { return @_. ArrayProcessor(@arr).13 Using wantarray to Create Context Sensitive Subroutines You can use the wantarray variable from caller() to create a subroutine that is sensitive to the context in which it was called. sub ArrayProcessor { my @info = caller(0). my $wantarray = $info[5]. # alpha . print Dumper \@ret_arr. 6 Compiling and Interpreting When perl works on your source code. INIT { print "INIT 2\n" } BEGIN { print "BEGIN 2\n" CHECK { print "CHECK 2\n" END { print "END 2\n" } > > > > > > > > > BEGIN 1 BEGIN 2 CHECK 2 CHECK 1 INIT 1 INIT 2 normal END 2 END 1 } } } } 7 Code Reuse. A perl module is really just a file with an invented name and a ". INIT-> Schedule these blocks for execution after the CHECK blocks have executed. The "pm" is short for "perl moduled". END -> Schedule for execution after normal code has completed. Perl Modules Lets say you come up with some really great chunks of perl code that you want to use in several different programs. CHECK -> Schedule these blocks for execution after all source code has been compiled. and then you would use the "use" statement to read in the module. The best place to put code to be used in many different programs is in a "Perl Module".The other blocks. Multiple BEGIN blocks are executed immediately in NORMAL declaration order. and perhaps they have some private data associated with them. If you had some handy code for modeling a dog. including normal code. Multiple CHECK blocks are scheduled to execute in REVERSE declaration order. do not execute until after the entire program has been compiled. normal code -> Schedule normal code to execute after all INIT blocks. Multiple ENDblocks are scheduled to execute in REVERSE declaration order. Perhaps you have some subroutines that are especially handy. Multiple INIT blocks are scheduled to execute in NORMAL declaration order. When anything other than a BEGIN block is encountered.pm.pm" extension. 72 of 138 . END { print "END 1\n" } CHECK { print "CHECK 1\n" BEGIN { print "BEGIN 1\n" INIT { print "INIT 1\n" } print "normal\n". but perl continues compiling the rest of the program. they are compiled and scheduled for exeuction. you might put it in a module called Dog. and then "use" that module. The Dog module declares its package namespace to be Dog. Dog. Once the Dog module has been used. Generally. etc back on. Dog. } 1. use Data::Dumper.pm package Dog. use warnings. which. use strict. # MUST BE LAST STATEMENT IN FILE All perl modules must end with "1.The content of a perl module is any valid perl code. The module then declares a subroutine called "speak". would have to be in the same directory. perl modules contain declarations. like any normal subroutine. It is standard convention that all perl modules start out with a "package" declaration that declares the package namespace to be the same as the module name. 73 of 138 . a file called script.pm did not return a true value\ 8 The use Statement The "use" statement allows a perl script to bring in a perl module and use whatever declarations have been made available by the module." otherwise you will get a compile error: SortingSubs.pl could bring in the Dog module like this: use Dog. Dog::Speak. such as subroutine declarations and possibly declarations of private or public variables. These declared subroutines can be called and public variables can be accessed by any perl script that uses the module. sub Speak { print "Woof!\n". Continuing our example.pm and script.pl. anyone can call the subroutine by calling Dog::Speak. After any new package declaration you will need to turn warnings. Here is an example of our Dog module. ends up in the current package namespace. > Woof! Both files. ###filename: Dog. If you want to create a subdirectory just for your modules. it will execute immediately after being compiled. These are all valid MODULENAMES: use use use use Dog. it would be a "/"." Module names with all upper case letters are just ugly and could get confused with built in words. though. Formally The "use" statement can be formally defined as this: use MODULENAME ( LISTOFARGS ). The "use" statement is exactly equivalent to this: BEGIN { require MODULENAME.pm 9. User created module names should be mixed case.'/home/username/perlmodules'). Pets::Dog. For Linux style systems.9 The use Statement. When performing the search for the module file. So perl would look for Pets::Dog::GermanShepard in Pets/Dog/ for a file called GermanShepard. you can add this subdirectory to @INC and perl will find any modules located there. Pets::Cat::Perian. Pets::Dog::GermanShepard. So this will not work: push(@INC. meaning it would be either a single identifier or multiple identifiers separated by double-colons. such as "use warnings. 74 of 138 . "require" will translate the doublecolons into whatever directory separator is used on your system. Module names with all lower case are reserved for built in pragmas. Because the "require" statement is in a BEGIN block. } The MODULENAME follows the package namespace convention. Perl will initialize this array to some default directories to look for any modules. use Dogs. The "require" statement is what actually reads in the module file.1 The @INC Array The "require" statement will look for the module path/file in all the directories listed in a global array called @INC. MODULENAME->import( LISTOFARGS ). note that the home directory symbol "~" is only meaningful in a linux shell. Perl does not understand it. for you Linux heads.4 The require Statement Once the require statement has found the module. Consult your shell documentation to determine how to set this environment variable. You could say something like this: BEGIN { push(@INC. This means any executable code gets executed. If you don't have PERL5LIB set.3 The PERL5LIB and PERLLIB Environment Variables The "require" statement also searches for MODULENAME in any directories listed in the environment variable called PERL5LIB. } use Dogs. The "use" statement will get compiled and execute immediately. Any code that is not a declaration will execute at this point. 9. So if you want to include a directory under your home directory. Just say something like this: use lib '/home/username/perlmodules'. 9. 9. perl will search for MODULENAME in any directory listed in the PERLLIB environment variable. use Dogs. the module gets executed immediately as well. so @INC will not be changed when "use" is called. The "glob" function uses the shell translations on a path. perl compiles it. The PERL5LIB variable is a colon separated list of directory paths. 75 of 138 . The MODULENAME->import statement is then executed.This is because the "push" statement will get compiled and then be scheduled for execution after the entire program has been compiled.2 The use lib Statement The "use lib" statement is my preferred way of adding directory paths to the @INC array. use lib glob('~/perlmodules'). before the "push" is executed. because it does not need a BEGIN block. Also. you will need to call "glob" to translate "~" to something perl will understand.'/home/username/perlmodules'). Because the require statement is in a BEGIN block. So. if your perl module declares a subroutine called "import" then it will get executed at this time.5 MODULENAME -> import (LISTOFARGS) The MODULENAME->import(LISTOFARGS) statement is a "method call". to be more accurate. one of the bells and whistles of a method call is a thing called "inheritance". instead of having to say: use Data::Dumper. 76 of 138 .9. Which is why you can say use Data::Dumper. The import method is a way for a module to import subroutines or variables to the caller's package. A subroutine called "Dumper" gets imported into your package namespace. which has not been introduced yet. Basically. This happens when you use Data::Dumper in your script. which has not been introduced yet. print Data::Dumper \@var. if your perl module OR ITS BASE CLASS(ES) declares a subroutine called "import" then it will get executed at this time. More advancedly. A method call is a fancy way of doing a subroutine call with a couple of extra bells and whistles bolted on. print Dumper \@var. sub Speak { print "Woof!\n". } sub import { warn "calling import". then calling ref($rarr) will return the string "ARRAY". #!/usr/local/bin/perl ###filename:script. my $str = ref($rarr). warn "just before use Dog". my $rarr = \@arr. with the following args $VAR1 = [ 'Dog'. and a reference. The bless() function is the basis for Object Oriented Perl./script. warn "executing normal code".9. } 1.pl line 6. calling import at Dog.pl use warnings. ###filename:Dog.pm line 4. warn "just after use Dog".3). just after use Dog at .pl line 4. just before use Dog at . use strict. > str is 'ARRAY' # referent # reference to referent # call ref() 77 of 138 . Dog::Speak. Quick reference refresher: Given an array referent. print "with the following args\n". @arr. print Dumper \@_. but bless() by itself is overwhelmingly simple. warn "str is '$str'".pm line 7. # MUST BE LAST STATEMENT IN FILE > > > > > > > > > > executing normal code at Dog. $rarr=\@arr.6 The use Execution Timeline The following example shows the complete execution timeline during a use statement.pm package Dog. use Data::Dumper. use strict.2. use warnings./script. my @arr=(1. use Data::Dumper. Woof! 10 bless() The bless() function is so simple that people usually have a hard time understanding it because they make it far more complicated than it really is. use Dog ('GermanShepard'). 'GermanShepard' ]. All bless does is change the string that would be returned when ref() is called. The bless function will then return the original reference.Normally. CODE at . ref() will return SCALAR./script./script.3)./script. SCALAR at . warn "str is '$str'"."Four"))."Action")). The bless function takes a reference and a string as input.pl line 8.ref(4). > str is 'Counter' # referent # reference to referent # call ref() Since bless() returns the reference. Action at ."Box")). '' at . warn warn warn warn warn > > > > > ref(\4).pl line 7. STRING. All bless() does is affect the value returned by ref(). ARRAY at .pl line 8./script. warn ref(bless([]./script. HASH. my $str = ref($rarr). ref(sub{}).pl line 6. 78 of 138 . CODE.pl line 7. ARRAY."Curlies")). The bless function modifies the referent pointed to by the reference and attaches the given string such that ref() will return that string. That is it. Note this is exactly the same as the code in the first example. my $rarr = \@arr."'". "Counter"). > > > > Four at . Curlies at .pl line 5. or empty-string depending on what type of referent it is referred to./script. bless REFERENCE.pl line 5.pl line 6. bless($rarr./script. ref({})./script. but with one line added to do the bless: my @arr=(1. ref([]). Here is an example of bless() in action.2. we can call ref() on the return value and accomplish it in one line: my $sca=4. "'". warn ref(bless(sub{}. HASH at . warn ref(bless(\$sca. Box at .pl line 4./script. warn ref(bless({}. however it was given a new name. In perl. > Woof So it is almost the same as using a package qualified subroutine name Dog::Speak. bless() changes the name of a referent. An invocant can be several different things. it goes into the current package namespace. So what is different? 79 of 138 . Dog -> Speak. but we had to do some handwaving to get beyond it. Here is the generic definition: INVOCANT -> METHOD ( LISTOFARGS ). If a religious figure took water and blessed it. sub Speak { print "Woof\n". It does not affect the contents of the referent.You might be wondering why the word "bless" was chosen. Or you can use the fully qualified package name. > Woof > Woof A method call is similar. The MODULENAME->import(LISTOFARGS) was a method call. then people would refer to it as "holy water". and because of that name it might be used differently. Dog::Speak. only the name returned by ref(). calling it a fancy subroutine call. } Speak. The INVOCANT is the thing that "invoked" the METHOD. the referent might be used differently or behave differently. The constitution and makeup of the water did not change. We would see this difference with method calls. Quick review of package qualified subroutine names: When you declare a subroutine. 11 Method Calls We have seen method calls before. You can call the subroutine using its short name if you are still in the package. and be guaranteed it will work every time. but the simplest thing it can be is just a bareword package name to go look for the subroutine called METHOD. package Dog. But because of this new name. > > > > > > > $VAR1 = [ 'Dog'. } } Dog -> Speak (3).. # 'Dog' my $count = shift(@_). use Data::Dumper. my $invocant = shift(@_). The second difference between a subroutine call and a method call is inheritance. some more useful than others. use warnings. Woof Woof Woof This may not seem very useful. but an INVOCANT can be many different things. 3 ]. 80 of 138 . # 3 for(1 . use strict. package Dog. the INVOCANT always gets unshifted into the @_ array in the subroutine call. sub Speak { print Dumper \@_. $count) { print "Woof\n".First. and called that subroutine. use Data::Dumper.pl use Shepard. use warnings. The specific breeds of dogs will likely be able to inherit some behaviours (subroutine/methods) from a base class that describes all dogs.. ###filename:Dog. sniff. Notice that script.pm package Dog. use base Dog. sniff". It did not find one. And script. my $count = shift(@_).1 Inheritance Say you want to model several specific breeds of dogs. Shepard->Speak(2). for(1 . Say we model a German Shepard that has the ability to track a scent better than other breeds.pl used Shepard. #!/usr/local/bin/perl ###filename:script. Even though perl ended up calling Dog::Speak. 81 of 138 .pl always used Shepard as the invocant for its method calls.pm line 8. The unexplained bit of magic is that inheritance uses the "use base" statement to determine what packages to inherit from. perl first looked in the Shepard namespace for a subroutine called Shepard::Speak.pm line 6. When script. Also notice that the subroutine Dog::Speak received an invocant of "Shepard". use strict. Woof at Dog. sub Speak { my $invocant = shift(@_). It then looked for a subroutine called Dog::Speak. sub Track { warn "sniff. So then it looked and found a BASE of Shepard called Dog.11. found one. Shepard->Track.pl called Shepard->Speak. sniff at Shepard. $count) { warn "Woof". ###filename:Shepard. warn "invocant is '$invocant'". not Dog. which was "Shepard" in this case. } 1.pm line 4. Shepard INHERITED the Speak subroutine from the Dog package. } } 1. But German Shepards still bark like all dogs.pm line 8.pm package Shepard. perl still passes Dog::Speak the original invocant. Woof at Dog. > > > > invocant is 'Shepard' at Dog. MODULENAME). but it is the way perl searches. The @ISA array contains any packages that are BASE packages of the current package. so you will want to learn it. therefore ISA. push(@ISA. The push(@ISA. left-to-right. you can change it with a module from CPAN. is functionally identical to this: BEGIN { require MODULENAME.MODULENAME) is new. Here is perl's default inheritance tree: 82 of 138 . The @ISA array is named that way because "Shepard" IS A "Dog". The search order is depth-first. If this approach does not work for your application. This is not necessarily the "best" way to search. When a method call looks in a package namespace for a subroutine and does not find one.11.pm using the search pattern that we described in the "use" section earlier. } The require statement goes and looks for MODULENAME. it will then go through the contents of the @ISA array.2 use base This statement: use base MODULENAME. 3 INVOCANT->isa(BASEPACKAGE) The "isa" method will tell you if BASEPACKAGE exists anywhere in the @ISA inheritance tree. # TRUE (Woof) Child->can("Track"). # TRUE Child->isa("Shepard").4 INVOCANT->can(METHODNAME) The "can" method will tell you if the INVOCANT can call METHODNAME successfully. # FALSE 11. # FALSE (can't track a scent) 83 of 138 . Shepard->can("Speak").Imagine a Child module has the following family inheritance tree: Perl will search for Child->Method through the inheritance tree in the following order: Child Father FathersFather FathersMother Mother MothersFather MothersMother 11. Child->isa("MothersMother"). "Dog". my $invocant=bless {}. ###filename:Dog.'Dog'.pl use Dog.pm line 8. it passes the original invocant. Well. If you called ref($invocant). Shepard->Track. we just need to speak different sentences now. Woof at Dog. } } 1. use Data::Dumper. my $count = shift(@_). The my $invocant=bless {}. to the method as the first argument. is the new line. 84 of 138 . invocant is 'Dog=HASH(0x8124394)' at Dog. When it finds the method. ### BLESSED INVOCANT $invocant->Speak(2). Perl allows a more interesting invocant to be used with method calls: a blessed referent. the anonmous hash. use warnings. {}. we already know all the grammar we need to know about Object Oriented Perl Programming. use strict.pm package Dog. warn "invocant is '$invocant'". Here is our simple Dog example but with a blessed invocant. Remember bless() changes the string returned by ref(). So perl uses "Dog" as its "child" class to begin its method search through the hierarchy tree. The bless part creates an anonymous hash. $count) { warn "Woof".pm line 6.11. In fact.pm line 8. it would return the string "Dog". since we have an anonymous hash passed around anyway. maybe we could use it to store some information about the different dogs that we are dealing with. Woof at Dog. for(1 . perl will call ref() on the reference and use the string returned as starting package namespace to begin searching for the method/subroutine. #!/usr/local/bin/perl ###filename:script. when using a reference as an invocant. sub Speak { my $invocant = shift(@_).5 Interesting Invocants So far we have only used bareword invocants that correspond to package names.. Well. and blesses it with the name "Dog". 13 Object Oriented Perl Object oriented perl does not use an implied "Computer" as the subject for its sentences. Then we create a Dog module that uses Animal as its base to get the contructor. set yourself to 5. the way the subject "you" is implied in the sentence: "Stop!" The verb and direct object of Picard's sentences become the subroutine name in proceedural programming. so we create an Animal. First. The method prints the name of the dog when they bark so we know who said what. Phasors. When you hear "procedural" think of "Picard". set warp drive to 5. set shields to "off". 85 of 138 . fire_weapons qw(phasers photon_torpedoes). puts the Name into a hash. and returns a reference to the hash. all the perl coding we have done has been "procedural" perl. Picard always gave the ship's computer commands in the form of procedural statements.12 Procedural Perl So far. We then add a method to let dogs bark. Instead. Warp Drive. Computer. the subject "computer" is implied. set_shield(0).pm perl module. as in Captain Picard of the starship Enterprise. This module contains one subroutine that takes the invocant and a Name. fire weapons: phasers and photon torpedoes. set_warp(5). Torpedoes. In procedural programming. our Dog module. it uses what was the direct object in the procedural sentences and makes it the subject in object oriented programming. set yourself to "off". But how would we code these sentences? Let's start with a familiar example. blesses the hash. Perhaps we are coding up an inventory system for a pet store. This subroutine is an object contructor. This return value becomes a "Animal" object. Whatever is left become arguments passed in to the subroutine call. we want a common way to handle all pets. Assume we want to keep track of several dogs at once. Shields. Computer. fire yourself. fire yourself. Computer. The subject of the sentences was always "Computer". warn "$name says Woof". it would be "Pet. #!/usr/local/bin/perl ###filename:script. speak for yourself".pm line 7.pm package Dog. } > Butch says Woof at Dog. The subject of the sentence is "Pet". 86 of 138 .pm package Animal. } # have every pet speak for themselves. Object Oriented Programming statements are of the form: $subject -> verb ( adjectives. This is object oriented programming. > Fluffy says Woof at Dog. etc ).pm line 7. adverbs. # create 3 Dog objects and put them in @pets array foreach my $name qw(Butch Spike Fluffy) { push(@pets.pm line 7. ###filename:Dog. } 1. rather than the implied "Computer" of procedural programming. } 1. > Spike says Woof at Dog. Notice the last foreach loop in script. Dog->New($name)). The script then goes through the array and calls the Speak method on each object. foreach my $pet (@pets) { $pet->Speak. ###filename:Animal.pl creates three dogs and stores them in an array.The script.$invocant). my $name=shift(@_). sub New { my $invocant=shift(@_). because if you translated that statement to English. return bless({Name=>$name}. use base Animal.pl use Dog. sub Speak { my $obj=shift. my @pets.pl says $pet->Speak. my $name=$obj->{Name}. Fluffy says Woof at Dog. Fang says Meow at Cat. ###filename:Cat. Notice how the last loop goes through all the pets and has each one speak for themselves. my $name=$obj->{Name}. Fluffy says Meow at Cat.pl use Dog. Dog->New($name)).pm line 7.pm line 7. the animal will say whatever is appropriate for its type. sub Speak { my $obj=shift. This is polymorphism.1 Class The term "class" is just an Object Oriented way of saying "package and module". #!/usr/local/bin/perl ###filename:script. Expanding our previous example. And each object just knows how to do what it should do. use base Animal.pm package Cat. The code processes a bunch of objects and calls the same method on each object.pm line 7. 13. use Cat. Whether its a dog or cat. } 1. } # create some cat objects foreach my $name qw(Fang Furball Fluffy) { push(@pets. } # polymorphism at work > > > > > > Butch says Woof at Dog. Spike says Woof at Dog. warn "$name says Meow".2 Polymorphism Polymorphism is a real fancy way of saying having different types of objects that have the same methods.13.pm line 7. #create some dog objects foreach my $name qw(Butch Spike Fluffy) { push(@pets. } # have all the pets say something. we might want to add a Cat class to handle cats at the pet store.pl to put some cats in the @pets array. Cat->New($name)).pm line 7. foreach my $pet (@pets) { $pet->Speak. Then we modify script. 87 of 138 .pm line 7. Furball says Meow at Cat. my @pets. use Data::Dumper. } 1. the only way that Shepard can call Dog's version of speak is to use a fully package qualified name Dog::Speak(). SUPER is a way of saying. If Shepard's version of Speak simply said $_[0]->Speak. Consider the big family tree inheritance diagram in the "use base" section of this document (the one with Child as the root. and has a Speak method that growls. > Spike says Grrr at Shepard. ###filename:Dog. "look at my ancestors and call their version of this method. use base Dog. 88 of 138 . > Spike says Woof at Dog. But this scatters hardcoded names of the base class throughout Shepard.pm line 8.3 SUPER SUPER:: is an interesting bit of magic that allows a child object with a method call that same method name in its parent's class.13." There are some limitations with SUPER.pm line 6. sub New { return bless({Name=>$_[1]}. and then calls its ancestor's Dog version of speak. warn "$name says Woof". warn "$name says Grrr".pm package Dog. ###filename:Shepard. $dog->Speak. The Shepard module uses Dog as its base. and FathersFather. $_[0]->SUPER::Speak. To shorten the example.pl use warnings. FathersMother.pm. Without the magic of SUPER. it would get into an infinitely recursive loop. } sub Speak { my $name=$_[0]->{Name}. Back to the dogs. use Shepard.$_[0]).pm package Shepard. which would make changing the base class a lot of work. } 1. ### SUPER #!/usr/local/bin/perl ###filename:script. the constructor New was moved to Dog. my $dog=Shepard->New("Spike"). sub Speak { my $name=$_[0]->{Name}. use strict. etc as grandparents). Father and Mother as parents. This means if the method you wanted FathersFather to call was in MothersMother. For example. Rather than modify the original code to do what you want it to. and then call SUPER::ContributeToWedding. it is legitimate to have what you might consider to be "universal" methods that exist for every class. Many times a class might exist that does ALMOST what you want it to do. MothersMother was a complete stranger he would not meet for years to come. built-in way to do this in perl. SUPER does have its uses. you could instead create a derived class that inherits the base class and rewrite the method to do what you want it to do. could be considered a bad thing.Imagine an object of type "Child". and every base class has a method called "ContributeToWedding". SUPER looks up the hierarchy starting at the class from where it was called. though. then SUPER will not work. the only modules that will get looked at is "FathersFather" and "FathersMother". The FatherOfTheBride would pay for the wedding. and so on and so forth. since you would assume that Father was designed only knowing about FathersFather and FathersMother. every class could do its part. SUPER will do the trick. Unfortunately. So designing Father to rely on his future. the FatherOfTheGroom would pay for the rehersal dinner. have-not-even-met-her-yet mother-in-law. This could be considered a good thing. When Father was coded.pm" module available on CPAN. If "Father" has a method called Speak and that method calls SUPER::Speak. In that case. Instead of a class called "Child" imagine a class called "CoupleAboutToGetMarried". I will refer you to the "NEXT. there is no easy. you might want a method that calls its parent method but then multiplies the result by minus one or something. However. In cases like this. 89 of 138 . } 1. Just prior to deleting the object and any of its internal data. The NEXT.pm line 7. perl will call the DESTROY method on the object. $dog=undef. sub New { return bless({Name=>$_[1]}. With an object that has a complex hierarchical family tree. then Mother's not going to handle her demise properly and you will likely have ghosts when you run your program.13." has been sold"). ###filename:Dog. perl silently moves on and cleans up the object data. my $dog=Dog->New("Spike"). use strict.4 Object Destruction Object destruction occurs when all the references to a specific object have gone out of lexical scope. If no such method exists. 90 of 138 . use Data::Dumper. The module can be designed for procedural programming or object oriented programming (OO). If Mother and Father both have a DESTROY method. 14 Object Oriented Review 14.1 Modules The basis for code reuse in perl is a module.pm module on CPAN also solves this limitation. perl will only call the FIRST method of DESTROY that it finds in the ancestry. #!/usr/local/bin/perl ###filename:script.$_[0]). use Dog. The DESTROY method has similar limitations as SUPER. A perl module is a file that ends in ".pl use warnings.pm" and declares a package namespace that (hopefully) matches the name of the file.pm package Dog. and the object is scheduled for garbage collection. then the module is sometimes referred to as a "class". > Spike has been sold at Dog. If the module is OO. } sub DESTROY { warn (($_[0]->{Name}). and different instances had Name values of "Fluffy". The best way of calling a constructor is to use the arrow operator: my $object = Classname->Constructor(arguments). it will provide subroutine declarations that can be called like normal subroutines. 14. it can be used as an object. and the reference is a handy way of passing the object around. and the Values correspond to the attribute values specific to that particular instance. perform operations. this would look like: my $pet = Animal->New('Spike').2 use Module The code in a perl module can be made available to your script by saying "use MODULE. A method is simply a subroutine that receives a reference to the instance variable as its first argument. 14. Constructors are subroutines that create a reference to some variable. The name of the constructor can be any valid subroutine name. Any double-colons (::) in the module name being used get converted to a directory separator symbol of the operating system. Methods should be thought of as "actions" to be performed on the instance." The "use" statement will look for the module in the directories listed in the PERL5LIB environment variable and then the directories listed in the @INC variable. and then return the reference. it will declare subroutines that will be used as constructors or methods. 91 of 138 .4 Methods Once an instance of an object has been constructed. use the reference to bless the variable into the class. The best way to add directories to the @INC variable is to say use lib "/path/to/dir".pm class. Keys to the hash correspond to object attribute names. "Spike". but is usually "new" or "New". methods can be called on the instance to get information. In the Animal. and so on. The object is usually a hash.14. Once its referent is blessed. etc. one key was "Name". change values. If the module is designed for object oriented use. or "verbs" in a sentences with instances being the "subject". In the Animal.3 bless / constructors If the module is designed for procedural programming.pm module example. there is a "h2xs" utility. 14. this would look like: $pet->Speak.In the above examples. 92 of 138 .pm example. the Cat and Dog classes both inherit from a common Animal class. There is a CPAN website. the GermanShepard derived class overrode the Dog base class Speak method with its own method. If a derived class overrides a method of its base class. which contains a plethora of perl module for anyone to download. then a derived class can override the base class method by declaring its own subroutine called MethodName. The preferred way of calling a method is using the arrow method. which automates the creation of a module to be uploaded to CPAN. Both Dog and Cat classes inherit the constructor "New" method from the Animal base class. In one example above. 15 CPAN CPAN is an acronym for "Comprehensive Perl Archive Network". The class inheriting from a base class is called a "derived class". In the Animal. The only way to accomplish this is with the SUPER:: pseudopackage name. In the examples above. And finally. "Speak" was a method that used the Name of the instance to print out "$name says woof". To have a class inherit from a base class. which provides a shell interface for automating the downloading of perl modules from the CPAN website.6 Overriding Methods and SUPER Classes can override the methods of their base classes. $ObjectInstance -> Method ( list of arguments ). it may want to call its base class method. There is also a CPAN perl module. use the "use base" statement. This allows many similar classes to put all their common methods in a single base class. use base BaseClassName. which is "Plain Old Documentation" embedded within the perl modules downloaded from CPAN. A "perldoc" utility comes with perl that allows viewing of POD. The GermanShepard method named Speak called the Dog version of Speak by calling: $obj->SUPER::Speak. If a base class contains a method called "MethodName". 14.5 Inheritance Classes can inherit methods from base classes. 60 From here. if available. Lynx is a text-based webbrowser. If the README looks promising.pm will want you to have lynx installed on your machine. which will be a file with a tar. Here is the standard installation steps for a module. which is an Internet file transfer program for scripts. and most importantly.) > gunzip NEXT-0.pm file to your home directory (or anywhere you have read/write priveledges). you can view the README file that is usually available online before actually downloading the module. then download the tarball.gz > tar -xf NEXT-0. Once you find a module that might do what you need.tar. The "exit" step is shown just so you remember to log out from root. most modules install with the exact same commands: > > > > > > perl Makefile. Install these on your system before running the CPAN module.. you give it the name of a module.60. This example is for the NEXT. CPAN.tar. untars it. 15. then you can copy the . and it downloads it from the web. it will install any dependencies for you as well.PL make make test su root make install exit The "make install" step requires root priveledges.60.60.15.2 CPAN.pm file).cpan. The Perl Module The CPAN.pm module is a module that automates the installation process. source code to install perl. 93 of 138 . and installs it for you. FAQs.org CPAN contains a number of search engines. More interestingly.pm will also want ncftpget. The Web Site CPAN is a website that contains all things perl: help. and then set your PERL5LIB environment variable to point to that directory.pm module. CPAN. If you install a module that requires a separate module.1 CPAN. a plethora of perl module so that you can re-use someone else's code. which is contained in NEXT-0.gz extension.tar > cd NEXT-0. it installs both modules for you with one command. If you do not have root priveledges and the module is pure perl (just a .gz (you should use the latest version. pm module with all the trimmings.pm module is run from a shell mode as root (you need root priviledges to do the install). > su root > perl -MCPAN -e shell The first time it is run.The CPAN.pm module shell. The first thing you will probably want to do is make sure you have the latest and greatest cpan. Most can be answered with the default value (press <return>). Change to root. and then run the CPAN. Here is a log from the CPAN being run for the first time on my machine: Are you ready for manual configuration? [yes] CPAN build and cache directory? [/home/greg/.de:1404] cpan> That last line is actually the cpan shell prompt.PL? [0] Your ftp_proxy? Your http_proxy? Your no_proxy? Select your continent (or several nearby continents) [] > 5 Select your country (or several nearby countries) [] > 3 Select as many URLs as you like. it will go directly to the cpan shell prompt. The next time you run cpan.. type the following command: cpan> install Bundle::CPAN cpan> reload cpan 94 of 138 . separated by blanks [] > 3 5 7 11 13 Enter another URL or RETURN to quit: [] Your favorite WAIT server? [wait://ls6-] Cache size for build directory (in MB)? [10] Perform cache scanning (atstart or never)? [atstart] Cache metadata (yes/no)? [yes] Your terminal expects ISO-8859-1 (yes/no)? [yes] Policy on building prerequisites (follow.uni-dortmund. it will ask you a bunch of configuration questions.informatik. put them on one line. At the cpan shell prompt. cpan. 95 of 138 .PL make make test make dist # # # # # # # # # # create the structure go into module dir edit module go into test dir edit test go to make directory create make file run make run the test in t/ create a . From this point forward. You do not need to know POD to write perl code. The remainder of the commands show you what steps you need to take to create a . any new topics may include a discussion of a perl builtin feature or something available from CPAN.t cd .pm cd . Some of perl's builtin features are somewhat limited and a CPAN module greatly enhances and/or simplifies its usage..tar. If cpan encounters any problems. cpan> force install Tk Or you can get the tarball from www. This first command will create a directory called "Animal" and populate it with all the files you need. > perldoc perldoc > perldoc NEXT > perldoc Data::Dumper 15. among other things.3 Plain Old Documentation (POD) and perldoc Modules on CPAN are generally documented with POD. > > > > > > > > > > h2xs -X -n Animal cd Animal/lib edit Animal.. But you will want to know how to read POD from all the modules you download from CPAN. Perl comes with a builtin utility called "perldoc" which allows you to look up perl documentation in POD format. perl comes with a utility "h2xs" which./t edit Animal. will create a minimal set of all the files needed.You should then be able to install any perl module with a single command.tar.org and install it manually.4 Creating Modules for CPAN with h2xs If you wish to create a module that you intend to upload to CPAN. it will not install the module. If you wish to force the install. 15.gz file 16 The Next Level You now know the fundamentals of procedural perl and object oriented perl. then use the "force" command. perl Makefile.pm module. POD embeds documentation directly into the perl code.gz tarball that can be uploaded to CPAN and downloaded by others using the cpan. A "-in" switch might be followed by a filename to be used as input. such as "-h" to get help.pl -f optionfile.pl -v Switches might by followed by an argument associated with the switch. The "--" argument could be used to indicate that arguments before it are to be used as perl arguments and that arguments after it are to be passed directly to a compiler program that the script will call. > anotherscript. > thisscript.pl -h A "-v" switch might turn verboseness on. you could say "-xyz".txt 96 of 138 . > thatscript. Options operate independent of spacing.txt" would be easier than putting those options on the command line each and every time that function is needed.pl -in data. and using "-f optionfile.+define+FAST More advanced arguments: Single character switches can be lumped together.txt -v Options can have full and abreviated versions to activate the same option. > myscript.17 Command Line Arguments Scripts are often controlled by the arguments passed into it via the command line when the script is executed. Instead of "-x -y -z".txt An argument could indicate to stop processing the remaining arguments and to process them at a later point in the script.pl -in test42. A file might contain a number of options always needed to do a standard function.txt -f = options. This would allow a user to use the option file and then add other options on the command line. -f=options. Options can be stored in a separate file. > mytest. Arguments might be a switch.txt -. The option "-verbose" can also work as "-v". txt'.17. > '-f'./test.pl" > $VAR1 = [ > '*.pl > $VAR1 = [ > 'script. > 'options. > 'test. except for quoted strings. 97 of 138 .pl "*.1 @ARGV Perl provides access to all the command line arguments via a global array @ARGV.txt then you will need to put it in quotes or your operating system will replace it with a list of files that match the wildcard pattern.pl *.pl print Dumper \@ARGV.txt "hello world" > $VAR1 = [ > '-v'.pl -v -f options.pl'. If you need to pass in a wild-card such as *. ###filename:script. > .pl' > ]. > 'hello world' > ]. > .pl' > ]./test. ###filename:script. > script.pl print Dumper \@ARGV. The text on the command line is broken up anywhere whitespace occurs. If you need to handle a few simple command line arguments in your script. } } Package (our) variables are used instead of lexical (my) variables because command line arguments should be considered global and package variables are accessible everywhere. process it.} elsif($arg eq '-f') {$fn = shift(@ARGV). then the above example of processing @ARGV one at a time is probably sufficient. code would check for $main::debug being true or false. 98 of 138 . if($arg eq '-h') {print_help. If you need to support more advanced command line arguments.pl script. Package "main" is the default package and a good place to put command line argument related variables. or a lot of simple ones. If you wrote a script that used "-d" to turn on debug information. and move on to the next argument. # create subroutines for options that do things sub print_help { print "-d for debug\n-f for filename\n". our $debug=0. The following example will handle simple command line arguments described above.} else { die "unknown argument '$arg'".} elsif($arg eq '-d') {$debug=1. Outside of package main. you should look at Getopt::Declare. you would want this to turn on debugging in any separate modules that you write. not just your script. while(scalar(@ARGV)) { my $arg=shift(@ARGV).} # get the first/next argument and process it. # create a package variable for each option our $fn.Simple processing of @ARGV would shift off the bottom argument. but will still be recognized if placed on the command line (secret arguments) 99 of 138 . Getopt::Declare recognizes -v as a version command line argument. our $debug=0. The <unknown> marker is used to define what action to take with unknown arguments.} -d Turn debugging On {$main::debug=1. it will print out the version of the file based on $main::VERSION (if it exists) and the date the file was saved. my $args = Getopt::Declare->new( q ( -in <filename> Define input file {$main::fn=$filename. The text passed into Getopt::Declare is a multi-line quoted q() string.2 Getopt::Declare The Getopt::Declare module allows for an easy way to define more complicated command line arguments. not that the starting curly brace must begin on its own line separate from the argument declaration and description.17. then that argument will not be listed in the help text. This is because Getopt::Declare recognizes -h to print out help based on the declaration.} is treated as executable code and evaled if the declared argument is encountered on the command line. > script. The text within the curly braces {$main::fn=$filename. Also. the global variables $fn and $debug are used with their full package name.-f.pl -h >Options: > > -in <filename> > -d Define input file Turn debugging On The argument declaration and the argument description is separated by one or more tabs in the string passed into Getopt::Declare. print "fn is '$fn'\n". If -v is on the command line. This string defines the arguments that are acceptable. arguments that are not defined in the declaration. print "Debug is '$debug'\n".} <unknown> { die "unknown arg '$unknown'\n". The above example with -h. } )). and -d options could be written using Getopt::Declare like this: use Getopt::Declare. If [undocumented] is contained anywhere in the command line argument description. Note that within the curly braces. Notice that -h is not defined in the string. our $fn=''. Argument files can refer to other argument files. Additional arguments can be placed in an external file and loaded with -args filename. An undocumented -Dump option turns on dumping. Normally. 5. Filenames to be handled by the script can be listed on the command line without any argument prefix 2. and any action associated with it. 4. The Error() subroutine will report the name of the file containing any error while processing the arguments. allowing nested argument files. Any arguments that are not defined will generate an error. Any options after a double dash (--) are left in @ARGV for later processing. The example on the next page shows Getopt::Declare being used more to its potential. A short and long version of the same argument can be listed by placing the optional second half in square brackets: -verb[ose] This will recognise -verb and -verbose as the same argument. 1.A [ditto] flag in the description will use the same text description as the argument just before it. 100 of 138 . place [repeatable] in the description. Verbosity can be turned on with -verbose. calling finish() with a true value will cause the command line arguments to stop being processed at that point. a declared argument will only be allowed to occur on the command line once. If the argument can occur more than once. 6. 3. It can be turned off with -quiet. Inside the curly braces. }} sub Error {die"Error: ". sub Verbose {if($verbose){print $_[0].} <unknown> Filename [repeatable] { if($unknown!~m{^[-+]}) {push(@main::files.} } ). $main::arg_file = $input.} -verb[ose] Turn verbose On {$main::verbose=1. $verbose=0.($_[0]).} -h Print Help {$main::myparser->usage. } main::Verbose ("finished parsing '$input'\n").} -quiet Turn verbose Off {$main::verbose=0. 101 of 138 . $VERSION=1. $debug=0. $myparser->parse().$unknown). $main::myparser->parse([$input]). { local($main::arg_file).} else {main::Error("unknown arg '$unknown'"). # -v will use this @files. main::Verbose ("returning to '$main::arg_file'\n").01. $dump=0.} main::Verbose("Parsing '$input'\n"). our $myparser = new Getopt::Declare ($grammar.# advanced command line argument processing use our our our our our our Getopt::Declare. } -d Turn debugging On {$main::debug=1.} --Dump [undocumented] Dump on {$main::dump=1. $arg_file='Command Line'.} -Argument separator {finish(1).} my $grammar = q( -args <input> Arg file [repeatable] { unless(-e $input) {main::Error("no file '$input'"). " from '$arg_file'\n".['-BUILD']). is the first character of the string. Create if non-existing. '>>' Append. write.e.18 File Input and Output Perl has a number of functions used for reading from and writing to files. If the first argument to open() is an undefined scalar. Do not create. Do not clobber existing file. The second argument to open() is the name of the file and an optional flag that indicates to open the file as read. If no flag is specified. 102 of 138 . The valid flags are defined as follows: '<' Read.txt') or die "Could not open file". or append. If the filehandle is stored in a scalar. the loop reads the filehandle a line at a time until the end of file is reached. <$filehandle> When used as the boolean conditional controlling a while() loop. 18. This is available in perl version 5. you can close a filehandle by calling the close function and passing it the filehandle. close($filehandle) or die "Could not close". The angle operator is the filehandle surrounded by angle brackets. you can read from a filehandle a number of ways. open(my $filehandle.6 and later and is the preferred method for dealing with filehandles. the file defaults to being opened for read. and the scalar goes out of scope or is assigned undef. use the open() function. then perl will automatically close the filehandle for you. 18. The filename is a simple string. Each pass through the conditional test reads the next line from the file and places it in the $_ variable. if present. '>' Write. Create if non-existing. perl will create a filehandle and assign it to that scalar. 18. 'filename. All file IO revolves around file handles.3 read Once open.1 open To generate a filehandle and attach it to a specific file. Do not clobber existing file. Clobber if already exists.2 close Once open. and the flag. DEFAULT. The most common is to read the filehandle a line at a time using the "angle" operator. i. } The above example is useful if every line of the file contains the same kind of information. chomp($line). This will read the next line from the filehandle. Another way to read from a filehandle is to assign the angle operator of the filehandle to a scalar.txt') or die "could not open". you can wrap it up in a subroutine to hide the error checking. This is useful if you are reading a file where every line has a different meaning. my $addr=nlin.txt'). and you are just going through each line and parsing it. and you need to read it in a certain order. close $fh. 18. my $line = $_. print $fh "once\n". To make a little more useful. '>output. } my $name=nlin. otherwise you hit the end of file. age: $age\n". address: $addr.This script shows a cat -n style function. sub nlin { defined(my $line = <$fh>) or croak "premature eof". 'input. open (my $fh. while(<$fh>) { $num++. But it requires that you test the return value to make sure it is defined. defined(my $line = <$fh>) or die "premature eof".txt'). chomp($line). print $fh "upon\n". print "Name: $name. You can then call the subroutine each time you want a line. print "$num: $line\n". use Carp.4 write If the file is opened for write or append. open (my $fh. you can write to the filehandle using the print function. my $num=0. open (my $fh. 103 of 138 . return $line. my $age =nlin. print $fh "a time\n". 'input. use File::Find. my $startup_file = glob('~/.7 File Tree Searching For sophisticated searching.6 File Globbing The glob() function takes a string expression and returns a list of files that match that expression using shell style filename expansion and translation.5 File Tests Perl can perform tests on a file to glean information about the file. chomp($pwd). including searches down an entire directory structure. my $pwd=`pwd`. 18. my @files = glob ( STRING_EXPRESSION ).6. cannot open "~/list.$pwd).18. use the File::Find module. Some common tests include: • • • • • • • • • • -e -f -d -l -r -w -z -p -S -T file file file file file file file file file file exists is a plain file is a directory is a symbolic link is readable is writable size is zero is a named pipe is a socket is a text file (perl's definition) 18. For example. } 104 of 138 . All tests return a true (1) or false ("") value about the file. if you wish to get a list of all the files that end with a .cshrc'). sub process { . FILE is a filename (string) or filehandle.1. Perl's open() function.txt"). for example. It is included in perl 5.. find(\&process.txt" because "~" only means something to the shell. This is also useful to translate Linux style "~" home directories to a usable file path. The "~" in Linux is a shell feature that is translated to the user's real directory path under the hood. All the tests have the following syntax: -x FILE The "x" is a single character indicating the test to perform.. use glob(). To translate to a real directory path.txt expression: my @textfiles = glob ("*. 1 The system() function If you want to execute some command and you do not care what the output looks like. a non-zero indicates some sort of error. The system() function executes the command string in a shell and returns you the return code of the command. When you execute a command via the system() function. the path to the file is available in $File::Find::dir and the name of the file is in $_. which means the user will see the output scroll by on the screen.txt".} if((-d $fullname) and ($fullname=~m{CVS})) {$File::Find::prune=1. and then it is lost forever.system("command string"). The package variable $File::Find::name contains the name of the current file or directory.} } For more information: perldoc File::Find 19 Operating System Commands Two ways to issue operating system commands within your perl script are: 1.The process() subroutine is a subroutine that you define. If your process() subroutine sets the package variable $File::Find::prune to 1. the output of the command goes to STDOUT. This process() subroutine prints out all . In Linux.txt files encountered and it avoids entering any CVS directories.txt$}) {print "found text file $fullname\n". you might do something like this: my $cmd = "rm -f junk.backticks `command string`. a return value of ZERO is usually good. Your process() subroutine can read this variable and perform whatever testing it wants on the fullname. if ($fullname =~ m{\. 105 of 138 . then you will likely want to use the system() function. sub process { my $fullname = $File::Find::name. return. If you process() was called on a file and not just a directory. # returns the return value of command 2. you just want to know if it worked. # returns STDOUT of command 19. So to use the system() function and check the return value. then find() will not recurse any further into the current directory. The process() subroutine will be called on every file and directory in $pwd and recursively into every subdirectory and file below. system($cmd)==0 or die "Error: could not '$cmd'". use the backtick operator. We have not covered the GUI toolkit for perl (Tk).2 The Backtick Operator If you want to capture the STDOUT of a operating system command. then you will want to use the backtick operator. If you call system() on the finger command. but I prefer to use m{} and s{}{} because they are clearer for me. 19. find out what matched the patterns.3 Operating System Commands in a GUI If your perl script is generating a GUI using the Tk package.substitute 3. and substitute the matched patterns with new strings. There are two ways to "bind" these operators to a string expression: 1. If you want to capture what goes to STDOUT and manipulate it within your script. you can search strings for patterns. you should look into Tk::ExecuteCommand. my $string_results = `finger username`. You can then process this in your perl script like any other string. A simple example is the "finger" command on Linux. If you type: linux> finger username Linux will dump a bunch of text to STDOUT.!~ pattern does match string expression pattern does NOT match string expression 106 of 138 . The most common delimiter used is probably the m// and s/// delimiters.transliterate m{PATTERN} s{OLDPATTERN}{NEWPATTERN} tr{OLD_CHAR_SET}{NEW_CHAR_SET} Perl allows any delimiter in these operators. using the Tk::ExecuteCommand module. With regular expressions. This is a very cool module that allows you to run system commands in the background as a separate process from your main perl script. The $string_results variable will contain all the text that would have gone to STDOUT. 20 Regular Expressions Regular expressions are the text processing workhorse of perl. such as {} or () or // or ## or just about any character you wish to use. but if you are doing system commands in perl and you are using Tk. The module provides the user with a "Go/Cancel" button and allows the user to cancel the command in the middle of execution if it is taking too long.=~ 2. There are three different regular expression operators in perl: 1.match 2. all this text will go to STDOUT and will be seen by the user when they execute your script. there is a third way to run system commands within your perl script.19. print "decrypted: $love_letter\n".Binding can be thought of as "Object Oriented Programming" for regular expressions. # upgrade my car my $car = "my car is a toyota\n". where "verb" is limited to 'm' for match. if($email =~ m{Free Offer}) { $email="*deleted spam*\n". adverbs. print "encrypted: $love_letter". and 'tr' for translate. } print "$email\n". $love_letter =~ tr{A-Za-z}{N-ZA-Mn-za-m}. Caesar cypher my $love_letter = "How I love thee. The above examples all look for fixed patterns within the string. This is functionally equivalent to this: $_ =~ m/patt/. > > > > > > *deleted spam* my car is a jaguar encrypted: Ubj V ybir gurr. Binding in Regular Expressions can be looked at in a similar fashion: $string =~ verb ( pattern ). etc ). decrypted: How I love thee. Generic OOP structure can be represented as $subject -> verb ( adjectives. 's' for substitution. print "$car\n". 107 of 138 . $car =~ s{toyota}{jaguar}. You may see perl code that simply looks like this: /patt/.\n". Here are some examples: # spam filter my $email = "This is a great Free Offer\n". $love_letter =~ tr{A-Za-z}{N-ZA-Mn-za-m}. # simple encryption. Regular expressions also allow you to look for patterns with different types of "wildcards". > My car is a Jaguar 108 of 138 . my $car = "My car is a Toyota\n".20. subjecting the pattern to one pass of variable interpolation as if the pattern were contained in double-quotes. my $actual = "Toyota". $car =~ s{$actual}{$wanted}. print $car.1 Variable Interpolation The braces that surround the pattern act as double-quote marks. my $wanted = "Jaguar". This allows the pattern to be contained within variables and interpolated during the regular expression. <<"MARKER" filename: output.dat size: 512 filename: address. yet the pattern matched.1024 > input. each containing the pattern {filename: } followed by one or more non-whitespace characters forming the actual filename. Each line also contains the pattern {size: } followed by one or more digits that indicate the actual size of that file. 109 of 138 . foreach my $line (@lines) { #################################### # \S is a wildcard meaning # "anything that is not white-space".txt size: 1024 filename: input. # the "+" means "one or more" #################################### if($line =~ m{filename: (\S+)}) { my $name = $1. It can contain wildcards such as {\d}. ########################### $line =~ m{size: (\d+)}. my @lines = split "\n". } } > output.1048576 20. my $size = $1. ########################### # \d is a wildcard meaning # "any digit. 0-9".$size\n". the parenthesis were in the pattern but did not occur in the string.db size: 1048576 MARKER . Notice in the above example.20.2 Wildcard Example In the example below. we process an array of lines.db. print "$name.3 Defining a Pattern A pattern can be a literal pattern such as {Free Offer}.dat.512 > address.txt. It can also contain metacharacters such as the parenthesis. " . The following are metacharacters in perl regular expression patterns: \ | ( ) [ ] { } ^ $ * + ? . hello and goodday! " dogs and cats and sssnakes put me to sleep. (somewhat faster) .. no capturing. Experiment with what matches and what does not match the different regular expression patterns. alternation: (patt1 | patt2) means (patt1 OR patt2) grouping (clustering) and capturing | ( ) (?: ) grouping (clustering) only. Hummingbirds are ffffast. then match that character class.20. my .. \ (backslash) if next character combined with this backslash forms a character class shortcut. any single character # match 'cat' 'cbt' 'cct' 'c%t' 'c+t' 'c?t' ." ." zzzz.} # .} # () grouping and capturing # match 'goodday' or 'goodbye' if($str =~ m{(good(day|bye))}) {warn "group matched.t}){warn "period".} 110 of 138 .4 Metacharacters Metacharacters do not get interpreted as literal characters. Change the value assigned to $str and re-run the script. # | alternation # match "hello" or "goodbye" if($str =~ m{hello|goodbye}){warn "alt". If not a shortcut. John". if($str =~ m{c." $str = "Dear sir. Instead they tell perl to interpret the metacharacter (and sometimes the characters around metacharacter) in a different way. match any single character (usually not "\n") [ ] * + ? { } ^ $ define a character class. " Sincerely. captured '$1'". then simply treat next character as a non-metacharacter. match any single character in class (quantifier): match previous item zero or more times (quantifier): match previous item one or more times (quantifier): match previous item zero or one time (quantifier): match previous item a number of times in given range (position marker): beginning of string (or possibly after "\n") (position marker): end of string (or possibly before "\n") Examples below. . period at . match previous item one or more # match 'snake' 'ssnake' 'sssssssnake' if($str =~ m{s+nake}){warn "plus sign". match previous item zero or more # match '' or 'z' or 'zz' or 'zzz' or 'zzzzzzzz' if($str =~ m{z*}){warn "asterisk".... caret at . Normally..... group matched... previous item is optional # match only 'dog' and 'dogs' if($str =~ m{dogs?}){warn "question". and 'fffffast' if($str =~ m{f{3.. The expression "2 + 3 * 4" does the multiplication first and then the addition.. yielding the result of "20".. 111 of 138 . multiplication has a higher precedence than addition. 20.. asterisk at . match previous. matches end of string # match 'John' only if it occurs at end of string if($str =~ m{John$}){warn "dollar". yielding the result of "14". 'ffffast'.# [] define a character class: 'a' or 'o' or 'u' # match 'cat' 'cot' 'cut' if($str =~ m{c[aou]t}){warn "class".. plus sign at .} # ^ position marker. captured 'goodday' at . matches beginning of string # match 'Dear' only if it occurs at start of string if($str =~ m{^Dear}){warn "caret". The expression "(2 + 3) * 4" forces the addition to occur first.5}ast}){warn "curly brace". class at .} # ? quantifier.} # * quantifier. question at .5 Capturing and Clustering Parenthesis Normal parentheses will both cluster and capture the pattern they contain...} # {} quantifier...} # + quantifier.. Clustering affects the order of evaluation similar to the way parentheses affect the order of evaluation within a mathematical expression. 3 <= qty <= 5 # match only 'fffast'.} > > > > > > > > > > alt at .} # $ position marker. curly brace at . dollar at . the pattern would match "goodday" or "bye". print "Hello. In the below example.} Cluster-only parenthesis don't capture the enclosed pattern. so we use cluster-only parentheses. The pattern {cats?} will apply the "?" quantifier to the letter "s". Clustering parentheses will also Capture the part of the string that matched the pattern within parentheses. starting at 1. } > You said goodday to John 112 of 138 . sometimes you just want to cluster without the overhead of capturing. . so we do not need to capture the "day|bye" part of the pattern.. we want to cluster "day|bye" so that the alternation symbol "|" will go with "day" or "bye". will contain the values from the capturing parentheses. captured '$1'". ########################################## # $1 $2 if($test =~ m{(good(?:day|bye)) (\w+)}) { print "You said $1 to $2\n". The pattern {(cats)?} will apply the "?" quantifier to the entire pattern within the parentheses. and they don't count when determining which magic variable. variables. John Smith Because capturing takes a little extra time to store the captured result into the $1. my $test="Firstname: John Lastname: Smith". $3. my $last = $2. rather than "goodday" or "goodbye".... ############################################ # $1 $2 $test=~m{Firstname: (\w+) Lastname: (\w+)}. $1. matching either "cat" or "cats". $2. matching "cats" or null string.Clustering parentheses work in the same fashion.. Without the clustering parenthesis. my $test = 'goodday John'. $3 . my $first = $1. .. $first $last\n". $2. $2. Each left parenthesis increments the number used to access the captured string.. if($str =~ m{(good(?:day|bye))}) {warn "group matched. The captured values are accessible through some "magical" variables called $1. The left parenthesis are counted from left to right as they occur within the pattern. > Hello. The pattern contains capturing parens around the entire pattern. Characters classes have their own special metacharacters.d. [aeiouAEIOU] [0123456789] any vowel any digit 20. and unicode characters. inclusively." metacharacter will match any single character.20. This is not the same as "any consonant". Perl will then match any one character within that class. shortcut \d \D \s \S \w \W class [0-9] [^0-9] [ \t\n\r\f] [^ \t\n\r\f] [a-zA-Z0-9_] [^a-zA-Z0-9_] any digit any NON-digit any whitespace any NON-whitespace any word character (valid perl identifier) any NON-word character description 113 of 138 .7 Shortcut Character Classes Perl has shortcut character classes for some more common classes. 20. Character ranges are based off of ASCII numeric values. You can easily define smaller character classes of your own using the square brackets []. This is equivalent to a character class that includes every possible character. The class [^aeiou] will match punctuation. Warning: [^aeiou] means anything but a lower case vowel.c.f. all previously defined metacharacters cease to act as metacharacters and are interpreted as simple literal characters.b.1 Metacharacters Within Character Classes Within the square brackets used to define a character class.6. Whatever characters are listed within the square brackets are part of that character class. ^ If it is the first character of the class. numbers. [a-f] indicates the letters a. \ (backslash) demeta the next character (hyphen) Indicates a consecutive character range.6 Character Classes The ".e. then this indicates the class is any character EXCEPT the ones in the square brackets. By default. max}? match zero or more times (match as little as possible and still be true) match one or more times (match as little as possible and still be true) match at least min times (match as little as possible and still be true) match at least "min" and at most "max" times (match as little as possible and still be true) This example shows the difference between minimal and maximal quantifiers. quantifiers are "greedy" or "maximal". *? +? {min. } {min. } > greedy '1234000' > thrifty '1234' 114 of 138 .max} match zero or more times (match as much as possible) match one or more times (match as much as possible) match zero or one times (match as much as possible) match exactly "count" times match at least "min" times (match as much as possible) match at least "min" and at most "max" times (match as much as possible) 20. making them "non-greedy". if($string =~ m{^(\d+)0+$}) { print "greedy '$1'\n".8 Greedy (Maximal) Quantifiers Quantifiers are used within regular expressions to indicate how many times the previous item occurs within the pattern.9 Thrifty (Minimal) Quantifiers Placing a "?" after a quantifier disables greedyness. * + ? {count} {min. Minimal quantifiers match as few characters as possible and still be true. my $string = "12340000". "thrifty". or "minimal" quantifiers. } if($string =~ m{^(\d+?)0+$}) { print "thrifty '$1'\n".20. meaning that they will match as many characters as possible and still be true.}? {min. some symbols do not translate into a character or character class. Matches the end of the string. Not affected by /m modifier. but will chomp() a "\n" if that was the last character in string. Instead. word "b"oundary A word boundary occurs in four places. Not affected by /m modifier. they translate into a "position" within the string.20. If the /m (multiline) modifier is present. the pattern before and after that anchor must occur within a certain position within the string. matches "\n" also. Use the pos() function to get and set the current \G position within the string. Match the end of string only. If the /m (multiline) modifier is present.10 Position Assertions / Position Anchors Inside a regular expression pattern. If a position anchor occurs within a pattern. matches "\n" also. If this is the first regular expression begin performed on the string then \G will match the beginning of the string. 115 of 138 . Matches the end of the string only. Indicates the position after the character of the last pattern match performed on the string. Match the beginning of string only. 1) at a transition from a \w character to a \W character 2) at a transition from a \W character to a \w character 3) at the beginning of the string 4) at the end of the string \B \G NOT \b usually used with /g modifier (probably want /c modifier too). ^ $ \A \z \Z \b Matches the beginning of the string. The \G anchor represents the position within the string where the previous regular expression finished. using the \G anchor to indicate the location where the previous regular expression finished. if($test1=~m{\bjump\b}) { print "test1 matches\n". Without the "cg" modifiers. The "cg" modifiers tells perl to NOT reset the \G anchor to zero if the regular expression fails to match. unless($test2=~m{\bjump\b}) { print "test2 does not match\n".20.2 The \G Anchor The \G anchor is a sophisticated anchor used to perform a progression of many regular expression pattern matches on the same string. The first time a regular expression is performed on a string. the \G anchor represents the beginning of the string. } my $test2='Pick up that jumprope. } > test1 matches > test2 does not match 20.'. The \G anchor is usually used with the "cg" modifiers. This will allow a series of regular expressions to operate on a string. The location of the \G anchor within a string can be determined by calling the pos() function on the string.'. 116 of 138 . the first regular expression that fails to match will reset the \G anchor back to zero.10. The pos() function will return the character index in the string (index zero is to the left of the first character in the string) representing the location of the \G anchor. This example matches "jump" but not "jumprope": my $test1='He can jump very high.1 The \b Anchor Use the \b anchor when you want to match a whole word pattern but not part of a word. Assigning to pos($str) will change the position of the \G anchor for that string.10. the script prints the pos() value of the string. my $str = "Firstname: John Lastname: Smith Bookcollection: Programming Perl. resulting in a much slower script. 20. Notice how the pos() value keeps increasing. $str =~ m{pattern}modifiers. Modifiers are placed after the regular expression. print "pos is ".?}cg) { print "book is '$1'\n".The example below uses the \G anchor to extract bits of information from a single string. $str =~ s{oldpatt}{newpatt}modifiers. Impatient Perl". $str=~m{\GBookcollection: }cg. $str =~ tr{oldset}{newset}modifiers. The problem is that a substitution creates a new string and and copies the remaining characters to that string. the difference can be quite significant. print "pos is ". outside any curly braces.]+). print "pos is ". $str=~m{\GFirstname: (\w+) }cg. In the above example. $str=~m{\GLastname: (\w+) }cg. while($str=~m{\G\s*([^."\n".pos($str). 117 of 138 .pos($str). the speed difference would not be noticable to the user. but if you have a script that is parsing through a lot of text. my $first = $1. my $last = $1.11 Modifiers Regular expressions can take optional modifiers that tell perl additional information about how to interpret the regular expression. After every regular expression.pos($str). } > > > > > > > > pos is 16 pos is 32 book is 'Programming Perl' pos is 65 book is 'Perl Cookbook' pos is 80 book is 'Impatient Perl' pos is 95 Another way to code the above script is to use substitution regular expressions and substitute each matched part with empty strings."\n". Perl Cookbook."\n". 20. If the "s" modifier is used.11. This allows the pattern to be spread out over multiple lines and for regular perl comments to be inserted within the pattern but be ignored by the regular expression engine." will NOT match "\n" ^ and $ position anchors will match literal beginning and end of string and also "\n" characters within the string.11. then the default behaviour (the "m" behaviour) is to treat the string as multiple lines. (DEFAULT) ". With the "s" modifier. then the "^" anchor will only match the literal beginning of the string. the "$" anchor will match the end of the string or any "\n" characters. s 20. or tr{}{}. ^ matches after "\n" $ matches before "\n" o compile the pattern Once. m{cat}i matches cat. ^ and $ position anchors will only match literal beginning and end of string." character set will match any character EXCEPT "\n". then the "^" anchor will match the beginning of the string or "\n" characters. and the ". ^ and $ indicate start/end of "line" instead of start/end of string. etc ignore spaces and tabs and carriage returns in pattern. m treat string as Multiple lines. If the "m" modifier is used. ". If neither a "m" or "s" modifier is used on a regular expression." will match "\n" within the string. using "^" and "$" to indicate start and end of lines.1 Global Modifiers The following modifiers can be used with m{}. CAT. and the ". even if the string is multiple lines with embedded "\n" characters. CAt.2 The m And s Modifiers The default behaviour of perl regular expressions is "m". perl will default to "m" behaviour. the "$" anchor will only match the literal end of string. 118 of 138 . If a string contains multiple lines separated by "\n". i x case Insensitive. treating strings as a multiple lines. CaT. possible speed improvement. treat string as a Single line. the "s" modifier forces perl to treat it as a single line. s{}{}." class will match any character including "\n". Cat. $string =~ m{Library: (. the captured string includes the newline "\n" characters which shows up in the printed output.*)}s."Impatient Perl". The singleline version prints out the captured pattern across three different lines. Notice in the "s" version. $string =~ m{Library: (.*)}.This example shows the exact same pattern bound to the exact same string. print "multiline is '$1'\n". > > > > > default is 'Programming Perl ' multiline is 'Programming Perl ' singleline is 'Programming Perl Perl Cookbook Impatient Perl' 119 of 138 . print "default is '$1'\n". print "singleline is '$1'\n".*)}m. $string =~ m{Library: (. my $string = "Library: Programming Perl \n" ."Perl Cookbook\n" . The only difference is the modifier used on the regular expression. 20.11.3 The x Modifier'; $string { ^ \s* \s* ( ( ( \s* }x; my my my my =~ m ([-+]?) # positive or negative or optional \d+ ) # integer portion \. \d+ )? # fractional is optional e \s* [+-]? \d+)? # exponent is optional || || || || ''; ''; ''; ''; $sign = $1 $integer = $2 $fraction = $3 $exponent = $4 print print print print > > > > "sign is '$sign'\n"; "integer is '$integer'\n"; "fraction is '$fraction'\n"; explicitely. 120 of 138 20.12 Modifiers For m{} Operator The following modifiers apply to the m{pattern} operator only: g cg Globally find all matchs. Without this modifier, m{} will find the first occurrence of the pattern. Continue search after Global search fails. This maintains the \G marker at its last matched position. Without this modifier, the \G marker is reset to zero if the pattern does not match. 20.13 Modifiers for s{}{} Operator The following modifiers apply to the s{oldpatt}{newpatt} operator only. g e Globally replace all occurrences of oldpatt with newpatt interpret newpatt as a perl-based string Expression. the result of the expression becomes the replacement string. 20.14 Modifiers for tr{}{} Operator The following modifiers apply to the tr{oldset}{newset} operator only. c d s Complement the searchlist Delete found but unreplaced characters Squash duplicate replaced chracters 20.15 The qr{} function The qr{} function takes a string and interprets it as a regular expression, returning a value that can be stored in a scalar for pattern matching at a later time. my $number = qr{[+-]?\s*\d+(?:\.\d+)?\s*(?:e\s* [+-]?\s*\d+)?}; 20.16 Common Patterns. 121 of 138 20.17 Regexp::Common' 21 Parsing with Parse::RecDescent. 122 of 138. 123 of 138 print "> ". I'm a doctor. Bones. > He's dead. Bones. while(<>) { $parse->McCoy($_). Jim.\n". Jim! Dammit. my $parse = new Parse::RecDescent($grammar). Jim. I'm a doctor. print "Dammit. print "press <CTL><D> when finished\n". not a magician! Just do what you can. } "dead.\n". Jim!\n".} curse: 'Dammit' | 'Goddammit' name: 'Jim' | 'Spock' | 'Scotty' job: 'magician' | 'miracle worker' | 'perl hacker' pronoun: "He's" | "She's" | "It's" | "They're" }. not a" job "!" "Just do what you can. print "try typing these lines:\n".Here is a quick example from perldoc Parse::RecDescent use Parse::RecDescent. } > > > > > > > > > > > > try typing these lines: He's dead." name "!" "Shut up. or pronoun 124 of 138 . Jim! Shut up. not a magician!\n". > You green blooded Vulcan! ERROR (line 1): Invalid McCoy: Was expecting curse. Bones. Jim. not a magician! press <CTL><D> when finished > Dammit." { print | pronoun { print | <error> name ". print "> ". print "He's dead. I'm a doctor. Bones. I'm a doctor. my $grammar = q{ McCoy: curse ". Parse::RecDescent currently must read in the entire text being parsed. 22 Perl. etc. and Tk So far. the widget will not show up in the GUI). The ->grid method invokes the geometry manager on the widget and places it somewhere in the GUI (without geometry management. If the data you are mining is fairly complex. drawing the GUI. Once installed. GUI. then Parse::RecDescent would be the next logical module to look at. Perl has a Graphical User Interface (GUI) toolkit module available on CPAN called Tk. the word "Hi" is printed to the command line (the -command option). When the above example is executed.-column=>1). At the command line. $top->Button ( -text=>"Hello". cpan> install Tk The Tk module is a GUI toolkit that contains a plethora of buttons and labels and entry items and other widgets needed to create a graphical user interface to your scripts. Parse::RecDescent is slower than simple regular expressions. When the mouse left-clicks on the button. The MainLoop is a subroutine call that invokes the event loop. my $top=new MainWindow. the Tk module has a widget demonstration program that can be run from the command line to see what widgets are available and what they look like. 125 of 138 . so you need to learn basic regular expression patterns before you use Parse::RecDescent. MainLoop. the longer it takes to run. And Parse::RecDescent has an additional learning curve above and beyond perl's regular expressions. > widget Here is a simple "hello world" style program using Tk: use Tk. Since it recursively descends through its set of rules. Parse::RecDescent uses perl"s regular expression patterns in its pattern matching. -command=>sub{print "Hi\n". a small GUI will popup on the screen (the MainWindow item) with a button (the $top->Button call) labeled "Hello" (the -text option). all the examples have had a command line interface.Parse::RecDescent is not the solution for every pattern matching problem. responding to button clicks. User input and output occurred through the same command line used to execute the perl script itself. and normal regular expressions are becoming difficult to manage. which can be a problem if you are parsing a gigabyte-sized file.} )->grid(-row=>1. type "widget" to run the demonstration program. the more rules. And it would be impossible to give any decent introduction to this module in a page or two. 126 of 138 .Several large books have been written about the Tk GUI toolkit for perl. I recommend the "Mastering Perl/Tk" book. If you plan on creating GUI's in your scripts. while not being considered responsible for modifications made by others. to use that work under the conditions stated herein. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work. It complements the GNU General Public License. that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. PREAMBLE The purpose of this License is to make a manual. which means that derivative works of the document must themselves be free in the same sense. But this License is not limited to software manuals.2001. either commercially or noncommercially.2002 Free Software Foundation. it can be used for any textual work. Inc. Any member of the public is a licensee. textbook. with or without modifying it. Suite 330.23 GNU Free Documentation License Version 1. but changing it is not allowed. or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it. The "Document". 59 Temple Place.2. because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. unlimited in duration. Secondarily. regardless of subject matter or whether it is published as a printed book. You accept the license if you 127 of 138 . 1. this License preserves for the author and publisher a way to get credit for their work. November 2002 Copyright (C) 2000. royalty-free license. MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document. Such a notice grants a world-wide. We recommend this License principally for works whose purpose is instruction or reference. This License is a kind of "copyleft". 0. We have designed this License in order to use it for manuals for free software. which is a copyleft license designed for free software. refers to any such manual or work. below. Boston. in any medium. and is addressed as "you". ) The relationship could be a matter of historical connection with the subject or with related matters. The "Cover Texts" are certain short passages of text that are listed. A "Modified Version" of the Document means any work containing the Document or a portion of it. SGML 128 of 138 . that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor. A copy made in an otherwise Transparent file format whose markup. The "Invariant Sections" are certain Secondary Sections whose titles are designated. or absence of markup. A Front-Cover Text may be at most 5 words. ethical or political position regarding them. modify or distribute the work in a way requiring permission under copyright law. in the notice that says that the Document is released under this License. has been arranged to thwart or discourage subsequent modification by readers is not Transparent..copy. LaTeX input format. Examples of suitable formats for Transparent copies include plain ASCII without markup. in the notice that says that the Document is released under this License. A "Transparent" copy of the Document means a machine-readable copy. a Secondary Section may not explain any mathematics. and a Back-Cover Text may be at most 25 words. The Document may contain zero Invariant Sections. philosophical. An image format is not Transparent if used for any substantial amount of text. or of legal. commercial. as being those of Invariant Sections. or with modifications and/or translated into another language. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. if the Document is in part a textbook of mathematics. either copied verbatim. Texinfo input format. A copy that is not "Transparent" is called "Opaque". (Thus. as Front-Cover Texts or Back-Cover Texts. and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. If the Document does not identify any Invariant Sections then there are none. represented in a format whose specification is available to the general public. XCF and JPG. provided that this License. and standard-conforming simple HTML. such as "Acknowledgements". plus such following pages as are needed to hold. "Title Page" means the text near the most prominent appearance of the work's title. for a printed book. legibly. However. 129 of 138 . PostScript or PDF designed for human modification. you may accept compensation in exchange for copies. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. the title page itself. under the same conditions stated above.or XML using a publicly available DTD. These Warranty Disclaimers are considered to be included by reference in this License. the material this License requires to appear in the title page. and that you add no other conditions whatsoever to those of this License. (Here XYZ stands for a specific section name mentioned below. "Endorsements". the copyright notices. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. and the license notice saying this License applies to the Document are reproduced in all copies. You may also lend copies. PostScript or PDF produced by some word processors for output purposes only. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors. VERBATIM COPYING You may copy and distribute the Document in any medium. SGML or XML for which the DTD and/or processing tools are not generally available. and the machine-generated HTML. 2. but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. and you may publicly display copies. "Dedications". If you distribute a large enough number of copies you must also follow the conditions in section 3.) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. Examples of transparent image formats include PNG. The "Title Page" means. or "History". For works in formats which do not have any title page as such. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. either commercially or noncommercially. preceding the beginning of the body of the text. provided that you release the Modified Version under precisely this License. you must take reasonably prudent steps. Both covers must also clearly and legibly identify you as the publisher of these copies. If you use the latter option. numbering more than 100. and continue the rest onto adjacent pages. that you contact the authors of the Document well before redistributing any large number of copies. you must enclose the copies in covers that carry. You may add other material on the covers in addition. with the Modified Version filling the role of the Document. you should put the first ones listed (as many as fit reasonably) on the actual cover. to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. free of added material. you must either include a machine-readable Transparent copy along with each Opaque copy. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document. Copying with changes limited to the covers. as long as they preserve the title of the Document and satisfy these conditions. and the Document's license notice requires Cover Texts. when you begin distribution of Opaque copies in quantity. but not required. clearly and legibly. to give them a chance to provide you with an updated version of the Document. can be treated as verbatim copying in other respects. and Back-Cover Texts on the back cover. It is requested. If the required texts for either cover are too voluminous to fit legibly. thus licensing distribution 130 of 138 . or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above. all these Cover Texts: Front-Cover Texts on the front cover. 4.3. The front cover must present the full title with all words of the title equally prominent and visible. If you publish or distribute Opaque copies of the Document numbering more than 100. and add to it an item stating at least the title. Such a section 131 of 138 . together with at least five of the principal authors of the Document (all of its principal authors. in the form shown in the Addendum below. K. G. unless they release you from this requirement. you must do these things in the Modified Version: A. If there is no section Entitled "History" in the Document. Preserve its Title. and from those of previous versions (which should. as the publisher. be listed in the History section of the Document). F. E. one or more persons or entities responsible for authorship of the modifications in the Modified Version. and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. J. Section numbers or the equivalent are not considered part of the section titles. In addition. Preserve the section Entitled "History". if it has fewer than five). unaltered in their text and in their titles. new authors. authors. create one stating the title. Include. Preserve the network location. I. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. Preserve all the Invariant Sections of the Document. B. H. C. year.and modification of the Modified Version to whoever possesses a copy of it. Use in the Title Page (and on the covers. if any) a title distinct from that of the Document. and publisher of the Document as given on its Title Page. and publisher of the Modified Version as given on the Title Page. and likewise the network locations given in the Document for previous versions it was based on. then add an item describing the Modified Version as stated in the previous sentence. L. if any. or if the original publisher of the version it refers to gives permission. For any section Entitled "Acknowledgements" or "Dedications". as authors. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself. Delete any section Entitled "Endorsements". Include an unaltered copy of this License. given in the Document for public access to a Transparent copy of the Document. immediately after the copyright notices. Preserve the Title of the section. D. if there were any. State on the Title page the name of the publisher of the Modified Version. M. year. List on the Title Page. You may use the same title as a previous version if the original publisher of that version gives permission. Preserve all the copyright notices of the Document. a license notice giving the public permission to use the Modified Version under the terms of this License. on explicit permission from the previous publisher that added the old one. You may add a passage of up to five words as a Front-Cover Text. statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. under the terms defined in section 4 above for modified versions.may not be included in the Modified Version. N. and that you preserve all their Warranty Disclaimers. you may at your option designate some or all of these sections as invariant. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. and a passage of up to 25 words as a Back-Cover Text. and multiple identical Invariant Sections may be replaced with a single copy. unmodified. you may not add another. make the title of each such section unique by 132 of 138 . You may add a section Entitled "Endorsements". If there are multiple Invariant Sections with the same name but different contents. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. The combined work need only contain one copy of this License. COMBINING DOCUMENTS You may combine the Document with other documents released under this License. to the end of the list of Cover Texts in the Modified Version. If the Document already includes a cover text for the same cover. but you may replace the old one. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document. provided that you include in the combination all of the Invariant Sections of all of the original documents. previously added by you or by arrangement made by the same entity you are acting on behalf of. provided it contains nothing but endorsements of your Modified Version by various parties--for example. add their titles to the list of Invariant Sections in the Modified Version's license notice. 5. These titles must be distinct from any other section titles. and list them all as Invariant Sections of your combined work in its license notice. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. To do this. or the electronic equivalent of covers if the Document is in electronic form. and replace the individual copies of this License in the various documents with a single copy that is included in the collection. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works. provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. 133 of 138 . the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License. 6. and any sections Entitled "Dedications". is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. forming one section Entitled "History". and distribute it individually under this License. or else a unique number. If the Cover Text requirement of section 3 is applicable to these copies of the Document. You may extract a single document from such a collection. you must combine any sections Entitled "History" in the various original documents. likewise combine any sections Entitled "Acknowledgements". In the combination. Otherwise they must appear on printed covers that bracket the whole aggregate. this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. When the Document is included in an aggregate. in parentheses. You must delete all sections Entitled "Endorsements". and follow this License in all other respects regarding verbatim copying of that document.adding at the end of it. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. provided you insert a copy of this License into the extracted document. 7. in or on a volume of a storage or distribution medium. the name of the original author or publisher of that section if known. then if the Document is less than one half of the entire aggregate. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new.org/copyleft/. sublicense or distribute the Document is void. If the Document specifies that a particular numbered version of this License "or any later version" applies to it. Replacing Invariant Sections with translations requires special permission from their copyright holders. modify.8. If a section in the Document is Entitled "Acknowledgements". so you may distribute translations of the Document under the terms of section 4. 10. the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. However. Such new versions will be similar in spirit to the present version. and any Warranty Disclaimers. You may include a translation of this License. sublicense. TRANSLATION Translation is considered a kind of modification. 9. TERMINATION You may not copy. or "History". but may differ in detail to address new problems or concerns. modify. or rights. the original version will prevail. revised versions of the GNU Free Documentation License from time to time. Each version of the License is given a distinguishing version number.gnu. from you under this License will not have their licenses terminated so long as such parties remain in full compliance. See. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer. and will automatically terminate your rights under this License. you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. "Dedications". If the Document does not specify a version 134 of 138 . Any other attempt to copy. but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. provided that you also include the original English version of this License and the original versions of those notices and disclaimers. and all the license notices in the Document. parties who have received copies. or distribute the Document except as expressly provided for under this License. or some other combination of the three. If your document contains nontrivial examples of program code. such as the GNU General Public License.. If you have Invariant Sections. If you have Invariant Sections without Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright (c) YEAR YOUR NAME. to permit their use in free software. you may choose any version ever published (not as a draft) by the Free Software Foundation. we recommend releasing these examples in parallel under your choice of free software license. no Front-Cover Texts. Version 1. distribute and/or modify this document under the terms of the GNU Free Documentation License. with the Front-Cover Texts being LIST. and with the Back-Cover Texts being LIST. Permission is granted to copy.." line with this: with the Invariant Sections being LIST THEIR TITLES. merge those two alternatives to suit the situation. replace the "with. with no Invariant Sections.number of this License. ADDENDUM: How to use this License for your documents To use this License in a document you have written. and no Back-Cover Texts. 135 of 138 .2 or any later version published by the Free Software Foundation. Front-Cover Texts and Back-Cover Texts.Texts. .....108 \z..............................17 \A.........51 Control Flow...............11 e.....................................100 -r.................... 12 Backtick Operator............102 bless....29 Autovivify......................................87 continue.............Alphabetical Index \G.......62 Arguments................... .......................25 <...........................18 each..............94 @INC......... ..24 Compiling................108 \W....36 exp.............. .............24 <$fh>....................89 Default Values.............13 Class........47 Comprehensive Perl Archive Network...............110 \b................44 DESTROY.........24 >=.................................... 87 BLOCK.........24 <=............100 -S...............................108 \w..............................51 cos.......... ...........99 ==................46 Anonymous Subroutines..........110 -d......................25 Anonymous Referents.............11..71 ** operator............................24 @_......................66 can........102 BEGIN......... 89 CPAN...........................................110 \d............93 Comparators....................86 Do What I Mean.................24 Command Line Arguments................. ....................100 -p...............68 chomp.............................................90 CPAN....100 -T....25 ||=...........107 cmp......................108 CHECK...................51 Booleans...........102 ^...........24 >......107 Character Classes......111 `........................51 elsif................................11.. The Web Site......17 FALSE.................25 abs...73.14 constructors...................................25 DEGREES.............79 Capturing Parenthesis..................................................................83 close...........18 exponentiation.......110 &&..............25 !=..11........................100 !............16 anchor...................................98 CLOSURES......68 Complex Data Structures................................................108 \S..17 CPAN..........108 \G........................37 dereferencing.......110 \s.... The Perl Module.........24 <=>...........100 -l...........11 Double quotes......................110 \Z.62 Arrays.......24 exists............108 \D........................ ......................68 bind............13 DWIM...................................63 @ARGV....100 -f...24 ||.................................58 Clustering parentheses.................51 END............17 delete....... ...23 caller()...23 136 of 138 ......68 eq............110 and............................. 111 \B..............................100 -w.............. 89 concatenation.......100 -z..........100 -e...................38 else..........110................... ..... 87 Multidimensional Arrays...16 Numify..52 ref()...................53 Package Declaration........81 pseudorandom number.....42 local()................83 pop.....100 for...75.........119 h2xs...........................31 Position Anchors.....24 length............24 next LABEL....................11 Garbage Collection....................................................92 Plain Old Documentation......100 find................15 keys.........32.....109 quotes...88 Package................. ..98 Operating System............18 read.........................65 Returning False..............25 lt.....110 Procedural Perl...........49 Named Referents..35 Header..............68 int.....................100 File Tree Searching............ 51 Fubar....................................100 File Tests.........................................53 package QUALIFIED...................................81 Object Oriented Review.109 GUI.....................57 ge.....................77.117 PERL5LIB..................102 m And s Modifiers.........37 Label.....34 round...........25 our.............72 Inheritance.....44 Regexp::Common.53 Overriding Methods..16 Interpreting....51 Implied Arguments....112 Modules............98 redo LABEL.........24 m......10 References.........55 List Context.........55 Lexical Variables................................................18 logarithms........... 88 Minimal.........................87 oct..71 perldoc.....60 log..53 ne......................102 137 of 138 ..................................86 Object Oriented Perl......50 Reference Material......52 not..72 return..........101 or....................................16 s.............25 Numeric Functions.......................... ......92 pm...........116 Regular Expressions.........79 join.................14 require........................105 Method............51......................................13 qw.................102 repetition.....................113 main............................92 Hashes.......75 isa.....30 qr.........18 random numbers.....................................24 Getopt::Declare...........17 rand.........109 Modifiers.98 File Globbing...............................File.....20 Object Destruction....................44 referent...............................45 Named Subroutines............. ...................................64 import....... ..................................92 Polymorphism...100 Greedy......................................69 POD......51 foreach.18 push...........68 INVOCANT..................21 open.......109 Metacharacters.................61 NAMESPACE.......................52 le. 88 INIT.......... 52 last LABEL.....100 File::Find .115 Quantifiers..........................................95 glob.......53 Parse::RecDescent....18 Logical Operators..53 Maximal..........69.................65 reverse.................10 if...14 Lexical Scope..............................110 Position Assertions................................15 RADIANS.......... ...............................71 use Module...........................13 sort....................................102 TMTOWTDI...23 truncate.......10 seed the PRNG............11 tr........................................87 values...............17 srand.............101 tan...61 substr..18 shift.14 sprintf........scalar (@array).......................51 unshift....................................................................109 Tk............16 Undefined..17 Thrifty................24 splice......... 88 system()...19 Strings....................13 Stringification of References................70 use base............25 The End..17 Single quotes............................................ 108 sin......51 use.... 138 of 138 ........33 spaceship operator......................119 Tk::ExecuteCommand......................................84..................114 x operator.............67 while.......18 String Literals..............................31 until.....13 Subroutines......38 wantarray...................20 sqrt.14 SUPER::.14 xor...........102 TRUE.........49 Stringify.......34 split...........21 unless......................51 write.99 x Modifier.............................................................................31 Shortcut Character Classes ...30 Scalars...............12 Script Header.............78 use lib.
https://www.scribd.com/doc/35877901/iperl
CC-MAIN-2018-13
refinedweb
32,333
70.5
Java 11 & Spring Boot 2.2 Tutorial: Build your First REST API App In this tutorial, you will learn to build your first REST API web application with Java 11, Spring 5 and Spring Boot 2.2. We'll also use NetBeans 11 as the IDE. For quickly initializing our Spring Boot application, we'll use Spring Initializr. In our example, we'll be using Spring MVC and an embedded Tomcat server to serve our application locally by inlcuding the Spring Web Starter as a dependency of our project. Spring is an open source Java EE (Enterprise Edition) framework that makes developing Java EE applications less complex by providing support for a comprehensive infrastructure and allowing developers to build their applications from Plain Old Java Objects or POJOS. Spring relieves you from directly dealing with the underlying and complex APIs such as transaction, remote, JMX and JMS APIs. Spring framework provides Dependency Injection and Inversion of Control out of the box which helps you avoid the complexities of managing objects in your application. As of Spring Framework 5.1, Spring requires JDK 8+ (Java SE 8+) and provides out of the box support for JDK 11 LTS. Spring Boot allows you to quickly get up and running with Spring framework. It provides an opinionated approach build a Spring application. Prerequisites You will need a bunch of prerequisites to successfully follow this tutorial and build your web application: - Java 11+ installed on your system. If you are using Ubuntu, check out this post for how to install Java 11 on Ubuntu, - Gradle 4.10+, - NetBeans 11, - Working knowledge of Java. Initializing a Spring 5 Project Let's now start by creating a Spring 5 project. We'll make use of the official Spring Initializr generator via its web interface. Note: You can also use the Spring Initializr generator as a CLI tool. Check out all the ways you can use it from this link. Head to the web UI of Spring Initializr and let's bootstrap our application. You'll be presented with the following interface for choosing various configuration options: - For Project, select Gradle Project, - For Language, select Java, - For Spring Boot, select 2.2.0 M3, - Under Options, make sure to select at least Java 11. You can also seed your project with any needed dependencies under Dependencies. You can search for a dependency or select it from a list. We'll add the Spring Web Starter dependency which includes Spring MVC and Tomcat as the default embedded container. This will allow us to serve our Spring 5 web application using the Tomcat server. Spring Boot starters help you quickly create Spring Boot projects without going through tedious dependency management. If you want to build a REST API web app, you would need to add various dependencies such as Spring MVC, Tomcat and Jackson. A starter allows you to add a single dependency instead of manually adding all these required dependencies. In this example, we added the Web starter ( spring-boot-starter-web) via the UI. You can find the list of available starters from this link. Fill in the other Project Metadata and click on Generate the project. Once you click on the Generate the project button, your project will be downloaded as a zip file. Open the file and extract it in your working folder. Open your favorite Java IDE. I'll be using Netbeans 11. If this is your first time using Netbeans, you'll be asked to download some dependencies like nbjavac and Gradle 4.10.2 (As the time of this writing) since our Spring 5 project is using this version of Gradle which is not installed on our system. In the files pane of the IDE, let's browse to the src/main/java/com/firstspringapp/demo/DemoApplication.java file: Note: The path and name of the bootstrapping file may be different for you depending on your chosen Package and Artifact names when you initialized the project. Our Spring 5 application is bootstrapped from the DemoApplication.java file. Let's understand the code in this file: package com.firstspringapp.demo; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } } We first import the SpringApplication and SpringBootApplication from their respective packages. Next, we declare a Java class and we annotate it with @SpringBootApplication annotation. In the main() method of our class, we call the Spring Boot’s run() method from SpringApplication to launch our Spring 5 application. @SpringBootApplication is a shorthand annotation that calls all the following annotations: @Configuration: makes the class, a source of bean definitions for the application context. @EnableAutoConfiguration: this annotation configures Spring Boot to add beans based on classpath settings and any property settings. @EnableWebMvc: typically, you would need to add the @EnableWebMvcannotation for a Spring MVC app, but Spring Boot adds it automatically when it finds spring-webmvcon the classpath. This annotates your application as a web application. @ComponentScan: this annotation configures Spring to look for other components in the firstspringapp.demopackage. In the next section, we'll see how to add a controller class and Spring will automatically find it without adding any extra configuration. Serving our Spring 5 Application with the Embedded Tomcat Server Now, let's run and serve our Spring web app. In your IDE, use click on the green Run project button or F6 on your keyboard (or also the Run -> Run project menu). This will build (if not already built) and run your project. You should get the following output: From the output window, we can see that our project is using Oracle Java 11 (In the JAVA_HOME variable). You can see that the IDE has navigated to our project's folder and executed the ./gradlew --configure-on-demand -x check bootRun command to run our web application which has executed many tasks between them bootRun. According to the official docs: The Spring Boot Gradle plugin also includes a bootRuntask that can be used to run your application in an exploded form. The bootRuntask is added whenever you apply the org.springframework.bootand javaplugins. From the output, you also see that our web application is served locally using the embedded TomcatWebServer on the 8080 port: This is because we've added the Spring Web Starter dependency when initializing our project (If your project's classpath contains the classes necessary for starting a web server, Spring Boot will automatically launch it.) See Embedded Web Servers for more information. Our web application is running at. At this point, if you visit this address with your web browser, you should see the following page: We are getting the Whitelable Error Page because at this point, we don't have any REST controllers mapped to the "/" path. Let's change that! Creating our First Spring 5 REST Controller Let's now create our first REST controller with Spring. In the src/main/java/com/firstspringapp/demo folder, create a new Java file (you can call it FirstController.java) and add the following code: package com.springfirstapp.demo; import org.springframework.web.bind.annotation.RestController; import org.springframework.web.bind.annotation.RequestMapping; @RestController public class FirstController { @RequestMapping("/") public String index() { return "Hello Spring Boot!"; } } This is simply a Java class, annotated with @RestController, which makes it ready for use by Spring MVC to handle HTTP requests. We added an index() method (You can actually call it whatever you want) annotated with @RequestMapping to map the / path to the index() method. The @RequestMapping annotation provides is used to add routing. It tells Spring that the index() method should be called when an HTTP request is sent from the client to the / path. When you visit the / path with a browser, the method returns the Hello Spring Boot! text. Note: @RestControllercombines the @Controllerand @ResponseBodyannotations used when you want to return data rather than a view from an HTTP request. The @RestControllerand @RequestMappingannotations are actually Spring MVC annotations. (i.e they are not specific to Spring Boot). Please refer to the MVC section in the official Spring docs for more information. Now stop and run your project again and go to the address with your web browser, you should see a blank page with the Hello Spring Boot! text. Congratulations! You have created your first controller. Note: We also publish our tutorials on Medium and DEV.to. If you prefer reading in these platforms, you can follow us there to get our newest articles. You can reach the author via Twitter:Follow @ahmedbouchefra About the author
https://www.techiediaries.com/java-spring-boot-rest-api-tutorial/
CC-MAIN-2020-05
refinedweb
1,432
55.74
Automated acceptance testing is essential in today's fast-paced software delivery world. A high quality set of automated acceptance tests helps you deliver valuable features sooner by reducing the wasted time spent in manual testing and fixing bugs. When combined with Behaviour-Driven Development, automated acceptance testing can guide and validate development effort, and help teams focus both on building the features that really matter and ensuring that they work. But automated acceptance testing is not easy; like any other software development activity, it requires skill, practice and discipline. Over time, even teams with the best intent can see their test suites become slow, fragile and unreliable. It becomes increasingly more difficult to add new tests to the existing suite; teams lose confidence in the automated tests, compromising the investment in the test suite, and affecting team morale. We routinely see even experienced teams, using design patterns such as Page Objects, running into this sort of issue. Page Objects are a good place to start for teams not familiar with patterns and design principles used by proficient programmers (e.g. SOLID) but the importance of bringing strong technical skills to the team should be considered early on in a project to avoid these challenges. The Screenplay Pattern (formerly known as the Journey Pattern) is the application of SOLID design principles to automated acceptance testing, and helps teams address these issues. It is essentially what would result from the merciless refactoring of Page Objects using SOLID design principles. It was first devised by Antony Marcano between 2007 - 2008, refined with thinking from Andy Palmer from 2009. It didn’t receive the name "the Journey Pattern" until Jan Molak started working with it in 2013. Several people have written about it under this name already, however, the authors now refer to it as the Screenplay Pattern. The Screenplay Pattern is an approach to writing high quality automated acceptance tests based on good software engineering principles such as the Single Responsibility Principle, and the Open-Closed Principle. It favours composition over inheritance, and employs thinking from Domain Driven Design to reflect the domain of performing acceptance tests, steering you towards the effective use of layers of abstraction. It encourages good testing habits and well-designed test suites that are easy to read, easy to maintain and easy to extend, enabling teams to write more robust and more reliable automated tests more effectively. Serenity BDD is an open source library designed to help you write better, more effective automated acceptance tests, and use these acceptance tests to produce high quality test reports and living documentation. As we will see in this article, Serenity BDD has strong built-in support for using the Screenplay Pattern straight out of the box. The Screenplay Pattern in Action In the rest of this article, we will be using Serenity BDD to illustrate the Screenplay Pattern, though the pattern itself is largely language and framework-agnostic. The application we will be testing is the AngularJS implementation of the well-known TodoMVC project (see Figure 1). Figure 1 The Todo application For simplicity, we will be using Serenity BDD with JUnit, though we could also choose to implement our automated acceptance criteria using Serenity BDD with Cucumber-JVM or JBehave. Now suppose we are implementing the “Add new todo items” feature. This feature could have an acceptance criterion along the lines of “Add a new todo item”. If we were testing these scenarios manually, it might look like this: - Add a new todo item - Start with an empty todo list - Add an item called ‘Buy some milk’ - The ‘Buy some milk’ item should appear in the todo list One of the big selling points of the Screenplay Pattern is that it lets you build up a readable API of methods and objects to express your acceptance criteria in business terms. For example, using the Screenplay Pattern, we could automate the scenario shown above very naturally like this: givenThat(james).wasAbleTo(Start.withAnEmptyTodoList()); when(james).attemptsTo(AddATodoItem.called("Buy some milk")); then(james).should(seeThat(TheItems.displayed(), hasItem("Buy some milk"))); If you have used Hamcrest matchers, this model will be familiar to you. When we use a Hamcrest matcher we are creating an instance of a matcher that will be evaluated in an assertThat method. Similarly, AddATodoItem.called() returns an instance of a ‘Task’ that is evaluated later in the attemptsTo() method. Even if you are not familiar with how this code is implemented under the hood, it should be quite obvious what the test is trying to demonstrate, and how it is going about it. We will see soon how writing this kind of test code is as easy as reading it. Declarative code written in a way that reads like business language is significantly more maintainable and less prone to error than code written in a more imperative, implementation-focused way. If the code reads like a description of the business rules, it is a lot harder for errors in business logic to slip into the test code or into the application code itself. Furthermore, the test reports generated by Serenity for this test also reflect this narrative structure, making it easier for testers, business analysts and business people to understand what the tests are actually demonstrating in business terms (see Figure 2). Figure 2: This Serenity report documents both the intent and the implementation of the test The code listed above certainly reads cleanly, but it may leave you wondering how it actually works under the hood. Let’s see how it all fits together. Screenplay Pattern tests run like any other Serenity test At the time of writing, the Serenity Screenplay implementation integrates with both JUnit and Cucumber. For example, in JUnit, you use the SerenityRunner JUnit runner, as for any other Serenity JUnit tests. The full source code of the test we saw earlier is shown here, where an “Actor” plays the role of a user interacting with the system: @RunWith(SerenityRunner.class) public class AddNewTodos { Actor james = Actor.named("James"); @Managed private WebDriver hisBrowser; @Before public void jamesCanBrowseTheWeb() { james.can(BrowseTheWeb.with(hisBrowser)); } @Test public void should_be_able_to_add_a_todo_item() { givenThat(james).wasAbleTo(Start.withAnEmptyTodoList()); when(james).attemptsTo(AddATodoItem.called("Buy some milk")); then(james).should(seeThat(TheItems.displayed(), hasItem("Buy some milk"))); } } It’s not hard to glean what this test does just by reading the code. There are however a few things here that will be unfamiliar, even if you have used Serenity before. In the following sections, we will take a closer look at the details. The Screenplay Pattern encourages strong the ‘why’ of the scenario in terms of what the user is trying to achieve in business terms. - The tasks describe what the user will do as high-level steps required to achieve this goal. - The actions say how a user interacts with the system to perform a particular task, such as by clicking on a button or entering a value into a field. As we will see, the Screenplay Pattern provides a clear distinction between goals (scenario titles), tasks (top-level of abstraction in the scenario) and actions (the lowest level of abstraction, below the tasks), which makes it easier for teams to write layered tests more consistently. The Screenplay Pattern uses an actor-centric model Tests describe how a user interacts with the application to achieve a goal. For this reason, tests read much better if they are presented from the point of view of the user (rather than from the point of ‘pages’). In the Screenplay Pattern, we call a user interacting with the system an Actor. Actors are at the heart of the Screenplay pattern (see Figure 3). Each actor has one or more. Figure 3: The Screenplay Pattern uses an actor-centric model In Serenity, creating an actor is as simple as creating an instance of the Actor class and providing a name: Actor james = Actor.named("James"); We find it useful to give the actors real names, rather than use a generic one such as “the user”. Different names can be a shorthand for different user roles or personas, and make the scenarios easier to relate to. For more information on using Personas, see Jeff Patton’s talk “Pragmatic Personas” Actors have abilities Actors need to be able to do things to perform their assigned tasks. So we give our actors “abilities”, a bit like the superpowers of a super-hero, if somewhat private WebDriver hisBrowser; We can then let James use this browser as follows:) that keeps track of the things the actor requires in order add a new Ability class to your test classes. Figure 4): Figure 4: The actor invokes the performAs() method on a sequence of tasks Tasks are just objects that implement the Task interface, and need to implement the performAs(actor) method. In fact, you can think of any Task class as basically a performAs() method alongside a supporting cast of helper methods. private OpenTheApplication openTheApplication; … james.attemptsTo(openTheApplication); This works well for very simple tasks or actions, for example ones that take no parameters. But for more sophisticated tasks or actions, a factory or builder pattern (like the one used with our earlier AddATodoItem)); } } High-level tasks are composed: - Enter the todo text in the text field - Press Return The performAs() method in the AddATodoItem class used earlier does exactly that: private final String thingToDo; @Step("{0} adds a todo item called #thingToDo") public <T extends Actor> void performAs(T actor) { actor.attemptsTo( Enter.theValue(thingToDo) .into(NewTodoForm.NEW_TODO_FIELD) .thenHit(RETURN) ); } The actual implementation uses the Enter class, a pre-defined Action class that comes with Serenity. Action classes are very similar to Task classes, except that they focus on interacting directly with the application. Serenity provides a set of basic Action classes for core UI interactions such as entering field values, clicking on elements, or selecting values from drop-down lists. In practice, these provide a convenient and readable DSL that lets you describe common low-level UI interactions needed to perform a task. In the Serenity Screenplay implementation, we use a special Target class to identify elements using (by default) either CSS or XPATH. The Target object associates a WebDriver selector with a human-readable label that appears in the test reports to make the reports more readable. You define a Target object as shown here: Target WHAT_NEEDS_TO_BE_DONE = Target.the( "'What needs to be done?' field").locatedBy("#new-todo") ; Targets are often stored in small Page-Object like classes that are responsible for one thing, knowing how to locate the elements for a particular UI component, such as the ToDoList class shown here: public class ToDoList { public static Target WHAT_NEEDS_TO_BE_DONE = Target.the( "'What needs to be done?' field").locatedBy("#new-todo"); public static Target ITEMS = Target.the( "List of todo items").locatedBy(".view label"); public static Target ITEMS_LEFT = Target.the( "Count of items left").locatedBy("#todo-count strong"); public static Target TOGGLE_ALL = Target.the( "Toggle all items link").locatedBy("#toggle-all"); public static Target CLEAR_COMPLETED = Target.the( "Clear completed link").locatedBy("#clear-completed"); public static Target FILTER = Target.the( "filter").locatedBy("//*[@id='filters']//a[.='{0}']"); public static Target SELECTED_FILTER = Target.the( "selected filter").locatedBy("#filters li .selected"); } Figure 5). Figure 5: Test reports show details about both tasks and UI interactions common to reuse existing tasks to build up more sophisticated business tasks in this way. A convention that we have found useful is to break from the common Java idiom and put the static creation method below the performAs() method. This is because the most valuable information inside a Task is how it is being performed rather than how it is created. Actors can ask questions about the state of the application A typical automated acceptance test has three parts: - Set up some test data and/or get the application into a known state - Perform some action - implementation, we express assertions using a flexible, fluent API quite similar to the one used for Tasks and Actions. In the test shown above, the assertion looks like this: then(james).should(seeThat(TheItems.displayed(), hasItem("Buy some milk"))); The structure of this code is illustrated in Figure 6. Figure 6: A Serenity Screenplay. Questions are rendered in human-readable form in the reports Another nice thing about the Screenplay assertions is that they appear in a very readable form in the test reports, making the intent of the test clearer and error diagnostics easier. (see Figure 8). Figure 8: Question objects are rendered in human-readable form in the test report Actors use their abilities to interact with the system Let’s see this principle in action in another test. The Todo application has a counter in the bottom left hand corner indicating the remaining number of items (see Figure 7). Figure 7: The number of remaining items is displayed in the bottom left corner of the list The test to describe and verify this behavior could look like this: @Test public void should_see_the_number_of_todos_decrease_when_an_item_is_completed() { givenThat(james).wasAbleTo(Start.withATodoListContaining( "Walk the dog", "Put out the garbage")); when(james).attemptsTo( CompleteItem.called("Walk the dog") ); then(james).should(seeThat(TheItems.leftCount(), is(1))); } The test needs to check that the number of remaining items (as indicated by the “items left” counter) is 1. The corresponding assertion is in the last line of the test: then(james).should(seeThat(TheItems.leftCount(), is(1))); The static TheItems.leftCount() method is a simple factory method that returns a new instance of the ItemsLeftCounter class, as shown here: public class TheItems { public static Question<List<String>> displayed() { return new DisplayedItems(); } public static Question<Integer> leftToDoCount() { return new ItemsLeftCounter(); } } This serves simply to make the code read in a fluent fashion. The Question object is defined by the ItemsLeftCounter class. This class has one very precise responsibility: to read the number in the remaining item count text displayed at the bottom of the todo list. Question objects are similar to Task and Action objects. However, instead of the performAs() used for Tasks and Actions, a Question class needs to implement the answeredBy(actor) method, and return a result of a specified type. The ItemsLeftCounter is configured to return an Integer. public class ItemsLeftCounter implements Question<Integer> { @Override public Integer answeredBy(Actor actor) { return Text.of(TodoCounter.ITEM_COUNT) .viewedBy(actor) .asInteger(); } } The Serenity Screenplay implementation provides a number of low-level UI interaction classes that let you query your web page in a declarative way. In the code above, the answeredBy() method uses the Text interaction class to retrieve the text of the remaining item count and to convert it to an integer. As shown previously, the location logic has been refactored into the TodoList class: public static Target ITEMS_LEFT = Target.the("Count of items left"). locatedBy("#todo-count strong"); Once again, this code works at three levels, each with distinct responsibilities: - The top level step makes an assertion about the state of the application: then(james).should(seeThat(TheItems.leftCount(), is(1))); - The ItemsLeftCounter Question class queries the state of the application and provides the result in the form expected by the assertion; - The TodoList class stores the location of web elements used by the Question class. Writing custom UI interactions The Serenity Screenplay implementation comes with a range of low-level UI interaction classes. There may be very rare cases where these don’t meet your needs. In this case, it is possible to interact directly with the WebDriver API. You do this by writing your own Action class, which is easy to do. For example, suppose we want to delete an item in the todo list, using code along the following lines: when(james).attemptsTo( DeleteAnItem.called("Walk the dog") ); Now, for reasons related to the implementation of the application, the Delete button does not accept a normal WebDriver click, and we need to invoke the JavaScript event directly. You can see the full class in the sample code, but the performAs() method of the DeleteAnItem task uses a custom Action class called JSClick to trigger the JavaScript event: @Step("{0} deletes the item '#itemName'") public <T extends Actor> void performAs(T theActor) { Target deleteButton = TodoListItem.DELETE_ITEM_BUTTON.of(itemName); theActor.attemptsTo(JSClick.on(deleteButton)); } The JSClick class is a simple implementation of the Action interface, and looks like this: public class JSClick implements Action { private final Target target; @Override @Step("{0} clicks on #target") public <T extends Actor> void performAs(T theActor) { WebElement targetElement = target.resolveFor(theActor); BrowseTheWeb.as(theActor).evaluateJavascript( "arguments[0].click()", targetElement); } public static Action on(Target target) { return instrumented(JSClick.class, target); } public JSClick(Target target) { this.target = target; } } The important code here is in the performAs() method, where we use the BrowseTheWeb class to access the actor’s Ability to use a browser. This gives full access to the Serenity WebDriver API: BrowseTheWeb.as(theActor). evaluateJavascript("arguments[0].click()", targetElement); (Note that this is a contrived example – Serenity already provides an interaction class to inject Javascript into the page as well). Page Objects become smaller and more specialized An interesting consequence of using the Screenplay pattern is that it changes the way you use and think about Page Objects. The idea of a Page Object is to encapsulate the UI-related logic that accesses or queries a web page, or a component on a web page, behind a more business-friendly API. As a concept, this is fine. But the problem with Page Objects (and with traditional Serenity step libraries, for that matter) is it can be hard to keep them well organized. They tend to grow, becoming bigger and harder to maintain as the test suite grows. This should be no surprise since such page objects violate both the Single Responsibility Principle (SRP) and Open-Closed Principle (OCP) – the ‘S’ and the ‘O’ in SOLID. Many test suites end up with complex hierarchies of Page Objects, inheriting “common” behavior such as menu bars or logout buttons from a parent Page Object, which violates the principle of favoring composition over inheritance. New tests typically need modifications to existing Page Object classes, introducing the risk of bugs. When you use the Screenplay Pattern, your Page Objects tend to become smaller and more focused, with a clearly defined mandate of locating elements for a particular component on the screen. Once written, they tend to remain unchanged unless the underlying web interface changes. BDD Style scenarios are not mandatory Some people writing acceptance tests in an xUnit framework may not like the Given/When/Then style of writing scenarios. These methods are there purely for readability, helping you make your intent explicit by expressing where you arrange (given), act (when) and assert (then). Not everyone likes this style, and so you are not restricted to it. As an alternative you can write: james.wasAbleTo(Start.withAnEmptyTodoList()); james.attemptsTo(AddATodoItem.called("Buy some milk")); james.should(seeThat(toDoItems, hasItem("Buy some milk"))); The intent is implicit in the ‘wasAbleTo’, ‘attemptsTo’ and ‘should’ methods, however we believe that making our intent explicit will benefit us and anyone else who will read our code later, and so we would recommend using the built in givenThat(), when(), then() methods. If you are using this approach in Cucumber, you can leave out the Given/When/Then methods as the intent is generally explicit in the Cucumber step definitions. Conclusion The Screenplay Pattern is an approach to writing automated acceptance tests founded on good software engineering principles that makes it easier to write clean, readable, scalable, and highly maintainable test code. It is one possibile outcome of the merciless refactoring of the Page Object pattern towards SOLID principles. The new support for the Screenplay Pattern in Serenity BDD opens a number of exciting possibilities. In particular: - The declarative writing style encouraged by the Screenplay Pattern makes it much simpler to write code that is easier to understand and maintain; - Task, Action and Question classes tend to be more flexible, reusable and readable than traditional Serenity step methods; - Separating the abilities of an actor adds a great deal of flexibility. For example, it is very easy to write a test with several actors using different browser instances. Like many good software development practices, the Screenplay Pattern takes some discipline to start with. Some care is initially required to design a readable, DSL-like API made up of well-organised tasks, actions and questions. However, the benefits become apparent quite quickly when the test suite scales, and these libraries of reusable components help accelerate the test writing process to a sustainable rate, reducing the friction usually associated with the ongoing maintenance of automated test suites. Further reading This is just an introduction to the Screenplay Pattern and its implementation in Serenity. The best way to learn more is to study working code. The source code for the sample project can be found on Github. References - Designing Usable Apps: An agile approach to User Experience Design by Kevin Matz - “A BIT OF UCD FOR BDD & ATDD: GOALS -> TASKS -> ACTIONS” – Antony Marcano, - “A journey beyond the page object pattern” - Antony Marcano, Jan Molak, Kostas Mamalis - JNarrate: The original reference implementation of the Screenplay Pattern - The Command Pattern: “Design Patterns: Elements of Reusable Object-Oriented Software” - Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides - TodoMVC: The Todo application tested in this article About the Authors John Ferguson Smart. LinkedIn, Github, Website Antony Marcano is best known in the community for his thinking on BDD, user stories, testing and writing fluent APIs & DSLs in Ruby, Java and more. He has 16+ years of experience on agile projects and transformations of all shapes and sizes, spending as much of his time as a practitioner as he does as a coach or trainer. He shares his experiences with the community in various ways, including as a contributor to books such as Agile Coaching and Agile Testing with references in Bridging the Communication Gap and Software Craftsmanship Apprenticeship Patterns. His work in the community continues as a regular speaker on agile development at international conferences and as a regular guest speaker at Oxford University. Andy Palmer is co-creator of the ground-breaking PairWith.us screencasts, regular speaker at international conferences; Andy has shortened the delivery cycles of projects across numerous organisations with his expertise in taking complex technology problems and simplifying them to their essence. Andy's expertise in this area has enabled 5 fold reductions in cycle times on large scale projects. With over 15 years of experience, including significant periods as a team and management coach, a developer and a system administrator, Andy has been bridging the communication gap since before DevOps had a name.. Community comments Is this worth the effort? by brian dsouza / Re: Is this worth the effort? by Fadzlan Yahya / Re: Is this worth the effort? by brian dsouza / Is this worth the effort? by brian dsouza / Your message is awaiting moderation. Thank you for participating in the discussion. Hi All, Think its a noble initiative to build out this test approach but is it really worth the effort? Specifications expressed by the user of a system are the ultimate means to understand if a system works for the user. We then need a quick simple means to convert these into an executable format. The act of conversion must be simple and fast and even (in at least 50 -70% of the scenarios) done by the users themselves with little or no knowledge of the test target. In other words those complexities must be abstracted from the tester. There are a bunch of tools available that will do exactly that. Write in Gherkin and convert to executable format with no knowledge of the complexities of the test targets as well as the ability to test across disparate test targets ( mobile, web, API) within the same test ( the ultimate user test) . In all of this, there is no developer or code or framework to be built and the promise of in sprint testing is absolutely achievable. Using the approach I mentioned above, these tests are written before the developer finishes coding and the developer cannot check in their code unless the tests pass. And in all of this, the developer had zero work to instruments tests to satisfy the gherkin statements. Why would we be asking people to invest time and money to build out a framework for testing when we have existing tools available that can be used by non-programmers? Thanks Brian Dsouza Re: Is this worth the effort? by Fadzlan Yahya / Your message is awaiting moderation. Thank you for participating in the discussion. I'll try to answer. I think the promise of page objects or any of the improvements basically are acknowledging that functional test are much harder to maintain than to write one. Re: Is this worth the effort? by brian dsouza / Your message is awaiting moderation. Thank you for participating in the discussion. I could not agree with you more. Its the trap all automation testers fall into. But what is required is that you absolutely apply software design principles to your tests irrespective of whether they are written in Java or as a script. Separation of concern, cohesion, least coupling are critical. Test organization is also important and must match the product feature set. Your test design must ensure that you write once and reuse many times. Test data must be abstracted from the test so that the test can be used in different data contexts without any change to the test itself. Tests must be able to be executed in various contexts (scenarios) and behave differently depending upon the same. So a validating script in one context can become a navigation script in another context ( write once, use multiple times). All of this is absolutely achievable as long as a proper process is followed. Whether you are writing a test or a piece of code, the lack of coding standards and guidelines, or the failure to have a design step will potentially result in an unmanageable artifact. Test design must follow the same rigor as good code design but it does not need programmers to instrument code. Write maintainable artifacts and the cost(time) of maintenance will be low. This is the holy grail as we move towards faster ways to deliver product. Having a product person express their need in a test allows for outcome testing well before the code is fully written and provides very early feedback on usability etc. Having developers create interfaces and mock objects early helps this process and this is time and money well spent as we are unearthing the true intentions of the users and are able to tailor our solution to truly meet their need. My point is that for all of this, we do not need high-end coders to be involved in instrumenting test code. I have done this multiple times and proved that this approach(testers write tests and developers write code) is an excellent marriage of a development mindset and a QA mindset, which however much people may argue that they are not, are different.
https://www.infoq.com/articles/Beyond-Page-Objects-Test-Automation-Serenity-Screenplay/?itm_source=articles_about_Behavior-Driven-Development&itm_medium=link&itm_campaign=Behavior-Driven-Development
CC-MAIN-2020-24
refinedweb
4,517
50.87
why do ppl use pointers> can they just acces cant other then directly? why do ppl use pointers> can they just acces cant other then directly? I think the most common use is to make a function affect more than one variable. Say you want to change the X and Y position of something on the screen. Your function could return one value to be assigned to either X or Y, but not both. You can pass-in as many pointers as you wish. So your function could directly change both X and Y. what the hell is going on, i replied to the same thread as this!!!!, the id's have change! what the hell is going on? Be a leader and not a follower. Ummm... Also, when you pass a variable to a function, you're NOT passing the variable. You are passing the VALUE of the variable. In fact this is called "passing by value". Often the variable name is even different inside the function. (This can be confusing for beginners.) fine everyone, just ignore me! deltabird - did i not reply to your thread, and you said thankyou! I need some evidence.... Be a leader and not a follower. illustration... Code:#include <iostream> using namespace std; void swap_incorrect(int var1, int var2) { int temp; temp = var1; var1 = var2; var2 = temp; } void swap_correct(int *var1, int *var2) { int temp; temp = *var1; *var1 = *var2; *var2 = temp; } int main() { int var1 = 5; int var2 = 10; cout << "default : " << var1 << ' ' << var2; swap_incorrect(var1, var2); cout << "\nswap incorrectly: " << var1 << ' ' << var2; swap_correct(&var1, &var2); cout << "\nswap correctly: " << var1 << ' ' << var2; cout << endl << endl; return 0; } I am against the teaching of evolution in schools. I am also against widespread literacy and the refrigeration of food. I saw a similar question, and your answer. That thread is gone now (deleted I presume). oh well..I saw a similar question, and your answer. That thread is gone now (deleted I presume). oh well..Originally posted by subdene fine everyone, just ignore me! deltabird - did i not reply to your thread, and you said thankyou! I need some evidence.... When all else fails, read the instructions. If you're posting code, use code tags: [code] /* insert code here */ [/code] ahh, thank you! thought i wasn't going mad. wonder why they deleted it? anyways.......... Be a leader and not a follower. Along with the reasons given above, pointers are necessary for runtime polymorphism, i.e. a combo with virtual functions. Also necessary for arrays as they technically cannot be passed to a function as a whole, only the address is passed. Thats much more efficient. And using pointer arithmetic to access array elements is executed faster than indexing an array. [I dont know the details about that fact, but it just is] Almost forgot. Strings - pointers are a must for working with strings. Frankly, I dont know how Java-ers manage. I AM WINNER!!!1!111oneoneomne
https://cboard.cprogramming.com/cplusplus-programming/33325-pointer.html
CC-MAIN-2017-30
refinedweb
488
77.23
Ecto 2 Many to Many Associations One of the features introduced in Ecto 2.0 is the many-to-many relationship; a model-level abstraction over a database's join table. Documentation for linking two models together is scattered about, such as here, here, and here, but it took me a while to piece together the complete picture of how to use this in an app. In this blog post, I want to bring that documentation into one place and overview all of the steps needed to implement many_to_many in a typical Phoenix app. For the code examples, I'll stick to this blog's own source code and discuss how posts are linked to tags. Step 1: Migrations The first thing you'll need to do is add the appropriate tables to your database. That means a table for each of the models, and a join table, which you need to create manually: def change do create table(:posts_tags) do add :post_id, :integer add :tag_id, :integer end end Step 2: Models Next, you'll need to update the schemas in both models: schema "posts" do # existing post schema... many_to_many :tags, AdamczDotCom.Tag, join_through: "posts_tags", on_replace: :delete end schema "tags" do # existing tag schema... many_to_many :posts, AdamczDotCom.Post, join_through: "post_tags" end join_through lets Ecto know where to save the associated ids, and on_replace lets Ecto know what to do when associations that previously existed are not included on a changeset. Note that :delete needs to be used carefully, as it deletes any row not present in an updated changeset. Attempting to remove an association without on_replace raises an error, but it can be omitted if you know you won't do that (e.g., I don't update posts at all via tags so I didn't include it on the tags schema). Finally, cast the association in your changeset: def changeset(post, params \\ %{}) do post |> cast(params, [:title, etc...]) |> cast_assoc(:tags) Step 3: Controller There's a fair amount of work to be done in the controller. The simple case is loading existing data for the index or show action, so I'll present that first, useless as it may be when you don't yet have any associated data to show off. def index(conn, _params) do posts = Repo.all(Post) |> Repo.preload(:tags) render conn, "index.html", posts: posts end In order to render a blank form in the new action, you need to include an empty tags list. This is needed if your new and edit templates share the same form partial, as mine do. Also, you'll want to pull the full list of existing tags in order to present them as options on the form: def new(conn, _params) do changeset = Post.changeset(%Post{tags: []}) tags = Repo.all(Tag) render conn, "new.html", changeset: changeset, tags: tags end The update action is where things got difficult for me. I haven't figured an ideal way to pass nested tags params from the form, so I'm doing a bit of extra work to process the tags params that arrive separately. If anyone knows how to omit this step, please reach out to me! After wrangling the form data into a tags changeset, we start out with the default posts changesest, and smash the two together with Changeset.put_assoc: def update(conn, %{"slug" => slug, "post" => post_params, "tags" => tags}) do tags_to_associate = tag_changeset(tags) post = Repo.get_by(Post, slug: slug) |> Repo.preload(:tags) changeset = Post.changeset(post, post_params) |> Ecto.Changeset.put_assoc(:tags, tags_to_associate) case Repo.update(changeset) do # boilerplate after this... end defp tag_changeset(tags) do tags_to_list(tags) # converts the form params to a list of integers |> get_tag_structs # grabs the appropriate tags from the database |> Enum.map(&Ecto.Changeset.change/1) # wraps those structs in a changeset end defp tags_to_list(tags) do Enum.filter_map(tags, fn(tag) -> String.to_atom(elem(tag, 1)) == true end, fn(tag) -> String.to_integer(elem(tag, 0)) end) end defp get_tag_structs(tag_list) do Tag.by_id_list(tag_list) # model code here is: where([t], t.id in ^ids) |> Repo.all end Step 4: Views If you're using a form partial like I am, it won't magically have access to tags. You'll need to explicitly pass them like this: <%= render "form.html", changeset: @changeset, tags: @tags, action: blog_path(@conn, :create) %> Then in the form, render *all* tags, and for each one, check to see if the current changeset says it should default to checked. <%= for tag <- @tags do %> <% checked = AdamczDotCom.BlogView.is_checked(tag, @changeset) %> <%= checkbox :tags, "#{tag.id}", value: tag.name, checked: checked %> <%= tag.name %> <% end %> And that’s about it. I hope this is helpful, and please reach out to me with any corrections or suggestions. Thanks for reading, -Adam
https://www.adamcz.com/blog/ecto-2-many-to-many-associations
CC-MAIN-2022-05
refinedweb
788
66.03
The XML Instance Gamut If you happen to be in the business of writing software serving XML documents or consuming XML documents - and if you read this post, then there is a fair chance you are - then there is always one big challenge: how do you make sure your service or client is capable of dealing with all of the XML documents you could possibly expect to be passed around? And if you happen to come from the test-driven world, the answer is obviously: by testing it. However, if you try to do that, things might be harder than you expect at first. What about schemas? I clearly remember having to integrate with Google's Local Search Service. We managed to get them send us their schema, but the schema was merely illustrative, rather than normative. In fact, it didn't even 'parse' correctly. It was supposed to be a DTD, but in reality, it wasn't. In that case, you are basically lost. The only thing that you can really do is 'test by poking around', trying to see what the web service is going to reply, and then work into your test harness. If you do however manage to get a schema, then you are still not done yet. Sure, if it's about SOAP based web services, then you might be able to generate stubs and skeletons, and those stubs and skeletons would give you some guarantee that you are covering most cases. But then there is still a chance that you would not cover for all cases, since - inside your XML document - there might be alternatives for content models, and you might - when you would implement your service - only be dealing with one of them. If the schema is small, then you can probably figure it out by careful examination. However, if the schema is huge, then the range and variety of XML document instances that you might get will make that impossible. And even if you created the schema yourself, it might sometimes cover for a wider range of options than you expected. (I'm sure, I am not the only one who experienced this. ;-)) XML Instance Generator to the rescue So, back to test-driven. The good news is, there are tools that take a schema, and generate random instances, basically walking all of the different options. Xmlgen is one of those tools. It's a little bit hard to find these days. If you follow the 'XML Instance Generator' link on Kohsuke's homepage, you will end up in no-mans land. I dug a little further, and found out it's currently hosted at Sun's dev.java.net. Xmlgen is extremely simple. It takes a schema (any schema language), and will generate any number of sample documents from that. It's exactly what you want, except… It doesn't support all datatypes defined by the XML Schema Datatypes specification. And that's something I ran into more often before. In fact, I tried to use xmlgen before on a couple of occasions, and each time it broke on missing support for xs:dateTime or xs:pattern restrictions. And there doesn't seem to be an aweful lot of work going into xmlgen to fix that. Fixing XML Instance Generator So I figured I'd fix this myself. It turned out adding support for dateTime wasn't all that hard, even though xmlgen does not really have extensions points to implement, so you're basically left with a) hacking the source code big time, or b) hacking it just a little, in order to add plugpoints and then have something else implementing that plugpoint - which is what I did. Whoops, xs:pattern Adding support for xs:pattern turned out to be a little tricky. If you are new to this type of restriction, then you should know that it is about restricting content to fit a certain regular expression, as illustrated below. <simpleType name='better-us-zipcode'> <restriction base='string'> <pattern value='[0-9]{5}(-[0-9]{4})?'/> </restriction> </simpleType> Now, if you would have the desire to generate valid data for this restriction, then you should be able to generate text from that regular expression. It turns out there are quite a few Java libraries out there capable of matching text, but there nothing at all for generating text. So I implemented my own. I blogged about it here, and it is hosted here. Once that was done, extending xmlgen to have support for xs:pattern restrictions was easy. That means that - with just a few changes - I am now able to generate a test set for a fairly complicated schema. And I'm pretty sure that it will cover all cases, as long as I make the number of instance documents big enough. So, now for a restriction like this: <xsd:simpleType <xsd:restriction <xsd:pattern </xsd:restriction> </xsd:simpleType> … it will generate instances like this: - 07:36 - 10:16:26 - etc. You can download the modified version of xmlgen here. Does Databene Benerator fits your need ? There are also other tools (I didn't check them all, some are dead): Did you contact Kohsuke ? Or plan to share your dev more explicitly 🙂 Regards Bruno I absolutely will. I just wanted to get this out as quickly as possible, since we were depending on it. When search for a regular expression / data generator framework you properly never heard of mine testdata-generator. It can be used to create Proxy objects that create randomly filled test objects. But it also contains a regular expression generator, as you describe. Creating strings that match a regular expression. Another area that it is having problems with even with your update is the "language" data type. Always says unable to handle datatype when it encounters one of these. Good point. I will see when I have time to move my code to Github. When that's done, adding support for language would be a breeze. You basically would have to implement this interface: package com.sun.msv.generator; import org.relaxng.datatype.Datatype; public interface SimpleTypeGenerator { <T extends DataType> String generate(T type, ContextProviderImpl context); }
http://blog.xebia.com/the-xml-instance-gamut/
CC-MAIN-2017-30
refinedweb
1,031
62.88
In this brief snippet of fun and silliness, I am going to quickly go over how to customize the code that Visual Studio makes for you when you create a new item to add to your project. It may be that many people already know how to do this, or it may be that not many people care, but I am a self confessed fussy-pants when it comes to code aesthetics and so I'm just going to publish this, should anyone come across it and find it useful! :-) I really like Visual Studio (as a code making environment, not as a web page builder!). It has loads of great tools and tricks built in to help you make better code, faster. One of the things it does is provide code skeletons for you when you create new items, such as new web forms, new user controls and so on. Now, if you are anything like me, you may like your code formatted and ordered in a particular way. For example, I personally like curly brackets to appear on the same line as the method declaration/ if statement and so on. Also, I don't like the fact that when making a Winform, the code initially generated has comments in places where I don't want them. Perhaps more importantly than your particular stylistic preferences, your company may prefer to use a particular code template and stylistic conventions when starting a new source file. Prior to using the technique that I will talk about in a moment, the first thing I would have to do after adding a new item, would be to delete and rearrange much of the code that VS had just generated. Not exactly a huge problem as such, but it does become a tad tedious after a while. The actual solution to this little problem is actually very straightforward. As you probably guessed, Visual Studio uses templates in order to create the generated code. Given that the template files are written in plain text, all we need to do is manipulate the files until we're happy with what the template will produce. In the particular example that I'm going to go over, we will use C# as our weapon of choice and we will change the default code generated when adding a new Winform, to something more appealing (to me!). The necessary template files are stored according to the language that they correspond to, and the function the template is to carry out. For example, the template used for creating a new WinForm is stored at: C:\<installation root>\Microsoft Visual Studio .NET\VC#\VC#Wizards\CSharpAddWinFormWiz\Templates\1033\ The folder C:\<installation root>\Microsoft Visual Studio .NET\VC#\VC#Wizards contains numerous folders corresponding to all the C# wizards that Visual Studio knows of. Each wizard folder has a number of bits and bobs in it, but the thing we're interested in today sits in the Templates folder. The actual template file that we're going to change is: NewWinForm.cs. Note: Before you go poking around and changing files such as these ones, make sure you have a backup in a safe place incase you bugger something up! I tend to just change the extension of the files I'm changing to .old. Also note, that anything you do to your own installation is entirely at your own risk. If you aren't confident that you know what you're doing, then don't touch anything and go hide under your blanket until you feel better. Having said that, this isn't rocket science by any measure so just take the usual precautions and everything will be fine. Now that you've made a backup of the template file, open the file in your favorite text editor and you'll see: using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; namespace [!output SAFE_NAMESPACE_NAME] { /// <SUMMARY> /// Summary description for [!output SAFE_CLASS_NAME]. /// </SUMMARY> public class [!output SAFE_CLASS_NAME] : System.Windows.Forms.Form { /// <SUMMARY> /// Required designer variable. /// </SUMMARY> private System.ComponentModel.Container components = null; public [!output SAFE_CLASS_NAME]() { // //.components = new System.ComponentModel.Container(); this.Size = new System.Drawing.Size(300,300); this.Text = "[!output SAFE_CLASS_NAME]"; } #endregion } } One thing that you'll notice is the fact that the contents of the template file is very similar to the final result that appears in Visual Studio. In fact, given that the file type is still listed as .cs, you can open it in Visual Studio and still receive all that nice syntax highlighting just as normal. If you have ever had to do a mail merge in a word processor then you'll be able to see immediately what's going on. The code listed above is exactly the same code that VS makes for you, apart from the fact that there are various markers placed strategically throughout the code. Markers such as [!output SAFE_CLASS_NAME] are used by VS at runtime in order to plop dynamic information into the generated code on the fly. Given that the templates used by Visual Studio are so simple, it is very easy to change the template in anyway we want. As I mentioned before, I don't like the comments that are inserted by default and I prefer opening curly brackets to be placed on the same line as the method or conditional statement (e.g. if, while, for). Also, when making a new WinForm I like to place the auto generated code into a region out of my way. You or your company may have a convention by which certain information is always placed at the top of source files - for example revision details. Lets have a quick look at how that might look: /* File Created by: mushentgrumbble Date: 04/02/1866 Copyright Notice: Class Description: Notes: Revision Log - Please mark significant changes in source code in the following format: Date - Time - Reviewer - Comments 11/11/03 - 2.34pm - Rebecca White - Bug #457 Fixed - Code released to testing */ using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; namespace [!output SAFE_NAMESPACE_NAME]{ /// /// /// public class [!output SAFE_CLASS_NAME] : System.Windows.Forms.Form{ private System.ComponentModel.Container components = null; #region Private Variables #endregion #region Properties #endregion public [!output SAFE_CLASS_NAME](){ InitializeComponent(); } #region Auto-generated code private void InitializeComponent(){ this.components = new System.ComponentModel.Container(); this.Size = new System.Drawing.Size(300,300); this.Text = "[!output SAFE_CLASS_NAME]"; } protected override void Dispose( bool disposing ){ if( disposing ){ if(components != null){ components.Dispose(); } } base.Dispose( disposing ); } #endregion #region Event Handlers #endregion } } As is hopefully reasonably clear, my version of the code is functionally identical although I have removed, moved and added various pieces of code to suit my preferences. The image below is how the code looks in Visual Studio. The differences are subtle but if you're anything like me, subtle things are sometimes just enough to bug you into action! So there you have it - a really easy way to change the auto-generated code that Visual Studio makes. As I understand it, you should be able to do this to the other installed languages - just poke around and see what you find. I actually didn't know that this could be done initially. One day I just got fed up with the way that VS was formatting the code and so I figured that there must be some sort of template system in action. I went poking about under the Visual C# directory and that's where I found the VC#Wizards folder. I'd be interested to know if anyone else has any similar tips regarding what goes on in Visual Studio. This is my first article for the CodeProject that I've actually published and I am a bit worried about what people will think of it. CodeProject is great and there are so many talented people sharing their knowledge that I hope my writing doesn't suck too much! However, in the event that it does, feel free to leave your comments and I'll do my best to respond to any issues that are raised. I am always open to new ideas on how to improve articles, so if anyone has any suggestions on how I can make them better then get in touch! Very first draft - first Code Project tutorial! Go easy on me guys! :-) General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/dotnet/vs_templates.aspx
crawl-002
refinedweb
1,402
63.09
<mrow>s <mi> <mn> <mn>alone <mo> <mo>elements <mtext> <mspace/> <ms> <mglyph/> action> This, &InvisiblePlus;, [Entities]. It is strongly recommended that, before reading the present chapter, one read Section 2.1 MathML Syntax and Grammar on MathML syntax and grammar, which contains important information on MathML notations and conventions. In particular, in this chapter it is assumed that the reader has an understanding of basic XML terminology described in Section 2.1.3 Children versus Arguments, and the attribute value notations and conventions described in Section 2.1.9 Summary of Presentation Elements. The content elements are the MathML elements defined in Chapter 4 Content Markup..3.2 Content Markuprow>s The elements listed in the following table as requiring 1* argument ( msqrt, mstyle, merror, menclose, mpadded, mphantom, mtd, and math) conceptually accept a single argument, but actually accept any number of children. If the number of children is 0, or is more than 1, they treat their contents as a single inferred mrow formed from all their children, and treat this mrow as the argument. Although the math element is not a presentation element, it is listed below for completeness. For example, <mtd> </mtd> is treated as if it were <mtd> <mrow> </mrow> </mtd> and is treated as if it were This feature allows MathML data not to contain (and its authors to leave out) many mrow elements that would otherwise be necessary.. In the notations familiar to most readers, both the overall layout and the textual symbols are arranged from left to right (LTR). Yet, as alluded to in the introduction, mathematics written in Hebrew, or in locales such as Morocco or Persia, the overall layout is used unchanged, but the embedded symbols (often Hebrew or Arabic) are written right to left (RTL). Moreover, in most of the Arabic speaking world, the notation is arranged entirely RTL; thus a superscript is still raised, but it follows the base on the left, rather than the right. MathML 3.0 therefore recognizes two distinct directionalities: the directionality of the text and symbols within token elements, and the overall directionality represented by Layout Schemata. These two facets are dicussed below. The overall directionality for a formula, basically the direction of the Layout Schemata, is specified by the dir attribute on the containing math element (see Section 2.2 The Top-Level math Element). The default is ltr. When dir='rtl' is used, the layout is simply the mirror image of the conventional European layout. That is, shifts up or down are unchanged, but the progression in laying out is from right to left. the container. The text directionality comes into play for the MathML token elements that can contain text ( mtext, mo, mi, mn and ms), and is determined by the Unicode properties of that text. A token element containing exclusively LTR or RTL characters is displayed straightforwardly in the given direction. When a mixture of directions is involved used, such as RTL Arabic and LTR numbers, the Unicode bidirectional algorithm [Bidi] is applied. This algorithm specifies how runs of characters with the same direction are processed and how the runs are (re)ordered. The base, or initial, direction is given by the overall directionality described above (Section 3.1.5.1 Overall Directionality of Mathematics Formulas), and affects how weakly directional characters are treated and how runs are nested. ')'. Conversely, the solidus (/ U+002F), is not marked as mirrored. Thus, an Arabic author that desires the slash to be reversed in an inline division should explicitly use reverse solidus (\ U+005C), or an alternative such as the mirroring DIVISION SLASH (U+2215). Additionally, caligraphic scripts such as Arabic blend, or connect, sequences of characters together, changing their appearance. As this can have an significant impact on readability, as well as aesthetics, it is important to apply such shaping if possible. Glyph shaping, like directionality, applies to each token element's contents individually. Please note that for the transfinite cardinals represented by Hebrew characters, the codepoints U+2135-U+2138 (ALEF SYMBOL, BET SYMBOL, GIMEL SYMBOL, DALET SYMBOL) should be used. These are strong left-to-right. So-called `displayed' formula, those appearing on a line by themselves, typically make more generous use of vertical space than inline formula which should blend into the adjacent text without intruding into neighboring lines. For example, in a displayed summation, the limits are placed above and below the summation symbol, while when it appears inline the limits would appear in the sub and superscript position. For similar reasons, sub- and superscripts, nested fractions and other constructs typically display in a smaller size than the main part of the formula. MathML implicitly associates with every presentation node a displaystyle and scriptlevel reflecting whether a more expansive vertical layout applies and the level of scripting in the current context. These values are initialized by the math element according to the display attribute. They are automatically adjusted by the various script and limit schemata elements, and the elements mfrac, and mroot, which typically set displaystyle false and increment scriptlevel for some or all of their arguments. (See the description for each element for the specific rules used.) They also may be set explicitly via the displaystyle and scriptlevel attributes on the mstyle element, or the displaystyle attribute of mtable. In all other cases, they are inherited from the node's parent. The displaystyle affects the amount of vertical space used to lay out a formula: when true, the more spacious layout of displayed equations is used, whereas when false a more compact layout of inline formula is used. This primarily affects the interpretation of the largeop and movablelimits attributes of the mo element. However, more sophisticated renderers are free to use this attribute to render more or less compactly. The main effect of scriptlevel is to control the font size. Typically, the higher the scriptlevel, the smaller the font size. (Non-visual renderers can respond to the font size in an analogous way for their medium.) Whenever the scriptlevel is changed, whether automatically or explicitly, the current font size is multiplied by the value of scriptsizemultiplier to the power of the change in scriptlevel. However, changes to the font size due to scriptlevel changes should never reduce the size below scriptminsize, to prevent scripts becoming unreadably small. The default scriptsizemultiplier is approximately the square root of 1/2, whereas scriptminsize defaults to 8 points; these values may be changed on mstyle; see Section 3.3.4 Style Change <mstyle>. Note that the scriptlevel attribute of mstyle allows arbitrary values of scriptlevel to be obtained, including negative values which result in increased font sizes. The changes to the font size due to scriptlevel should be viewed as being imposed from `outside' the node. This means that the effect of scriptlevel is applied before an explicit mathsize (See Section 3.2.2 Mathematics style attributes common to token elements) on a token child of mfrac. Thus, the mathsize effectively overrides the effect of scriptlevel. However, that change to scriptlevel changes the current font size, which affects the meaning of an "em" length (See Section 2.1.5.2 Length Valued Attributes), and so the scriptlevel still may have an effect in such cases. Note also that since mathsize is not constrained by scriptminsize, such direct changes to font size can result in scripts smaller than scriptminsize. Note that direct changes to current font size, whether by CSS or by the mathsize attribute (See Section 3.2.2 Mathematics style attributes common to token elements), have no effect on the value of scriptlevel. TEX's \displaystyle, \textstyle, \scriptstyle, and \scriptscriptstyle correspond to displaystyle and scriptlevel as "true" and "0", "false" and "0", "false" and "1", and "false" and "2", respectively. Thus, math's display="block" correponds to \displaystyle, while display="inline" correponds to \textstyle. MathML provides support for both automatic and manual (forced) linebreaking of expressions, to break excessively long expressions into several lines. All such linebreaks take place within mrow (including inferred mrow; See Section 3.1.3.1 Inferred <mrow>s), or mfenced. The breaks themselves take place at operators ( mo), and also, for backwards compatibility, at mspace. Automatic linebreaking occurs when the containing math element has overflow="linebreak" and the display engine determines that there is not enough space available to display the entire formula. The available width must therefore be known to the renderer. Like font properties, one is assumed to be inherited from the environment in which the MathML element lives. If no width can be determined, an infinite width should be assumed. Inside of a mtable, each column has some width. This width may be specified as an attribute or determined by the contents. This width should be used as the linewrapping width for linebreaking, and each entry in an mtable is linewrapped as needed. Forced linebreaks are specified by using linebreak="newline" on a mo or mspace element. Both automatic and manual linebreaking can occur within the same formula. Automatic linebreaking of subexpressions of mfrac, msqrt, mroot and menclose and the various script elements is not required. Renderers are free to ignore forced breaks within those elements if they choose. Attributes on mo and possibily on mspace elements control linebreaking and indentation of the following line. The aspects of linebreaking that can be controlled are: Where — attributes determine the desirability of a linebreak at a specific operator or space, in particular whether a break is required or inhibited. These can only be set on mo and mspace elements. (See Section 3.2.5.2.2 Linebreaking attributes) Operator Display/Position — when a linebreak occurs, determines whether the operator will appear at the end of the line, at the beginning of the next line, or in both positions; and how much vertical space should be added after the linebreak. These attributes can be set on mo elements or inherited from mstyle or math elements. (See Section 3.2.5.2.2 Linebreaking attributes) Indentation — determines the indentation of the line following a linebreak, including indenting so that the next line aligns with some point in a previous line. These attributes can be set on mo and mspace elements or inherited from mstyle or math elements. (See Section 3.2.5.2.3 Indentation attributes) One method of linebreaking that works reasonably well is sometimes referred to as a "best-fit" algorithm. It works by computing a "penalty" for each potential break point on a line. The break point with the smallest penalty is chosen and the algorithm then works on the next line. Three useful factors in a penalty calculation are: How much of the line width (after subtracting of the indent) is unused? The more unused, the higher the penalty. How deeply nested is the breakpoint in the expression tree? The expression tree's depth is roughly similar to the nesting depth of mrows. The more deeply nested the break point, the higher the penalty. If the next line is not the last line, and if the indentingstyle uses information about the linebreak point to determine how much to indent, then the amount of room left for linebreaking on the next line (ie, linebreaks that leave very little room to draw the next line result in a higher penalty). Whether "linebreak" has been specified: "nobreak" effectively sets the penalty to infinity, "badbreak" increases the penalty, "goodbreak" decreases the penalty, and "newline" effectively sets the penalty to 0. This algorithm takes time proportional to the number of tokens elements times the number of lines. Several elements and attributes of MathML are expressly designed to support fine-tuning of presentation for use-cases that wish to exert precise control of the layout and presentation of math. However, given the variability in MathML agents, the variability of the fonts available on different platforms, and particularly given the freedom given to agents to layout the mathematics according to their own requirements (See Section 3.1 Introduction), it must be pointed out that such fine-tuning can often lead to a lack of portability. Specifically, the overuse of these controls may yeild a `perfect' layout on one platform, but give much worse presentation on others. The following sections clarify the kinds of problems that can occur. <maligngroup>, <malignmark>). Consider using the mglyph element for cases such as this. If such spacing constructs are used in spite of this warning, they should be enclosed in a semantics element that also provides an additional MathML expression that can be interpreted in a standard way. See Section 5.1 Semantic Annotations for further discussion. The above warning also applies to most uses of rendering attributes to alter the meaning conveyed by an expression, with the exception of attributes on mi (such as mathvariant) used to distinguish one variable from another., mglyph and msline. The width of these elemnts depend upon their attribute values. MathML characters can be either represented directly as Unicode character data, or indirectly via numeric or character entity references. See Chapter 7 Characters, Entities and Fonts for a discussion of the advantages and disadvantages of numeric character references versus entity references, and [Entities] for a full list of the entity names available. New mathematical <maligngroup>, <malignmark> for details. Token elements (other than mspace, mglyph and msline) contains more than nine hundred Math Alphanumeric Symbol characters corresponding to letter-like symbols. These characters are in the Secondary Multilingual Plane (SMP). See [Entities] for more information. As valid Unicode data, these characters are permitted in MathML, and as tools and fonts for them become widely available, we anticipate they will be the predominant way of denoting letter-like symbols. MathML also provides an alternative encoding for these characters using only Basic Multilingual Plane (BMP) characters together with markup. MathML. This is particularly important for applications that support searching and/or equality testing. The next section discusses the mathvariant attribute in more detail, and a complete technical description of the corresponding characters is given in Section 7.5 Mathematical Alphanumeric Symbols. MathML includes four mathematics style attributes. These attributes are valid on all presentation token elements, and on no other elements except mstyle. The attributes are: 6.5 Using CSS with MathML for discussion of the interaction of MathML and CSS. Also, see [MathMLforCSS] for discussion of rendering MathML by CSS and a sample CSS style sheet. When CSS is not available, it is up to the internal style mechanism of the rendering application to visually distinguish the different logical classes. Most MathML renderers will probably want to rely on some degree to additional, internal style processing algorithms. In particular, the mathvariant attribute does not follow the CSS inheritance model; the default value is "normal" (non-slanted) for all tokens except for mi with single-character content. See Section 3.2.3 Identifier <mi> for details.., but see Section 6.5 Using CSS with MathML for caveats. Token elements also accept the attributes listed in Section 2.1.6 Attributes Shared by all MathML Elements. doesn't specify the mechanism by which style information is inherited from the rendering environment. If the requested mathsize of the current font is not available, the renderer should approximate it in the manner likely to lead to the most intelligible, highest quality rendering. Note that many MathML elements automatically change the font size in some of their children; see the discussion in Section 3.1.6 Displaystyle and Scriptlevel. The MathML 1.01 style attributes listed below are deprecated in MathML 2 and 3. These attributes were aligned to CSS, but in rendering environments that support CSS, it is preferable to use CSS directly to control the rendering properties corresponding to these attributes, rather than the attributes themselves. However as explained above, direct manipulation of these rendering properties by whatever means should usually be avoided. As a general rule, whenever there is a conflict between these deprecated attributes and the corresponding attributes (Section 3.2.2 Mathematics style attributes common to token elements), the former attributes should be ignored. The deprecated attributes are: ).: Note that the deprecated fontstyle attribute defaults in the same way as mathvariant, depending on the content. Note that for purposes of determining equivalences of Math Alphanumeric Symbol characters (See Section 7.5 Mathematical Alphanumeric Symbols and Section 3.2.1.1 Alphanumeric symbol characters) the value of the mathvariant attribute should be resolved first, including the special defaulting behavior described above. character U+2061 (which also has the entity names ⁡ and ⁡) as shown below; see also the discussion of invisible operators in Section 3.2.5 Operator, Fence, Separator or Accent <mo>.: <mn>). mn elements are typically rendered in an unslanted font.. <mn>alone Many mathematical numbers should be represented using presentation elements other than mn alone; this includes complex numbers, ratios of numbers shown as fractions, and names of numeric constants. Examples of MathML representations of such numbers include: . We will use the term "operator" in this chapter to refer to operators in this broad sense. Typical graphical renderers show all mo elements as the characters of their content, with additional spacing around the element determined by its attributes and further described below. <=. Operators, in the general sense used here,. Note also that linebreaking, as discussed in Section 3.1.7 Linebreaking of Expressions, usually takes place at operators (either before or after, depending on local conventions). Thus, mo accepts attributes to encode the desirability of breaking at a particular operator, as well as attributes describing the treatment of the operator and indentation in case the a linebreak is made at that operator. mo elements accept the attributes listed in Section 3.2.2 Mathematics style attributes common to token elements and the additional attributes listed here. Since the display of operators is so critical in mathematics, the mo element accepts a large number of attributes; these are described in the next three subsections.. The following attributes affect when a linebreak does or does not occur, and the appearance of the linebreak when it does occur. The following attributes affect indentation of the lines making up a formula. Primarily these are to control the positioning of new lines following a linebreak, whether automatic or manual. However, indentstylefirst and indentoffsetfirst also control the positioning of single line formula without any linebreaks. Formula indentation only applies to displayed equations (ie. display="block"). When these attributes appear on mo or mspace they apply if a linebreak occurs at that element. When the appear on mstyle or math elements, they determine defaults for the style to be used for any linebreaks occuring within. Note that except for cases where heavily marked-up manual linebreaking is desired, many of these attributes are most useful when bound on an mstyle or math element. Note that since the rendering context, such as available the width and current font, is not always available to the author of the MathML, a render may ignore the values of these attributes if they result in a line in which the remaining width is too small to usefully display the expression or if they result in a line in which the remaining width exceeds the available linewrapping width. The legal values of indentstyle are:) [0,1) f(x,y) Certain operators that are "invisible" in traditional mathematical notation should be represented using specific entity references within mo elements, rather than simply by nothing. The characters used for these "invisible operators" are: The MathML representations of the examples in the above table are: (U+2146) for use in an mo element representing the differential operator symbol usually denoted by "d". The reasons for explicitly using this special character are similar to those for using the special characters for invisible operators described in the preceding section. <mo>elements in front the mrow discussed above may be inferred; See Section 3.1.3.1 Inferred <mrow>s. Opening fences should have form ="prefix", and closing 5.1 Semantic Annotations), horizontal space added around an operator (or embellished operator), when it occurs in an mrow, can be directly specified by the lspace and rspace attributes. Note that lspace and rspace should be interpreted as leading and trailing space, in the case of RTL direction. By convention, operators that tend to bind tightly to their arguments have smaller values for spacing than operators that tend to bind less tightly. This convention should be followed in the operator dictionary included with a MathML renderer. math.. .1" in Section 3.1.8 Warning about fine-tuning of presentation. In some cases, text embedded in mathematics could be more appropriately represented using mo or mi elements. For example, the expression 'there exists such that f(x) <1' is equivalent to and could be represented as:. Note the warning about the legal grouping of "space-like elements" given below, and the warning about the use of such elements for "tweaking" in Section 3.1.8 Warning about fine-tuning of presentation. See also the other elements that can render as whitespace, namely mtext, mphantom, and maligngroup. In addition to the attributes listed below, mspace elements accept the attributes described in Section 3.2.2 Mathematics style attributes common to token elements, but note that mathvariant and mathcolor have no effect. mathsize only affects the interpretation of units in sizing attributes (see Section 2.1.5.2 Length Valued Attributes). Note that if both spacing and width are used, the width of the mspace is the sum of these two contributions. Linebreaking was originally specified on mspace in MathML2, but controlling linebreaking on mo is to be preferred. The value "indentingnewline" was defined in MathML2 for mspace; it is now deprecated. Its meaning is the same as newline, which is compatible with its earlier use when no other linebreaking attributes are specified. Note that linebreak values on adjacent mo and mspace elements do not interact; a "nobreak" on an mspace will not, in itself, inhibit a break on an adjacent mo element. <mspace spacing="00"/> <mspace height="3ex" depth="2ex"/> <mrow> <mi>a</mi> <mo id="firstop">+</mo> <mi>b</mi> <mspace linebreak="newline" indentto="firstop"/> <mo>+</mo> <mi>c</mi> </mrow> In the last example, mspace will cause the line to end after the "b" and the following line to be indented so that the "+" that follows will align with the "+" with id="firstop".: See also the warning about "tweaking" in Section 3.1.8 Warning about fine-tuning of presentation. <ms> The ms element is used to represent "string literals" in expressions meant to be interpreted by computer algebra systems or other systems containing "programming languages". By default, string literals are displayed surrounded by double quotes, with no extra spacing added around the string.". For example, <ms>&</ms> represents a string literal containing a single character, &, and <ms>&amp;</ms> represents a string literal containing 5 characters, the first one of which is &. The content of ms elements should be rendered with visible "escaping" of certain characters in the content, including at least the left and right quoting characters, and preferably whitespace other than individual space characters. The intent is for the viewer to see that the expression is a string literal, and to see exactly which characters form its content. For example, <ms>double quote is "</ms> might be rendered as "double quote is \"". Like all token elements, ms does trim and collapse whitespace in its content according to the rules of Section 2.1.7 Collapsing Whitespace in Input, so whitespace intended to remain in the content should be encoded as described in that section. ms elements accept the attributes listed in Section 3.2.2 Mathematics style attributes common to token elements, and additionally: <mglyph/> The mglyph element provides a mechanism for displaying images to represent non-standard symbols. It is generally used as the content of mi or mo elements where existing Unicode characters are not adequate. Unicode defines a large number of characters used in mathematics, and in most cases, glyphs representing these characters are widely available in a variety of fonts. Although these characters should meet almost all users needs, MathML recognizes that mathematics is not static and that new characters and symbols are added when convenient. Characters that become well accepted will likely be eventually incorporated by the Unicode Consortium or other standards bodies, but that is often a lengthy process. Note that the glyph's src attribute uniquely identifies the mglyph; two mglyphs with the same values for src should be considered identical by applications that must determine whether two characters/glyphs are identical. mglyph elements accept the attributes listed in Section 3.2.2 Mathematics style attributes common to token elements, but note that mathvariant and mathcolor have no effect. mathsize only affects the interpretation of units in sizing attributes (see Section 2.1.5.2 Length Valued Attributes). The background color, mathbackground, should show through if the specified image has transparency. mglyph also accepts the additional attributes listed here.raid"/></mi> </mrow> This might render as: Originally, mglyph was designed to provide access to non-standard fonts. Since this functionality was seldom implemented, nor were downloadable web fonts widely available, this use of mglyph has been deprecated. For reference, the following attribute was previously defined.. mrow elements are typically rendered visually as a horizontal row of their arguments, left to right in the order in which the arguments occur, in a context with LTR directionality, or right to left. The dir attribute can be used to specify the directionality for a specific mrow, otherwise it inherits the directionality from the context. For aural agents, the arguments would be rendered provides support for both automatic and manual linebreaking of expressions (that is, to break excessively long expressions into several lines). All such linebreaks take place within mrows, whether they are explicitly marked up in the document, or inferred (See Section 3.1.3.1 Inferred <mrow>s), although the control of linebreaking is effected through attributes on other elements (See Section 3.1.7 Linebreaking of Expressions). mrow elements accept the attributes listed in Section 2.1.6 Attributes Shared by all MathML Elements and the dir attribute as described in Section 3.1.5.1 Overall Directionality of Mathematics Formulas. . <mrow>of Section 2.3 Conformance. leading operator has an infix or prefix form (perhaps inferred), the following operator has an infix or postfix form, and the operators have the same priority in the operator dictionary (Appendix: The proper encoding of (x, y) furnishes a less obvious example of nesting mrows:> The mfrac element sets displaystyle to "false", or if it was already false increments scriptlevel by 1, within numerator and denominator. (See Section 3.1.6 Displaystyle and Scriptlevel.) mfrac elements accept the attributes listed below in addition to those listed in Section 2.1.6 Attributes Shared by all MathML Elements. Thicker lines (eg. linethickness="thick") might be used with nested fractions; a value of "0" renders without the bar such as for binomial coefficients. These cases are shown below: An example illustrating the bevelled form is show below: In a RTL directionality context, the numerator leads (on the right), the denominator follows (on the left) and the diagonal line slants upwards going from right to left. Although this format is an established convention, it is not universally followed; for situations where a forward slash is desired in a RTL context, alternative markup, such as an mo within an mrow should be used. a single argument, possibly being an inferred mrow of multiple children; see Section 3.1.3 Required Arguments. The mroot element increments scriptlevel by 2, and sets displaystyle to "false", within index, but leaves both attributes unchanged within base. The msqrt element leaves both attributes unchanged within its argument. (See Section 3.1.6 Displaystyle and Scriptlevel.) Note that in a RTL directionality, the surd begins on the right, rather than the left, along with the index in the case of mroot. msqrt and mroot elements accept the attributes listed in Section 2.1.6 Attributes Shared by all MathML Elements. a single argument, possibly being an inferred mrow of multiple children; see mathcolor, which can only be set on token elements (or on mstyle itself). There,. mstyle elements accept the attributes listed in Section 2.1.6 Attributes Shared by all MathML Elements. Additionally, mstyle can be given the following special attributes that are implicitly inherited by every MathML element as part of its rendering environment:, mathsize.. MathML2 allowed the binding of namedspaces to new values. It appears that this capability was never implemented, and is now deprecated; namedspaces are now considered constants. For backwards compatibility, the following attributes are accepted on the mstyle element, but are expected to have no effect. <merror> The merror element displays its contents as an "error message". This might be done, for example, by displaying the contents in red, flashing the contents, or changing the background color. The contents can be any expression or expression sequence. merror accepts a single argument possibly being an inferred mrow of multiple children; see 2.3. merror elements accept the attributes listed in Section 2.1 Note that the preprocessor's input is not, in this case, valid MathML, but the error message it outputs is valid MathML. <mpadded> An mpadded element renders the same as its content, but with its "bounding box" and position modified according to its attributes. It does not rescale (stretch or shrink) its content, but affects the relative position of the content with respect to surrounding elements. While the name of the element reflects the use of mpadded to add "padding", or extra space, around its content, negative "padding" can cause the content of mpadded to be rendered outside the mpadded element's bounding box; See Section 3.1.8 Warning about fine-tuning of presentation for warnings about several potential pitfalls of this effect. The mpadded element accepts a single argument possibly being an inferred mrow of multiple children; see Section 3.1.3 Required Arguments. It is suggested that audio renderers add (or shorten) time delays based on the attributes representing horizontal space ( width and lspace). mpadded elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. (The pseudo-unit syntax symbol is described below.) These attributes modify the size and position of the "bounding box" of the mpadded element. The typographical layout parameters defined by these attributes are described.1.5.2 Length Valued Attributes. If value begins with a + or - sign, it specifies an increment or decrement of the corresponding dimension by the following length value (extended as explained below). Otherwise, the corresponding dimension is set directly to the following length value. Note that signs are thus not allowed in the following length, and these attributes cannot be set directly to negative values. Length values (excluding any sign) can be specified in several formats. Each format begins with an unsigned-number, which may be followed by a % sign (effectively scaling the number) and an optional pseudo-unit, by a pseudo-unit alone, or by a units (excepting %). The possible pseudo-units are the keywords width, lspace, height, and depth; they each represent the length of the same-named dimension of the mpadded element's content (not of the mpadded element itself). For any of these length formats, the resulting length is the product of the number (possibly including the %) and the following pseudo-unit, units, namedspace or the default value for the attribute if no such unit or space is given.. The size of the bounding box and the relative location of the positioning point for the mpadded element are defined by its size and positioning attributes. The argument of the mpadded element is always rendered with its natural positioning point coinciding with the positioning point of the mpadded elements. Thus, by using the size and position attributes of mpadded to expand or shrink its bounding box, the visual effect is to pad the child content or the move the content so that it overlaps neighboring elements. The width attribute refers to the horizontal width of the natural visual bounding box of the mpadded element's content. Decreasing the width causes following content to be rendered closer to the positioning point than would normally have occurred; setting the width to 0 causes it to completely overlap the argument. any content above it to be rendered lower than normal, possibly overlapping. a single argument possibly being an inferred mrow of multiple children; see Section 3.1.3 Required Arguments.. mphantom elements accept the attributes listed in Section 2.1.6 Attributes Shared by all MathML Elements. There is one situation where the preceding rules and <mfenced> <mi>x</mi> <mi>y</mi> </mfenced> renders as "(x, y)" and is equivalent to. mfenced elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. A generic mfenced element, with all attributes explicit, looks as follows: <mfenced open="opening-fence" close="closing-fence" separators="sep#1 sep#2 ... sep#(n-1)" > arg#1 ... arg#n </mfenced> In a RTL directionality context, since the initial text direction is RTL, characters in the open and attributes that have a mirroring counterpart will be rendered in that mirrored form. In particular, the default values will render correctly as a parenthesized sequence in both LTR and RTL contexts.. a single argument possibly being an inferred mrow of multiple children; see Section 3.1.3 Required Arguments. menclose elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. The values allowed for notation are open-ended. Conforming renderers may ignore any value they do not handle, although renderers are encouraged to render as many of the values listed below as possible.. A similar result can be achieved with the value "top right". The case of notation="radical" is equivalent to the msqrt schema. The values "box", "roundedbox", and "circle" should enclose the contents as indicated by the values. The amount of distance between the box, roundedbox, or circle, and the contents are not specified by MathML, and is left to the renderer. In practice, paddings on each side of 0.4em in the horizontal direction and .5ex in the vertical direction seem to work well. The values "left", "right", "top" and "bottom" should result in lines drawn on those sides of the contents. The values "updiagonalstrike", "downdiagonalstrike", "verticalstrike" and "horizontalstrike" should result in the indicated strikeout lines being superimposed over the content of the menclose, e.g. a strikeout that extends from the lower left corner to the upper right corner of the menclose element for "updiagonalstrike", etc. The value "madruwb" should generate an enclosure representing an Arabic factorial (`madruwb' is the transliteration of the Arabic مضروب for factorial). This is shown in the third example below. The baseline of an menclose element is the baseline of its child (which might be an implied mrow). in Section 3.1.6 Displaystyle and Scriptlevel.. Note that ordinary scripts follow the base (on the right in LTR context, but on the left in RTL context); prescripts precede the base (on the left (right) in LTR (RTL) context). Because presentation elements should be used to describe the abstract notational structure of expressions, it is important that the base expression in all "scripting" elements (i.e. the first argument expression) should be the entire expression that is being scripted, not just the trailing character. For example, (x+y)2 should be written as: <msub> The msub element attaches a subscript to a base using the syntax <msub> base subscript </msub> It increments scriptlevel by 1, and sets displaystyle to "false", within subscript, but leaves both attributes unchanged within base. (see Section 3.1.6 Displaystyle and Scriptlevel.) msub elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <msup> The msup element attaches a superscript to a base using the syntax <msup> base superscript </msup> It increments scriptlevel by 1, and sets displaystyle to "false", within superscript, but leaves both attributes unchanged within base. (see Section 3.1.6 Displaystyle and Scriptlevel.) msup elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <msubsup> The msubsup element is used to attach both a subscript and superscript to a base expression. <msubsup> base subscript superscript </msubsup> It increments scriptlevel by 1, and sets displaystyle to "false", within subscript and superscript, but leaves both attributes unchanged within base. (see Section 3.1.6 Displaystyle and Scriptlevel.) Note that both scripts are positioned tight against the base as shown here versus the staggered positioning of nested scripts as shown here ; the latter can be achieved by nesting an msub inside an msup. msubsup elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements.under> The munder element attaches an accent or limit placed under a base using the syntax <munder> base underscript </munder> It always sets displaystyle to "false" within the underscript, but increments scriptlevel by 1 only when accentunder. munder elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements.. <mover> The mover element attaches an accent or limit placed over a base using the syntax <mover> base overscript </mover> It always sets displaystyle to "false" within overscript, but increments scriptlevel by 1 only when accent is "false". Within base, it always leaves both attributes unchanged. (see Section 3.1.6 Displaystyle and Scriptlevel.) If. mover elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. The difference between an accent versus limit is shown here:. <munderover> The munderover element attaches accents or limits placed both over and under a base using the syntax <munderover> base underscript overscript </munderover> It always sets displaystyle to "false" within underscript and overscript, but increments scriptlevel by 1 only when accentunder or accent, respectively, and overscript are drawn in a subscript and superscript position, respectively. In this case, the accentunder and accent attributes are ignored. This is often used for limits on symbols such as ∑. munderover elements accept the attributes listed below in addition to those specified in Section 2.1 defaults for accent and accentunder are computed in the same way as for munder and mover, respectively. <mmultiscripts> Presubscripts and tensor notations are represented by a single element, mmultiscripts, using the syntax: <mmultiscripts> base (subscript superscript)* [ <mprescripts/> (presubscript presuperscript)* ] < the same order as the directional context (ie. left-to-right order in LTR context).. See Section 3.4.3.2 Attributes. The mmultiscripts element increments scriptlevel by 1, and sets displaystyle to "false", within each of its arguments except base, but leaves both attributes unchanged within base. (see Section 3.1.6 Displaystyle and Scriptlevel.). While the two-dimensional layouts used for elementary math such as addition and multiplication are somewhat similar to tables, they differ in important ways. For layout and for accessibility reasons, the mstack and mlongdiv elements discussed in Section 3.6 Elementary Math should be used for elementary math notations. In addition to the table elements mentioned above, the mlabeledtr element is used for labeling rows of a table. This is useful was allowed to `infer' mtr elements around its arguments, and the mtr element could infer mtd elements. This behaviour is deprecated.) Table rows that have fewer columns than other rows of the same table (whether the other rows precede or follow them) are effectively padded on the right (or left in RTL context). MathML. mtable elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. In the above specifications for attributes affecting rows (respectively, columns, or the gaps between rows or columns), the notation (...)+ means that multiple values can be given for the attribute as a space separated list (see Section 2.1.5 MathML Attribute Values). In this context, a single value specifies the value to be used for all rows (resp., columns or gaps). A list of values are taken to apply to corresponding rows (resp., columns or gaps) starting from the top (resp., left or gap after the first row or column). If there are more rows (resp., columns or gaps) than supplied values, the last value is repeated as needed. If there are too many values supplied, the excess are ignored. Note that none of the spaces occupied by lines frame, rowlines and columnlines, nor the spacing framespacing, rowspacing or columnspacing, nor the label in mlabeledtr are counted as rows or columns. in a LTR context or rightmost column in a RTL context. As described in Section 3.5.1 Table or Matrix <mtable>, mtr elements are effectively padded on the right with mtd elements when they are shorter than other rows in a table. mtr elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. . This should be rendered as: <mtd> An mtd element represents one entry, or cell, in a table or matrix. An mtd element is only allowed as a direct sub-expression of an mtr or an mlabeledtr element. The mtd element accepts a single argument possibly being an inferred mrow of multiple children; see Section 3.1.3 Required Arguments. mtd elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements.. <maligngroup>, <malignmark> (directly or indirectly) of the following types (which are themselves contained in the table cell): an mrow element, including an inferred mrow such as the one formed by a multi-child Mixing Markup Languages for further details about mixing presentation and content markup.). The rationale for this is that it would be inconvenient to have to remove all unnecessary malignmark elements from automatically generated data, in certain cases, such as when they are used to specify alignment on "decimal points" other than the '.' character. <malignmark>Attributes malignmark elements accept the attributes listed below in addition to those specified in Section 2.1>Attributes maligngroup elements accept the attributes listed below in addition to those specified in Section 2.1, menclose,. For example, if an mtable element specifies values for groupalign and a maligngroup element within the table also specifies an explicit groupalign value, then then the value from the maligngroup takes priority..1 is divided into 2n horizontally. Each alignment group is then shifted horizontally as a block to unique position that places: in the section called L(i) that part of the ith group to the left of its alignment point; in the section called R(i) that part of the ith group to the right of its alignment point.. Mathematics used in the lower grades such as two-dimensional addition, multiplication, and long division tends to be tabular in nature. However, the specific notations used varies among countries much more than for higher level math. Furthermore, elementary math often presents examples in some intermediate state and MathML must be able to capture these intermediate or intentionally missing partial forms. Indeed, these constructs represent memory aids or procedural guides, as much as they represent `mathematics'. The elements used for basic alignments in elementary math are: mstack, for aligning rows of digits and operators; msgroup, for grouping rows with similar alignment; msrow, for grouping digits and operators into a row; and msline, for drawing lines between the rows of the stack. Carries are supported by mscarry, with mscarries used for associating a set of carries with a row. Long division, mlongdiv, composes an mstack with a divisor and quotient. mstack and mlongdiv are the parent elements for all elementary math layout. Since the primary use of these stacking constructs is to stack rows of numbers aligned on their digits, and since numbers are always formatted left-to-right, the columns of an mstack are always processed left-to-right; the overall directionality in effect (ie. the dir attribute) does not affect to the ordering of display of columns or carries in rows and, in particular, does not affect the ordering of any operators within a row (See Section 3.1.5 Directionality). These elements are described in this section followed by examples of their use. In addition to two-dimensional addition, subtraction, multiplication, and long division, these elements can be used to represent several notations used for repeating decimals. A very simple example of two-dimensional addition is shown below: The MathML for this is: Many more examples are given in Section 3.6.8 Elementary Math Examples. <mstack> mstack is used to lay out rows of numbers that are aligned on each digit. This is common in many elementary math notations such as 2D addition, subtraction, and multiplication.. Each row contains `digits' that are placed into columns. (see Section 3.6.4 Rows in Elementary Math <msrow> for further details). The stackalign attribute together with the position and shift attributes of msgroup, mscarries, and msrow determine to which column a character belongs. The width of a column is the maximum of the widths of each `digit' in that column — carries do not participate in the width calculation; they are treated as having zero width. If an element is too wide to fit into a column, it overflows into the adjacent column(s) as determined by the charalign attribute. If there is no character in a column, its width is taken to be the width of a 0 in the current language (in many fonts, all digits have the same width). The method for laying out an mstack is: The `digits' in a row are determined. All of the digits in a row are initially aligned according to the stackalign value. Each row is positioned relative to that alignment based on the position attribute (if any) that controls that controls that row. The maximumn width of the digits in a column are determined and shorter and wider entries in that column are aligned according to the charalign attribute. The width and height of the mstack element are computed based on the rows and columns. Any overflow from a column is not used as part of that computation. The baseline of the mstack element is determined by the align attribute. mstack elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <mlongdiv> Long division notation varies quite a bit around the world, although the heart of the notation is often similar. mlongdiv is similar to mstack and used to layout long division. The first two children of mlongdiv are the result of the division and the divisor. The remaining children are treated as if they were children of mstack. The placement of these and the lines and separators used to display long division are controlled by the longdivstyle attribute. In the remainder of this section on elementary math, anything that is said about mstack applies to mlongdiv unless stated otherwise. mlongdiv elements accept all of the attributes that mstack elements accept (including those specified in Section 2.1.6 Attributes Shared by all MathML Elements), along with the attribute listed below. The values allowed for longdivstyle are open-ended. Conforming renderers may ignore any value they do not handle, although renderers are encouraged to render as many of the values listed below as possible. See Section 3.6.8.3 Long Division for examples of how these notations are drawn. The values listed above are used for long division notations in different countries around the world: <msgroup> msgroup is used to group rows inside of the mstack element that have a similar position relative to the alignment of stack. Any children besides msrow, msgroup, mscarries and msline are treated as if implicitly surrounded by an msrow (See Section 3.6.4 Rows in Elementary Math <msrow> for more details about rows). msgroup elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. If both position and shift are set to "0", then msgroup has no effect. <msrow> An msrow represents a row in an mstack. In most cases it is implied by the context, but is useful explicitly for putting multiple elements in a single row, such as when placing an operator "+" or "-" along side a number within an addition or subtraction. If an mn element is a child of msrow (whether implicit or not), then the number is split into its digits and the digits are placed into successive columns. Any other element, with the exception of mstyle is treated effectively as a single digit occupying the next column. An mstyle is treated as if its children were the directly the children of the msrow, but with their style affected by the attributes of the mstyle. The empty element none may be used to create an empty column. Note that a row is considered primarily as if it were a number, which are always displayed left-to-right, and so the directionality used to display the columns is always left-to-right; textual bidirectionality within token elements (other than mn) still applies, as does the overall directionality within any children of the msrow (which end up treated as single digits); see Section 3.1.5 Directionality. msrow elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <mscarries> mscarries is used for the various annotations such as carries, borrows, and crossouts that occur in elementary math. The children are associated with the element in the same column in the following row of the mstack, although this correspondence can be adjusted by position. Additionally, since these annotations are used to adorn what are treated as numbers, the attachment of carries to columns proceeds from left-to-right; The overall directionality does not apply to the ordering of the carries, although it may apply to the contents of each carry; see Section 3.1.5 Directionality. Each child of mscarries other than mscarry or none is treated as if implicitly surrounded by mscarry; the element none is used when no carry for a particular column is needed. mscarries increments scriptlevel, so the children are typically displayed in a smaller font. It also changes scriptsizemultiplier from the inherited value; scriptsizemultiplier can be set on the mscarries element. mscarries elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <mscarry> mscarry is used inside of mscarries to represent the carry for an individual column. A carry is treated as if its width were zero; it does not participate in the calculation of the width of its corresponding column; as such, it may extend beyond the column boundaries. Although it is usually implied, the element may be used explicitly to override the location and/or crossout attributes of the containing mscarries. It may also be useful with none as its content in order to display no actual carry, but still enable a crossout due to the enclosing mscarries to be drawn for the given column. mscarries elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. <msline/> msline draws a horizontal line inside of a mstack element. The position, length, and thickness of the line are specified as attributes. msline elements accept the attributes listed below in addition to those specified in Section 2.1.6 Attributes Shared by all MathML Elements. Two-dimensional addition, subtraction, and multiplication typically involve numbers, carrries/borrows, lines, and the sign of the operation. Notice that the msline spans all of the columns and that none is used to make the "+" appear to the left of all of the operands. The MathML for this is:. Because the default alignment is placed to the right of number, the numbers align properly and none of the rows need to be shifted. The following two examples illustrate the use of mscarries, mscarry and using none to fill in a column. The examples illustrate two different ways of displaying a borrow. The MathML for the first example is: The MathML for the second example uses mscarry because a crossout should only happen on a single column: Here is an example of subtraction where there is a borrow with multiple digits in a single column and a cross out. The borrowed amount is underlined (the example is from a Swedish source): There are two things to notice. The first is that menclose is used in the carry and that none is used for the empty element so that mscarry can be used to create a crossout. Below is a simple multiplication example that illustrates the use of msgroup and the shift attribute. The first msgroup does nothing. The second msgroup could also be removed, but msrow would be needed for its second and third children. They would set the position or shift attributes, or would add none elements. This example has multiple rows of carries. It also (somewhat artificially) includes commas (",") as digit separators. The encoding includes these separators in the spacing attribute value, along non-ASCII values. The notation used for long division varies considerably among countries. Most notations share the common characteristics of aligning intermediate results and drawing lines for the operands to be subtracted. Minus signs are sometimes shown for the intermediate calculations, and sometimes they are not. The line that is drawn varies in length depending upon the notation. The most apparently difference among the notations is that the position of the divisor varies, as does the location of the quotient, remainder, and intermediate terms. The layout used is controlled by the longdivstyle attribute. Below are examples for the values listed in Section 3.6.2.2 Attributes The MathML for the first example is: With the exception of the last example, the encodings for the other examples are the same except that the values for longdivstyle differ and that a "," is used instead of a "." for the decimal point. For the last example, the only difference from the other examples besides a different value for longdivstyle is that Arabic numerals have been used in place of Latin numerals. Decimal numbers that have digits that repeat infinitely such as 1/3 (.3333...) are represented using several notations. One common notation is to put a horizontal line over the digits that repeat (in Portugal an underline is used). Another notation involves putting dots over the digits that repeat. These notations are shown below: The MathML for these involves using mstack, msrow, and msline in a straightforward manner. The MathML for the preceeding examples above is given below. <mstack> <msline length="1"/> <mn> 0.3333 </mn> </mstack> <mstack> <msline length="6"/> <mn> 0.142857 </mn> </mstack> <mstack> <mn> 0.142857 </mn> <msline length="6"/> </mstack> <mstack> <msrow> <mo>.</mo> <none/><none/><none/><none/> <mo>.</mo> </msrow> <mn> 0.142857 </mn> </mstack> <maction> To provide a mechanism for binding actions to expressions, MathML provides the maction element. This element accepts any number of sub-expressions as arguments and the type of action that should happen is controlled by the actiontype attribute. Only three actions are predefined by MathML, but the list of possible actions is open. Additional predefined actions may be added in future versions of MathML. Linking to other elements, either locally within the math element or to some URL, is not handled by maction. Instead, it is handled by adding a link directly on a MathML element as specified in Section 6.4.1 Mixing MathML and HTML. maction elements accept the attributes listed below in addition to those specified in Section 2.1 2.3.2 Handling of Errors. If a MathML application responds to a user command to copy a MathML sub-expression to the environment's "clipboard" (see Section 6.3 Transferring MathML), any maction elements present in what is copied should be given selection values that correspond to their selection state in the MathML rendering at the time of the copy command. The meanings of the various actiontype values is given below. Note that not all renderers support all of the actiontype values, and that the allowed values are open-ended. selectionvalue, wrapping back to 1 when it reaches the last child.. mtextelement in most circumstances. For non- mtextmessages, renderers might provide a natural language translation of the markup, but this is not required. mtextelement in most circumstances. For non- mtextmessages, renderers may provide a natural language translation of the markup if full MathML rendering is not practical, but this is not required. mactionis replaced by what is entered, pasted, etc. MathML does not restrict what is allowed as input, nor does it require an editor to allow arbitrary input. Some renderers/editors may restrict the input to simple (linear) text. The actiontype values are open-ended. If another value is given and it requires additional attributes, the attributes must be in a different namespace. This is shown below: my:colorattributes might change the color of the characters in the presentation, while the my:backgroundattribute might change the color of the background behind the characters. MathML uses the semantics element to allow specifying semantic annotations to presentation MathML elements; these can be content MathML or other notations. As such, semantics should be considered part of both presentation MathML and content MathML. All MathML processors should process the semantics element, even if they only process one of those subsets. In semantic annotations a presentation MathML expression is typically the first child of the semantics element. However, it can also be given inside of an annotation-xml element inside the semantics element. If it is part of an annotation-xml element, then encoding="MathML-presentation" must be used and presentation MathML processors should use this value for the presentation. See Section 5.1 Semantic Annotations for more details about the semantics and annotation-xml elements.
http://www.w3.org/TR/2009/WD-MathML3-20090604/chapter3.xml
CC-MAIN-2016-26
refinedweb
9,706
54.42
Learning the JAX-RS basics: How to write a simple REST interface for accessing System properties with JAX-RS (Java API for RESTful Services). Why yet another JAX-RS basics tutorial? Just after Labour Day Labor Day last year, I was debugging a problem involving JAX-RS in the Liberty profile. I wasn’t even sure I knew what JAX-RS stood for. Still, to debug the problem I needed a sample JAX-RS application. Fast. I decided this as an opportunity to learn how to code to JAX-RS (which, I learned, stands for Java API for RESTful Services). But while everything I read about JAX-RS was probably 100% accurate, it suffered from having been written by someone who understood JAX-RS. Sure, tutorials might be out there, but I didn’t find a helpful one. About the sample application Any good tutorial needs a sample application. For simplicity, and to focus just on the JAX-RS-ness, I decided to write a simple REST interface for accessing system properties. I decided I would simply expose the Java system properties using REST: System/properties This should return a JSON response like this: { "property1" : "propertyValue", "property2" : "propertyValue" } And the individual properties to be available from: System/property/<propertyName> which should return a JSON response like this: "<propertyName>" : "propertyValue" Building the application Here’s how I built the application. I used the Eclipse IDE for Java EE, WDT 8.5.5 and Liberty profile 8.5.5 for this; I’m going to assume you have some basic knowledge of these tools. Step 1 – Creating the Web project - Create a new Web project based on the REST Services template, then click Next. - On the Deployment page, configure the project to use an existing Liberty profile installation and clear the add it to an EAR check box. - On the REST Services page, disable the JAX-RS option and clear the Update Deployment Descriptor check box. This gets rid of the errors so that you can click Finish. Step 2 – Creating the JAX-RS Java Bean - JAX-RS is based on the idea of annotated Java classes, so create a Java class with the name net.wasdev.jaxrs.SystemProperties. - To make this into a JAX-RS bean, annotate it using the javax.ws.rs.Pathannotation like this: @Path("System/properties") public class SystemProperties { } This says “this class is a resource and can be accessed from the ‘System/properties’ URL”. This isn’t very useful though because we can’t do anything with the resource. With REST, read access is mapped to the http GET method, so in order to provide data you provide a getter method annotated with the javax.ws.rs.GETannotation. - Create a getSystemPropertiesmethod like this one: @GET public Map<String, String> getSystemProperties() { Map<String, String> result = new HashMap<>(); Properties props = System.getProperties(); for (Map.Entry<Object, Object> entry : props.entrySet()) { result.put((String)entry.getKey(), (String)entry.getValue()); } return result; } Step 3 – Configuring the web.xml I have to admit, it surprised me to find out that defining the @Path annotation isn’t enough. You also need to do one of two things: - Add some magic to the web.xml - Create a class that extends javax.ws.rs.Application Although web.xml is a little out of vogue these days, I chose this option because it meant that I didn’t need to define a custom JAX-RS application. All that would have done is specify the same metadata as I could add to the web.xml file. To edit the web.xml: - Right-click the project then select Java EE Tools > Generate Deployment Descriptor Stub. - Add the following XML to the WebContext/WEB-INF/web.xml immediately before the element: <servlet> <servlet-name>javax.ws.rs.core.Application</servlet-name> </servlet> <servlet-mapping> <servlet-name>javax.ws.rs.core.Application</servlet-name> <url-pattern>/rest/*</url-pattern> </servlet-mapping> Step 4 – Running the application (and fixing it) - Right-click the project then click Run As > Run on server. - Open a web browser at. You’ll see the following console text: [ERROR ] The system could not find a javax.ws.rs.ext.MessageBodyWriter or a DataSourceProvider class for the java.util.HashMap type and text/html mediaType. Ensure that a javax.ws.rs.ext.MessageBodyWriter exists in the JAX-RS application for the type and media type specified. This says it couldn’t convert the map to text/html. - To indicate that you want to use JSON, add another annotation. Add the following code to the method to get JSON serialization: @GET @Produces(MediaType.APPLICATION_JSON) public Map<String, String> getSystemProperties() { The javax.ws.rs.Producesannotation specifies what type to convert the data to. - This change isn’t reflected automatically so switch to the Servers view, right-click the application, and select Restart. Welcome to your first working JAX-RS application. Edit to my last comment: For Step 3, second bullet, after removing web.xml: add the following class. You can choose your own application path value. Step 2, second bullet, after removing web.xml, add this class: customizate your application path value. package p; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath(“/mypath”) public class MyApplication extends Application { } Hi Kevin, what incorrect content does the wizard insert? Perhaps there is a bug in here that needs to be addressed? This tutorial suggests two ways to get your JAX-RS service up and running. In step 3 above, second bullet point, by using the Application class, it does not necessarily require you to use web.xml. The wizard provides the option to generate the web.xml or not. You can create the web.xml after-the-fact via an action in the Java EE Tools context menu from the project. Thank you! Step 3 – Configuring the web.xml <<< This step is the KEY! For the life in me, I couldn't get my REST services up simply because stupid eclipse doesn't create a web.xml file for you by default. And when you use the JAX-RS wizard to update the web.xml, it incorrectly inserts stuff. it would be useful to also show how to code REST client in Liberty Thanks for the useful article – I’ve just got my first REST service working!
https://developer.ibm.com/wasdev/docs/jax-rs-basics/
CC-MAIN-2017-34
refinedweb
1,050
58.99
The following forum(s) have migrated to Microsoft Q&A: All English System Center 2012 Configuration Manager forums! Visit Microsoft Q&A to post new questions. Hello, we have sccm 2007 and we have just installed sccm 2012, so i upgraded distribution points to sccm 2012. i can upgrade successfully most of distribution points. but one of them couldnt upgraded. so i check the distmgr.log file i saw an error that : "CWmi::Connect() failed to connect to \\SrvrDistribution\root\default. Error = 0x800706BA" i checked the following : - i have permission (i added sccm2012 machine account to administrators group) - there isnt any local or network firewall between sccm2012 and distribution point machine - i can access distribution point machine by using dns. there is dns record in dns. so i searched on internet about error 0x800706BA. it means rpc server unavailable. but i checked it services...bits,wmi,rpc,com+ services are running. i rebooted the server but there isnt any change. how can i fix it ? note : i suspect the machine name "SrvrDistribution". maybe there is a bug about length of machine name. it has 16 character. is it a problem ? extra information : distribution point machine is windows server 2003 standart edition service pack 2. i installed and configured iis manually. also i installed rollup for sccm 2012. These errors even showed up for successful upgrades I've done. Does it actually stop or continue on with the upgrade? From the primary: 1. Start - Run 2. wbemtest 3. Connect 4. Enter the namespace as: \\SrvrDistribution\root\default Does it connect? Hi Guys, Anyone sort this out? I am experiencing the exact same issue.. Currently have 1 Primary and 1 Secondary Site with 13DP's that I have to migrate.. Question 1. Is there a way to bulk reassign distribution point? Yes i have upgraded to R2.. Question 2. I manually reassigned 1 of the DP's and received the error 'failed to update binaries'. I have however been able to connect via wbemtest. I can still see in programs that SCCM 2007 is still installed? In 2012 console, I can see it as a Distribution Point.. What the... Thanks
https://social.technet.microsoft.com/Forums/en-US/48118e8e-e1f5-4062-9c35-37f149cd531d/failed-to-update-binaries-distribution-point-upgrade-sccm-2007-to-sccm-2012?forum=configmanagermigration
CC-MAIN-2020-50
refinedweb
358
61.83
- Argv - Design of singleton sub-type? - how to check write failure in ofstream?? - reinterpret_cast ? bad? good? - Ambig - searching, creating new file from old using c++ - virtual binding surviving stream transport of objects - Another way of adding to a string (or output iterator)? - template specialization - recursive control path issue. - Defining globals inside structs - best practices - Strip String in 2 blocks - Using this objdct - Question about std:string and std:cout -? - implementing constructors given class declarations - is assigning a substr to the parent str OK? - counting nesting level in template classes - Set a pointer to null when deleting? - where to find a complete C++ library reference? - Question on static - Free C online Test - Mutithread monitoring - Signal handling in VC++ - How Can I link a .cp file in the "main" header? - How to extract data of a Blitz++ Array ? - how to concatenate the string - how to concatenate the string - logic for converting data obtained from input - static member variable in a DLL... - C/C++ Hardware modelling - tower of hanoi - tower of hanoi - Convert int to *char. - Bad Access - Memory Problem? - COM in PHP - Free source code diagramming programs - Name + number - Template files and compiling them - Efficient searching through stl map? - classobject->name(); - find_if algorithm on multimaps - the use of the :: operator - Debugging Help - Could a struct with size 44 bytes point always points to a char array with size 2024 bytes? - COM question - newbie - Microsoft Visual C++ 2005 Express Edition - matrix 2x2 - Free online test in C, CPP / Placement papers / CPP,C Interview Questions - complex <double> as return type or parameter - C++ compiler "return" behavior (guru question ;) - keywords "export" - Error when overloading << in template - Can I pass a type name to a function? - few topics to refer back - how to serialise objects in c++ - Need help on template - Program Required - "Timeout while waiting for connection" - updating a file at the same time - Unnamed namespace predicament... - overload on return type - Which design pattern to choose in this situation? - Standalone Executables - how to sendmail - Newbie Homework Help Program 2 - global array defined by parameters passed. protoyping in header? - Catching return value in a const reference needed? - How to generate 52 different random number?? - Template class inheritance problem - Best book for C++ progrmming in web! -) - Mock objects and testing - Using Inheritance -- clarification needed ? - Boost Training - St. Louis - I can't regain control of my form - read registry keys - Changing access specifier for virtual function - While Loop Question - mov or avi transparency - Accessing private member of a class through type-casting - pgm without std library functions - How to hide subclasses? - Is it a good idea to gain experience in both Java & .NET in the Indian software services industry? - remove index - "C" Callbacks: Static method vs. extern "C" - Reading a matrix file and storing it in array - Intel c/c++ on linux system. - str().c_str() question - Vector using shared memory - is this true? sort algo on STL Lists - parsing c++ declarations - casting a struct to a class - How to generate warnings when How generate a warning when int is converted to bool or vice versa? - biblioteka do gif/png - Invoking member functions of objects - What is the defined behavior of past bound iterator - static variable:declare and define - Operator override when using STL - Anyone know how to get the files in one directory? - New Joiners : Pls. Read this - throwing exception from constructor . - overload of operator= - wrong about fstream file(s.c_str(),std::ios_base::in | std::ios_base::app); - wrong about fstream file(s.c_str(),std::ios_base::in | std::ios_base::app); - GUI Issues - [lib] pgp/gpg - Can abstract base class have V-table?, Will the pointer to virtual destructor be entered into the virtual table? - Floating Point Accuracy - Comparing the values of two vectors - Inheriting only interface - factory method - C++ Function Signature string Parser - Is it usful to write a class like "Do_When_Return" so I can make it easy to delete some objects when I exit from a code block? - using pointers to map to data - fstream and number conversions - Model View Controller? - Destroying STL Strings - template accessors for class? - topmole.com: Best of the best hardware and software lists - C++ Questions - GUI in C++ - Using parenthesis with defined (#if defined(...)) - find length of unsigned char *input? - abstract classes and virtual deconstructors assistance - QuantLib or more general C++ - non-aggregate type error assistance needed - newbie question on struct - Question on static attributes in inherited classes - Copy Constructor segmentation fault - n x n matrix transposition recursive algorithm. - Access problem - strange struct behaviour - FREE WEBSITE plus 10EMAIL ADDRESSES - how to define/initialise static template class data members of classtype - Construtors - Collecting different execution statistics of C++ programs - code dl - passing class pointer to other objects - c++ callback functor question - int representation incorrect - About partially specify template member function - improve c++ by exercises/projects - is anyone here who made text vesion tetris? - Linear search - Is this legal (templates) - boost::weak_ptr and shared_ptr pointers from "this" - Inputting text from a text file into a 2 -D array - include .h files in .h - Separating scope of try block? - Fastest way to read from a file into a vector<unsigned char> - using placement new to re-initialize part of an object - using placement new to re-initialize part of an object - using placement new to re-initialize part of an object - Macro to iteratively generate variable names - Opportunity available in Plainboro NJ - Free Graphics Libs For Visual C++ Toolkit 2003 - a dumb question - C++ certification free materials. - signals - more help please - Can a class have a non-static const array as a data member? - 3rd party tool - abt void pointer - Non Recursive In-Order Traversal without using stack - What's the tilde in a &= ~b ? - error C2296: '.*' : illegal, left operand has type 'cStepCurveEvaluator *const ' - Why is the behavior different - why is wcschr so slow??? - C++ standards - SHFileOperation Help - variation with repetitions in C++ - How to implement an async-function? - C linkage problem with ACE on windows - Undeclared identifier - ifsteam - Hash Table Implementation in C++ - Tips for speeding up development - memcpy() question in C++ - Greta and basic_string<char, ignore_case_traits> - Design pattern question - Java Interview questions and answers - getch() and getche() - Code fails under vc8 - clearing a structure - how to check the reference of C++on Linux - Today I buy a new disk of computer but it run not fast - templates question - executing code in a String! - plottting graphs in C++ - STL inherit from container<T>::iterator - windows.h - Problem finding key in STL map with gcc 4.1.0 - system and user time - Multiple dispatch - can operator << accept two parameters? - design pattern .. factory i suspect - In-place function - In-place function - Difference between including a header file in .h and .cpp - factory design question.. - help with recursion please - Q: stl, boost smart pointer to prevent memory leaking - Newbie with an error - String Parser using BOOST.Spirit - Command Line Argument with & and ^ - Constness of the container of shared_ptr and the constness of it elements - ring iterator adaptor for vector interator - copy elements from a pq to a vector - operator new() and new[] - design pattern .. factory i suspect - CGI C++ problem - tournament tree implementation - Obscure Syntax - Name conflict with windows.h define - size of a function - Life of temporaries - Newbie C++ - Double Bubble Sort Algorithm Fails to Sort AT ALL - how to make bool class with custom output? - How to amend this code? - Where I can find itoa()? - swap pointers - when to use private inheritance? - GUI compatibility in C++ - struct reference? - Question on class private member - 'New' operator - need to call mfc application from C program - Manipulating Bitmaps - file problem - reading file and copying it into array... - Getting C++ data into an excel graph - How to store words? - How to avoid the use of copy constructor when using STL container function (push_back, etc.) - How to write game such like this? - private within this context ... more - Flow chart for C code
http://www.velocityreviews.com/forums/archive/f-39-p-100.html
CC-MAIN-2013-20
refinedweb
1,301
54.52
Swansea City A.F.C. Swansea City Association Football Club (/ˈswɒnzi/; Welsh: Clwb Pêl-droed Cymdeithas Dinas Abertawe) club was founded in 1912 as Swansea Town and entered into the Southern League, winning the Welsh Cup in their debut season. They were admitted into the Football League in 1920 and won the Third Division South title in 1924–25. They again won the Third Division South title in 1948–49, having been relegated two years previously. They fell into the Fourth Division after relegations in 1965 and 1967. The club changed their name to Swansea City in 1969 to reflect Swansea's new status as a city.[3] They were promoted at the end of the 1969–70 season, though were relegated again in 1975. The club won three promotions in four seasons to reach the First Division in 1981. It was during the following season they came close to winning the league title, but a decline then set in near the season's end, before they finished sixth, still a club record. It was from here the club suffered a relegation the season after, returning to the Fourth Division by 1986 and then narrowly avoiding relegation to the Conference in 2003. The Swansea City Supporters Society Ltd owns 20% of the club,[4] with their involvement hailed by Supporters Direct as "the most high profile example of the involvement of a supporters' trust in the direct running of a club".[5] The club's subsequent climb from the fourth division of English football to the top division is chronicled in the 2014 film, Jack to a King – The Swansea Story. In 2011, Swansea were promoted to the Premier League. On 24 February 2013, Swansea beat Bradford City 5–0 to win the 2012–13 Football League Cup (the competition's highest ever winning margin for the final), winning the first major trophy in the club's history and qualifying for the 2013–14 UEFA Europa League, in which they reached the round of 32 but lost over two legs to Napoli. The club was relegated from the Premier League at the end of the 2017–18 season. History Early years (1912–1945) The, the club. The Swans beat reigning English champions Blackburn Rovers 1–0 in the first round of the 1914–15 FA Cup, Swansea's goal coming from Ben Beynon.[6]. Post-war (1945–1965)–56 season, when a side containing the likes of Ivor Allchurch, Terry Medwin, Harry Griffiths and Tom Kiley led the table early in the season, before an injury to Kiley, referred to as the linch en route to a famous sixth round victory at Anfield. Few gave the Swans, struggling for their lives at the bottom of Division Two, any chance of causing an upset against the league leaders. But the Swans were 0–2 up at half-time thanks to Jimmy McLaughlin and Eddie Thomas. Liverpool turned up the pressure in the second half, pulling a goal back before being awarded a penalty nine minutes from. After flirting with relegation on a few occasions during the previous seasons, the Swans' luck finally ran out a season later in 1965, and they were back in the Third Division. A downward spiral (1965–1977). The 1967–68 season saw the record attendance of 32,796 at the Vetch Field for an FA Cup Fourth Round match against Arsenal. A tragedy struck the club on 20 January 1969 when players Roy Evans and Brian Purcell were killed in a car crash on the way to a game.[7] In 1969, the club name was changed to Swansea City, and Roy Bentley's side celebrated by securing promotion back to the Third Division. A record run of 19 matches unbeaten provided the foundations for a promotion challenge in 1971–72, but an awful run towards the end of the season resulted watched manager. Malcolm Struel also took over as chairman, having previously been on the board, and promised a return to former glories, stating that he would not sell the club's best young talent as previous boards had done. Meteoric rise and equally rapid fall (1977–1986) 28 years old,, Harry Griffiths died of a heart attack on 25 April 1978 before the home game against Scunthorpe United. A further promotion was achieved next four-year rise from basement to top division is a record in English football, held jointly with Wimbledon F.C.[8] Coincidentally, the Swansea decline started the same year as the Wimbledon rise. Swansea also won the Welsh Cup that season, qualifying for Europe for the first time since the 1965–66 season.. Victories over footballing royalty such as Liverpool, Manchester United, Arsenal and Tottenham Hotspur followed as the club topped the league on several further occasions. However, injuries to key players took their toll, and the lack of depth in the squad meant that the season ended in sixth-place finish. Furthermore, a fateful combination of poor form, misfortune in the transfer market and financial problems led to a slump which was as quick and spectacular as had been the rise:.[10] Eight years on from the first promotion under Toshack, the club was back where it had started. In place of strife (1986–1995) 1940s. within. The 1995–96 season ended with relegation back to the third division after eight Mølby. The difficult years return (1995–2001) playoffs, every week on their way to the title. The side conceded just 32 goals during the 1999–2000 season, largely due to the form of excellent centre-back pairing Jason Smith and Matthew Bound, as well as keeper Roger Freestone. During the season the side set a record of nine consecutive league victories, and, during the same period, seven consecutive clean sheets.. However, the following week's 1–1 draw at Rotherham United, which confirmed Swansea as Division Three Champions, was overshadowed by the death of supporter Terry Coles, who was. Last years at Vetch Field and return to League One (2001–2005) In July 2001, following relegation back to Third Division, the club was sold to managing director Mike Lewis for £1. Lewis subsequently sold on his stake to a consortium of Australian businessmen behind the Brisbane Lions (An Australian Rules Football team that is based in Brisbane) football team, fronted by Tony Petty. Seven players were sacked and eight others saw their contracts terminated, angering supporters and sanctions were threatened by the Football League with a rival consortium headed by ex-player Mel Nurse seeking. Jim Moore and Mel Griffin, previously rescuers of Hull City FC, stepped into the breach and persuaded Petty to sell to them (as he had promised to bankrupt the club & make it extinct rather than sell to Nurse). From there Moore became chairman for three weeks giving the "Mel Nurse Consortium" time to organize its finances. Having successfully reorganized the finances of Hull City FC, both Moore & Griffin were believers in clubs belonging in the hands of local people, and so believing Nurses group were best for The Swans, subsequently passed the club onto Nurses consortium for the fee of £1. Despite problems had put the Swans on the bottom of the Football League for the first time in its 91-year history., clinching a 3rd-placed finish with a 1–0 win away to Bury. Their last league game at their old ground was a 1–0 win over Shrewsbury Town, with the last game of any sort being a 2–1 win against Wrexham in the final of the 2005 FAW Premier Cup. Move to Liberty Stadium and return to top flight (2005–2011) The club moved to the new Liberty Stadium during the summer of 2005. The first competitive game was a 1–0 victory against Tranmere Rovers in August 2005. In their first season back in League One, Swansea, after beating Brentford in the semi-finals, lost on penalties to Barnsley in the final at the Millennium Stadium in Cardiff. That same season, Swansea won the Football League Trophy for the first time since 1994, and the FAW Premier Cup for a second successive year. In the following season Jackett resigned as manager in mid-season to be replaced by Roberto Martínez. Martínez's arrival saw an improvement in form, but Swansea missed out on the play-offs again. The following season, an 18-game unbeaten run helped them to the League One title. The club amassed a total of 92 points over the course of the season, the highest ever by a Welsh club in the Football League. Five Swansea players were in the PFA Team of the Year, including the division's 29-goal top scorer Jason Scotland. That same season Swansea lost on penalties to Milton Keynes Dons in the area final of the Football League Trophy. Upon returning to the second tier of English football after 24 years Swansea City finished the 2008–09 season in eighth place, and missed out on the play-offs the following season by a single point. After an impressive 63 wins in 126 games for Swansea City, Martínez left for Wigan Athletic on 15 June 2009 with his tenure returning just 26 losses in that time. He was replaced by Portuguese Paulo Sousa who adopted a more defensive style of play whilst also retaining the slick and effective continental game of "tiki-taka" football that was installed by his immediate predecessor. Sousa subsequently left Swansea to take charge at Leicester City on 5 July 2010, lasting just one year and 13 days in South Wales. However, just before the departure of Sousa, on 15 May 2010, Swansea player Besian Idrizaj tragically suffered a heart attack in his native Austria while on international duty. The club retired the number 40 shirt in his memory, and the players wore shirts dedicated to Idrizaj after their victory in the play-off final. Northern Irishman Brendan Rodgers took charge for the 2010–11 season. He guided the club to a third-placed finish and qualification for the Championship play-offs, with the new manager again keeping the continental style of play introduced by Martínez. After beating Nottingham Forest 3–1 on aggregate in the semi-final they defeated Reading 4–2 in the final at Wembley Stadium, with Scott Sinclair scoring a hat-trick.[11] Premier League and Europe (2011–2018) By being promoted to the Premier League for the 2011–12 season, Swansea became the first Welsh team to play in the division since its formation in 1992.[12] Swansea signed Danny Graham from Watford for a then-record fee of £3.5 million.[13] They defeated Arsenal, Liverpool and Manchester City, the eventual champions, at home during the season.[14] Swansea finished their debut Premier League season in 11th, but at the end of the season Brendan Rodgers left to manage Liverpool.[15] He was replaced by Michael Laudrup for the 2012–13 Premier League season, which was the club's centenary season.[15] Laudrup's first league game ended in a 0–5 victory over Queens Park Rangers away at Loftus Road.[16] Swansea then beat West Ham United 3–0 at the Liberty Stadium, with Michu scoring his third goal in two games.[17] This saw Swansea top of the Premier League; it was the first time since October 1981 the team had been at the summit of the top tier.[17] .svg.png.webp) On 15 October 2012, the club announced a profit of £14.2 million after their first season in the Premier League.[18] On 1 December, Swansea picked up a 0–2 away win against Arsenal, with Michu scoring twice during the last minutes of the game, in Swansea's first win at Arsenal in three decades.[19] Michu ended the season as the club's top scorer in all competitions, with 22 goals.[20] On 24 February 2013, Swansea beat Bradford City 0–5 in the League Cup final en route to the biggest win in the final of the competition.[21][22] This triumph, in a record victory, was Swansea's first major piece of silverware and qualified them for the 2013–14 UEFA Europa League. Swansea finished the season in ninth place in the Premier League, improving upon the league standing achieved in the previous season. On 11 July, Swansea paid a club record transfer fee of £12 million to secure the signing of striker Wilfried Bony from Vitesse Arnhem; Bony was the leading goalscorer in the 2012–13 Eredivisie with 31 goals and was named Dutch Player of the Year.[23] Swansea enjoyed initial success in Europe, beating Spanish side Valencia 3–0 at the Mestalla Stadium in September 2013.[24] On 3 November 2013, Swansea lost the first Welsh derby in the Premier League to Cardiff City following a 1–0 defeat.[25] In February 2014, Laudrup was dismissed from the club after a poor run of form. Defender Garry Monk, a Swansea player since 2004, was named as his replacement.[26] In Monk's first game in charge, Swansea beat Cardiff 3–0 at the Liberty Stadium on 8 February 2014.[27] Despite holding Rafael Benítez's Napoli to a 0–0 draw in the first leg of the Europa League Round of 32, Swansea exited the competition after losing 3–1 in the second leg at the Stadio San Paolo on 27 February 2014.[28] In January 2015, Wilfried Bony was sold to Manchester City for a record sale of £25 million, with add-ons reportedly leading to £28 million.[29] This deal eclipsed the record fee received from Liverpool for Joe Allen at £15 million.[29] At the time of the sale, Bony was the club's top scorer with 34 goals in all competitions, and the Premier League's top scorer for the 2014 calendar year, with 20 goals.[29][30] Swansea City finished eighth in the Premier League at the end of the 2014–15 season with 56 points, their highest position and points haul for a Premier League season, and second highest finish in the top flight of all time.[31] During the season, they produced league doubles over Arsenal and Manchester United, becoming only the third team in Premier League history to achieve that feat.[32] On 9 December 2015, manager Garry Monk was sacked after one win in eleven matches.[33] The club, after a period with Alan Curtis as caretaker manager for the third time, chose the Italian former Udinese Calcio coach Francesco Guidolin. During the 2016–17 preseason, Swansea City came under new ownership by an American consortium led by Jason Levien and Steven Kaplan, who bought a controlling interest in the club in July 2016.[34] Chairman Huw Jenkins remained at the club.[34] On 3 October 2016, Guidolin was sacked and replaced by American coach Bob Bradley. The selection of Bradley marked the first time a Premier League club had ever hired an American manager.[35] Bradley himself was sacked after just 85 days in charge; he won only two of his 11 games, conceded 29 goals, and left with a win percentage of just 18.1%.[36] On 3 January 2017, Bayern Munich assistant manager Paul Clement agreed to take charge of the team, replacing Bradley.[37] Following Clement's arrival, Nigel Gibbs and Claude Makélélé were appointed his assistant coaches and Karl Halabi was appointed Head of Physical Performance.[38] During the remainder of the 2016–17 season, Clement led Swansea to win 26 points from 18 games, securing their survival on 14 May.[39] Only three prior teams had climbed from bottom of the table at Christmas to escape relegation, and only one prior team was able to escape relegation while having three managers during a season.[40] On 6 November 2017, assistant coach Claude Makélélé left the club to join Belgian side Eupen.[41] He was replaced by long-term Swansea player Leon Britton.[42] A poor first half of the 2017–18 season saw Swansea sitting bottom of the table after 18 league games, which led to Clement being sacked on 20 December 2017, leaving the club four points adrift of safety.[43] Towards the end of his tenure, Clement was criticised by a section of Swansea supporters for playing "boring" and "negative" football, questioning his tactical decisions with the Swans being the lowest scorers in the Premier League at the time of his sacking.[44][45][46] He was replaced by Portuguese manager Carlos Carvalhal.[47] Despite consecutive league home wins against Liverpool (1–0),[48] Arsenal (3–1),[49] Burnley (1–0),[50] and West Ham (4–1),[51] Swansea were winless in their last nine league games (losing five) under Carvalhal, leaving them in 18th place on the final day of the season.[52] During the season, chairman Huw Jenkins and the club's American owners were criticised by Swansea fans and pundits for poor transfer windows and the firing of managers;[53] Alan Shearer blamed the Swansea board for moving away from the style of play found under previous managers Brendan Rodgers and Roberto Martínez.[52] Return to the Championship (2018–present) Swansea City were relegated on 13 May 2018, following a 2–1 defeat to already-relegated Stoke City.[52] On 11 June 2018, Graham Potter was announced as the club's new manager, replacing Carvalhal.[54] On 2 February 2019, Huw Jenkins resigned as chairman amid increasing criticism over the club's sale to the American consortium in 2016 and the club's subsequent relegation from the Premier League.[55] He was replaced with Trevor Birch. The first season back in the Championship produced a 10th-place finish, including a quarter-final appearance in the FA Cup. However, Potter left at the end of the season to manage Premier League club Brighton.[56] He was succeeded by former England U17 manager Steve Cooper, with Mike Marsh joining him as his assistant.[57] In September 2019, Cooper was named EFL Championship Manager of The Month, with Swansea City sitting top of the table after an unbeaten first month; this was Swansea's best start to a season in 41 years.[58] On the final day of the season, Swansea beat Reading 4–1 to finish sixth, moving into the play-offs ahead of Nottingham Forest on goal difference,[59] but were later defeated by Brentford in the semi-final second leg.[60] At the end of the 2020/21 season, Swansea finished 4th in the league and secured a play-off place for a second consecutive season.[61] Swansea progressed to the 2021 EFL Championship play-off Final after defeating Barnsley FC 2-1 over aggregate. Swansea faced Brentford FC in the 2021 EFL Championship play-off Final at Wembley Stadium on the 29th May 2021 and was defeated 2-0. Stadium Before Swansea Town was established, children would play football on waste ground in which a plant, called "vetch" (a type of legume) was grown. The site was owned by Swansea Gaslight Company in 1912, but was deemed surplus to requirements at the Gas Company. So Swansea Town moved in when they were established in 1912.[62] City million. On 23 July 2005, The Liberty Stadium was officially opened as Swansea faced Fulham in a friendly game.[63] The Liberty Stadium capacity was 20,532 though has been increased to 20,750. The highest attendance recorded at the stadium came against Arsenal on 31 October 2015 with 20,937 spectators,[64] beating the previous record of 20,845. The stadium has also hosted three Welsh international football matches; the first being a 0–0 draw with Bulgaria in 2006,[65] the second a 2–1 defeat to Georgia in 2008 and a 2–0 win over Switzerland on 7 October 2011. The first international goal to be scored at the Liberty Stadium was a 25-yard effort from Welsh international Jason Koumas.[66] On 1 July 2012, it was widely reported in national media that Swansea City were beginning the planning phase for expanding the Liberty Stadium by approximately 12,000 seats. This plan would be conditional on a successful second season in the Premier League and could cost up to £15 million; the increase would result in a capacity of approximately 32,000 seats.[67] Later that same year, the board of directors announced that planning applications were to be put forward to the council authority, making the Liberty Stadium the largest sportsclub-owned stadium in Wales.[68] Rivalries .jpg.webp) Swansea City's main rivals are Cardiff City, with the rivalry described as among the most hostile in British football.[69] Matches between these two clubs are known as the South Wales derbies and are usually one of the highlights of the season for both sets of supporters. It was only from the late 1960s that the rivalry became marked. Before then fans of the two clubs often had a degree of affection for their Welsh neighbouring team.[70] Swansea City's other rivals are Newport County and to a lesser extent Bristol City and Bristol Rovers. However, Swansea very rarely meet Newport as they are currently separated by two divisions, while the two clubs share a mutual rivalry with Cardiff City. Swansea have won 36 of the 106 competitive meetings, compared to Cardiff's 43, who also have the biggest result between the two sides with Swansea losing 5–0 in 1965, with a further 27 drawn; to this day, neither team has done the double. Following Swansea City's promotion to the Championship, the clubs were drawn in the League Cup which would be the first meeting between both sides for nine years.[71] Swansea City won the tie with a solitary goal from a deflected free-kick taken by Jordi Gómez. The match saw sets of supporters from both clubs clash with police after the match.[72] The next two league games both finished in 2–2 draws.[73][74] However, the derby game at Ninian Park was marred with controversy as referee Mike Dean was struck by a coin from a Cardiff City supporter. In the 2009–10 season, Swansea beat Cardiff 3–2 at the Liberty Stadium in November, before losing 2–1 in Cardiff in April after a late Michael Chopra strike. With Swansea and Cardiff both pushing for promotion to the Premier League, the first derby at the new Cardiff City Stadium, and the first Cardiff win in nine meetings between the sides, was billed as being the biggest South Wales derby of all time, in respect to the league positions of the teams and how close it came to the end of the season. Despite their promising league positions leading up to the derby, neither side gained promotion at the end of that campaign, and so the South Wales derby was once again played out at Championship level during the 2010–11 season – Swansea beating Cardiff 1–0 away with a late winner from then on-loan Marvin Emnes before losing their home game due to a late strike from Craig Bellamy. Following Swansea's promotion to the Premier League at the end of the 2010–11 season, the South Wales derby was again put on hiatus. It would be two seasons before the sides met once more, this time on the worldwide stage of the Premier League. On 3 November 2013, Cardiff took the bragging rights in the first ever Premier League South Wales derby, enjoying a 1–0 win courtesy of ex-Swan Steven Caulker at the Cardiff City Stadium. The return fixture for that season took place on 8 February 2014 at Swansea's Liberty Stadium, a match in which interim player-manager Garry Monk would make his managerial debut following the sacking of Michael Laudrup. The Swans took revenge for the defeat earlier in the season with a convincing 3–0 win. The sides met again during the 2019–20 season in the Sky Bet Championship; Swansea won 1–0 in the first fixture at the Liberty Stadium.[75] Honours Swansea City's first trophy was the Welsh Cup, which they won as Swansea Town in 1913. Their first league honour came in 1925, when they won the 1924–25 Football League Third Division South title. Since then Swansea have gone on to win the League Cup once, the Football League Trophy twice and the Welsh Cup a further nine times. They have also qualified for UEFA Cup Winners' Cup seven times and the UEFA Europa League once. Swansea City's honours include the following:[76] The Football League - English second tier (currently Football League Championship) - Promoted (1): 1980–81 - Play-off winners (1): 2010–11 - English third tier (currently Football League One) - Winners (3): 1924–25, 1948–49, 2007–08 - Promoted (1): 1978–79 - English fourth tier (currently Football League Two) - Winners (1): 1999–2000 - Promoted (3): 1969–70, 1977–78, 2004–05 - Play-off winners (1): 1987–88 - Welsh Football League – Welsh Top Division (Swansea Town/City Reserves) – Record - Winners (12): 1912–13, 1924–25, 1925–26, 1933–34, 1934–35, 1935–36, 1950–51, 1961–62, 1962–63, 1963–64, 1964–65, 1975–76 Domestic Cup Competition - Football League Cup - Winners (1): 2012–13 - Football League Trophy - Winners (2): 1993–94, 2005–06 - Welsh Cup - Winners (10): 1912–13, 1931–32, 1949–50, 1960–61, 1965–66, 1980–81, 1981–82, 1982–83, 1988–89, 1990–91 - FAW Premier Cup - Winners (2): 2004–05, 2005–06 - Kuala Lumpur FA Dunhill Inter-City Tournament - Winners (1): 1984 Statistics and records Wilfred Milne holds the record for Swansea appearances, having played 586 matches between 1920 and 1937, closely followed by Roger Freestone with 563 between 1991 and 2004.[77] The player who has won the most international caps while at the club is Ashley Williams with 50 for Wales. The goalscoring record is held by Ivor Allchurch, with 166 goals, scored between 1947 and 1958 and between 1965 and 1968.[78] Cyril Pearce holds the records for the most goals scored in a season, in 1931–32, with 35 league goals in the Second Division and 40 goals in total.[62] The club's widest victory margin was 12–0, a scoreline which they achieved once in the European Cup Winners Cup, against Sliema in 1982.[62][79] They have lost by an eight-goal margin on two occasions, once in the FA Cup, beaten 0–8 by Liverpool in 1990 and once in the European Cup Winners Cup, beaten 0–8 by AS Monaco in 1991.[80] Swansea's 8–1 win against Notts County in the FA Cup in 2018 is their largest winning margin of the competition, and the largest winning margin at their home ground, the Liberty Stadium.[81] Swansea's home attendance record was set at the fourth-round FA Cup tie against Arsenal on 17 February 1968, with 32,796 fans attending the Vetch Field.[62][82] The club broke their transfer record to re-sign André Ayew from West Ham United in January 2018 for a fee of £18 million.[83] The most expensive sale is Gylfi Sigurðsson who joined Everton in August 2017 for a fee believed to be £45 million.[84][85] Kit manufacturers and sponsors European record - Swansea City's scores are given first in all scorelines. Players in bold have played for the senior team. Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality. Retired numbers Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality. Non-playing personnel Club officials On 22 July 2016, Jason Levien and Steve Kaplan led a consortium of American businessmen who bought a 68% stake in the club.[34] Notable managers There have been forty-four permanent managers (of whom six have been player-managers), and four caretaker managers of Swansea City since the appointment of the club's first professional manager, Walter Whittaker in 1912.[104][105] In the club's first season, Whittaker led Swansea to their first Welsh Cup win.[62] The club's longest-serving manager, in terms of tenure, was Haydn Green, who held the position for eight years, four months and 14 days, spanning the entirety of World War II.[106] Trevor Morris, who oversaw the most games at Swansea, was also the first manager to lead a Welsh club in Europe, qualifying for the 1961–62 Cup Winners' Cup.[62][107] John Toshack, Swansea City's most successful manager with three league promotions and three Welsh Cup wins, led the club to their highest league finish, sixth place in the 1981–82 First Division.[62] Appointed in February 1996, the Dane Jan Mølby became Swansea City's first foreign manager and took Swansea to the 1996–97 Division Three play-off final, only to lose to a last-minute goal.[62][108] In 2011, Swansea City achieved promotion to the Premier League under Brendan Rodgers, becoming the first Welsh team to play in the division since its formation in 1992.[109] During Swansea City's centenary year (2012–13), the club won the League Cup for the first time under Michael Laudrup, the first major trophy in Swansea's 100-year history.[110] References - "Premier League Handbook Season 2016/17" (PDF). Premier League. Retrieved 18 September 2016. - "Ownership statement". Swansea City. Retrieved 25 September 2018. - "Online exhibition: The City of Swansea celebrates its 40th anniversary – City and County of Swansea". Swansea.gov.uk. Archived from the original on 4 April 2012. Retrieved 17 February 2012. - "Ownership statement". 21 October 2010. Retrieved 26 November 2011. - "Swansea City fans a major influence as government encourages role of supporters' trusts". WalesOnline. 19 February 2012. Archived from the original on 16 July 2012. - Jenkins, John M.; et al. (1991). Who's Who of Welsh International Rugby Players. Wrexham: Bridge Books. p. 20. ISBN 978-1-872424-10-1. - "Nigel's WebSpace – English Football Cards, Player death notices". - "The wait ends for Lyon and Hull". fifa.com. Fédération Internationale de Football Association. 28 May 2008. Archived from the original on 12 November 2012. Retrieved 17 February 2012. - Burgum, John (2 May 1981). "Now to take Cup to Europe as well". South Wales Evening Post. Archived from the original on 13 January 2015. Retrieved 13 January 2015. - Moseley, Roy (30 December 1985). "More will follow Swansea down drain". Chicago Tribune. - McCarra, Kevin (30 May 2011). "Swansea reach Premier League thanks to Scott Sinclair hat-trick". The Guardian. London. Retrieved 31 May 2011. - "Swansea City rise from rags to Premier League riches". BBC Sport. 31 May 2011. Retrieved 24 August 2020. - "Swansea complete signing of Danny Graham". The Independent. 7 June 2011. Retrieved 24 August 2020. - "Swansea City Results 2011/12". Sky Sports. Retrieved 24 August 2020. - "Michael Laudrup named new Swansea City manager". BBC Sport. 15 June 2012. Retrieved 24 August 2020. - "QPR 0–5 Swansea". BBC Sport. 18 August 2012. Retrieved 24 August 2020. - "Swansea 3–0 West Ham". BBC Sport. 25 August 2012. Retrieved 24 August 2020. - Cadden, Phil (16 October 2012). "Swansea shine a light on how to profit from the Premier League". The Independent. London. Retrieved 17 October 2012. - Arsenal vs. Swansea City: Final score 0–2 as Michu stuns Gunners – SBNation.com - "Swansea City AFC statistics". Premier League. Retrieved 22 August 2020. - McNulty, Phil (25 February 2013). "Bradford 0–5 Swansea". Wembley: BBC Sport. Retrieved 17 May 2018. - "Swansea City romp to record win". BBC News. 25 February 2013. Retrieved 26 February 2013. - "Bony's the boy for Swans". Swansea City. 11 July 2013. Retrieved 12 July 2013. - "Swansea City humbled 10-man Valencia as the Welsh club began their Europa League group campaign in style". BBC Sport. 19 September 2013. Retrieved 16 May 2017. - "Cardiff City 1–0 Swansea City". BBC Sport. 3 November 2013. - "Swansea sack Michael Laudrup and place Garry Monk in charge". BBC Sport. 4 February 2014. Retrieved 16 February 2014. - "Swansea City 3–0 Cardiff City". BBC Sport. 8 February 2014. - "Napoli 3–1 Swansea". BBC Sport. Retrieved 6 July 2018. - "Wilfried Bony: Man City complete signing of Swansea striker". BBC Sport. Retrieved 6 July 2018. - "Swansea's Wilfried Bony joins Al-Arabi on loan until end of the season". Sky Sports. 31 January 2019. Retrieved 24 August 2020. - "10 things we learned from Swansea City's brilliant record-breaking Premier League season". Wales Online. Retrieved 6 July 2018. - "Arsenal 0–1 Swansea". BBC Sport. 11 May 2015. Retrieved 17 May 2018. - "Swansea City part company with Garry Monk". - "Steve Kaplan and Jason Levien: Meet Swansea City's US Owners". BBC Sport. Retrieved 24 October 2016. - "Swansea sack Francesco Guidolin and appoint Bob Bradley manager". BBC Sport. 3 October 2016. - "Swansea sack manager Bob Bradley after 11 games in charge". - "Paul Clement: Bayern Munich assistant agrees deal to be Swansea City boss". BBC News. 2 January 2017. Retrieved 2 January 2017. - "Homepage – Official Website of the Swans – Swansea City AFC latest news, photos and videos".. - "Swansea City survive in Premier League after Hull lose at Crystal Palace". BBC News. 14 May 2017. - "History gives hope to teams at the bottom". Premier League. 25 December 2016. - "Makelele leaves Swans – Swansea City AFC".. - "Swans name Britton as player-assistant coach". Retrieved 13 November 2017. - "Paul Clement: Swansea sack manager after less than a year in charge". BBC Sport. 20 December 2017. Retrieved 20 December 2017. "Historic league table generator". Retrieved 20 December 2017. - "Swansea fans fume at Paul Clement's tactics after defeat to Watford". HITC. Retrieved 4 November 2017. - "Are Swansea City now just boring to watch? Their problems and the actual evidence examined". Wales Online. 30 October 2017. Retrieved 20 December 2017. - "Paul Clement: I understand fans frustration but I will keep making unpopular substitutions if it means Swansea City pick up points". Wales Online. 31 October 2017. Retrieved 20 December 2017. - "Carvalhal named Swans boss". Swansea City. 28 December 2017. - "Swansea City 1–0 Liverpool". BBC Sport. 22 January 2018. Retrieved 8 February 2018. - "Swansea City 3–1 Arsenal". BBC Sport. 30 January 2018. Retrieved 8 February 2018. - "Swansea City 1–0 Burnley". BBC Sport. 10 February 2018. Retrieved 17 May 2018. - "Swansea City 4–1 West Ham United". BBC Sport. 3 March 2018. Retrieved 17 May 2018. - "Swansea City 1–2 Stoke City". BBC Sport. 13 May 2018. Retrieved 17 May 2018. - "Muddled moves and a woeful window – how Swansea landed back in trouble". The Guardian. 3 November 2017. Retrieved 4 November 2017. - "Graham Potter named new Swansea City manager". BBC Sport. 11 June 2018. Retrieved 11 June 2018. - "Swansea City chairman Huw Jenkins resigns". BBC Sport. 2 February 2019. Retrieved 2 February 2019. - "Graham Potter appointed new Brighton manager after leaving Swansea". 20 May 2019 – via. - "England under-17 coach Steve Cooper named Swansea City boss". BBC Sport. 13 June 2019. Retrieved 20 July 2019. - "Steve Cooper: That was our best performance yet". Swansea City. 15 August 2019. Retrieved 29 September 2019. - Pritchard, Dafydd (22 July 2020). "Reading 1–4 Swansea". BBC Sport. Retrieved 23 July 2020. - Doyle, Paul (29 July 2020). "Brentford v Swansea: Championship play-off semi-final, second leg – as it happened". The Guardian. ISSN 0261-3077. Retrieved 29 July 2020. - "Watford 2–0 Swansea". 8 May 2021. Retrieved 8 May 2021. - "The full history of Swansea City Football Club". swanseacity.com. Swansea City A.F.C. 15 July 2012. Retrieved 14 July 2013. - "Facts and figures of the Liberty". swanseacity.com. Swansea City A.F.C. 1 May 2012. Retrieved 14 July 2013. - "Swansea City football club: Premier League attendances". 11v11.com/. 11v11. 9 March 2016. - "Wales 0–0 Bulgaria". BBC News. 15 August 2006. - "Wales 1–2 Georgia". BBC News. 20 August 2008. - "Swansea City ready to increase Liberty Stadium capacity to 32,000". WalesOnline. 1 July 2012. - "Swansea City plans Liberty Stadium expansion". BBC News Online. 5 December 2012. - "Welsh rivals are upwardly mobile". BBC Sport. 2 April 2009. Retrieved 19 May 2009. - HanesCymru (26 July 2017). "A Supporters' History of the South Wales Derby". 100 Years of Swansea City FC. Retrieved 21 August 2017. - Dulin, David (23 September 2008). "Liberty bounces to Welsh derby". BBC Sport. Retrieved 19 May 2009. - "Fans clash with police at derby". BBC Sport. 24 September 2008. Retrieved 19 May 2009. - "Swansea 2–2 Cardiff". BBC Sport. 30 November 2008. Retrieved 19 May 2009. - "Cardiff 2–2 Swansea". BBC Sport. 5 April 2009. Retrieved 19 May 2009. - "Swansea 1–0 Cardiff". BBC Sport. British Broadcasting Corporation. 27 October 2019. Retrieved 27 October 2019. - "Honours". swanseacity.com. Swansea City A.F.C. 20 May 2013. Retrieved 14 July 2013. - Jones, Colin (2005). Swansea Town/City FC: The First Comprehensive Player A-Y. Parthian Books. ISBN 978-1902638751. - Rollin, Glenda; Rollin, Jack (1999). Rothmans Football Yearbook 1999–2000. Headline Book Publishing. pp. 354–355. ISBN 0-7472-7627-7. - "Swansea City AFC Club Record in UEFA Competitions". uefa.com. UEFA. Retrieved 15 July 2013. - "Swansea Statto.com Records Competitions". statto.com. Archived from the original on 28 May 2013. Retrieved 8 June 2013. - "Swansea City 8–1 Notts County". BBC Sport. 6 February 2018. Retrieved 7 February 2018. - Jones, Colin (2012). Swansea Town & City Football Club: The Complete Record, 1912–2012. From Southern League to Premier League (1st ed.). Dinefwr Press Ltd. p. 245. ISBN 978-1904323-26-6. - "Andre Ayew: Swansea City re-sign Ghana forward from West Ham". BBC Sport. 31 January 2018. Retrieved 7 February 2018. - "Swansea City midfielder Gylfi Sigurdsson has completed a club-record transfer to Everton". Swansea City. 16 August 2017. Retrieved 16 August 2017. - "Gylfi Sigurdsson: Everton sign Swansea midfielder for £45m". BBC Sport. 16 August 2017. Retrieved 20 December 2017. - "YOBET debuts as Swansea City's new front of shirt sponsor". swanseacity.com. 2 July 2019. Retrieved 2 July 2019. - "1961–62 UEFA Cup Winners' Cup Preliminary Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 13 November 2010. Retrieved 15 July 2013. - "1966–67 UEFA Cup Winners' Cup First Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 30 June 2010. Retrieved 15 July 2013. - "1981–82 UEFA Cup Winners' Cup First Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 5 January 2013. Retrieved 15 July 2013. - "1982–83 UEFA Cup Winners' Cup Preliminary Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 17 July 2013. Retrieved 15 July 2013. - "1982–83 UEFA Cup Winners' Cup First Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 5 January 2013. Retrieved 15 July 2013. - "1982–83 UEFA Cup Winners' Cup Second Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 17 July 2013. Retrieved 15 July 2013. - "1983–84 UEFA Cup Winners' Cup Preliminary Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 17 July 2013. Retrieved 15 July 2013. - "1989–90 UEFA Cup Winners' Cup First Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 5 January 2013. Retrieved 15 July 2013. - "1991–92 UEFA Cup Winners' Cup First Round Results". uefa.com. UEFA. 16 January 2009. Archived from the original on 5 January 2013. Retrieved 15 July 2013. - "2013–14 UEFA Europa League third qualifying round results". uefa.com. UEFA. 8 August 2013. Retrieved 22 August 2013. - "2013–14 UEFA Europa League play-off results". uefa.com. UEFA. 22 August 2013. Retrieved 22 August 2013. - "2013–14 UEFA Europa League group stage results". uefa.com. UEFA. 7 September 2013. Retrieved 7 September 2013. - "2013–14 UEFA Europa League Round of 32". uefa.com. UEFA. 20 February 2014. Archived from the original on 13 February 2014. Retrieved 20 February 2014. - "First team". Swansea City A.F.C. Retrieved 23 September 2019. - "Swansea City retire number 40 shirt". Swansea City A.F.C. 17 May 2010. - "Swansea City Contact List". Swansea City. Retrieved 20 July 2019. - "Swansea City: First Team Staff". Swansea City. 20 July 2019. - "Managers List". swanseacity.com. Swansea City A.F.C. 8 August 2012. Retrieved 14 July 2013. - Jones, Colin (2012). Swansea Town & City Football Club: The Complete Record, 1912–2012. From Southern League to Premier League (1st ed.). Dinefwr Press Ltd. pp. 1–8. ISBN 978-1904323-26-6. - Jones, Colin (2012). Swansea Town & City Football Club: The Complete Record, 1912–2012. From Southern League to Premier League (1st ed.). Dinefwr Press Ltd. pp. 109–137. ISBN 978-1904323-26-6. - "Trevor Morris". soccerbase.com. Retrieved 4 February 2013. - "Jan Molby". soccerbase.com. Retrieved 4 February 2013. - "Reading 2–4 Swansea". BBC Sport. 30 May 2011. Retrieved 14 July 2013. - James, Stuart (24 February 2013). "Michael Laudrup acclaims Swansea League Cup win as a career pinnacle". The Guardian. London. Retrieved 14 July 2013. External links - Official website - Swans Academy – Official Swansea City academy site - Swans Commercial – Official Swansea City commercial site Independent sites - Swansea City Supporters' Trust - Planet Swans - Swansea City at the Premier League official website - Forza Swansea - Latest Swansea City News and Video Archived 15 July 2019 at the Wayback Machine - Swans100: 100 Years of Swansea City AFC
https://library.kiwix.org/wikipedia_en_top_maxi/A/Swansea_City_A.F.C.
CC-MAIN-2021-25
refinedweb
6,885
75.4
view raw I am attempting to calculate the MTF from a test target. I calculate the spread function easily enough, but the FFT results do not quite make sense to me. To summarize,the values seem to alternate giving me a reflection of what I would expect. To test, I used a simple square wave and numpy: from numpy import fft data = [] for x in range (0, 20): data.append(0) data[9] = 10 data[10] = 10 data[11] = 10 dataFFT = fft.fft(data) Your pulse is symmetric and positioned in the center of your FFT window (around N/2). Symmetric real data corresponds to only the cosine or "real" components of an FFT result. Note that the cosine function alternates between being -1 and 1 at the center of the FFT window, depending on the frequency bin index (representing cosine periods per FFT width). So the correlation of these FFT basis functions with a positive going pulse will also alternate as long as the pulse in narrower than half the cosine period. If you want the largest FFT coefficients to be mostly positive, try centering your narrow rectangular pulse around time 0 (or circularly, time N), where the cosine function is always 1 for any frequency.
https://codedump.io/share/c0V9WxVWTkXD/1/sign-on-results-of-fft
CC-MAIN-2017-22
refinedweb
207
59.84
#include "global.h" #include <ctype.h> #include <pthread.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <netinet/in.h> #include <netdb.h> #include <arpa/inet.h> #include "metaserver2.h" #include "version.h" Go to the source code of this file. Meta-server related functions. Definition in file metaserver.c. LocalMeta2Info basically holds all the non server metaserver2 information that we read from the metaserver2 file. Could just have individual variables, but doing it as a structure is cleaner I think, and might be more flexible if in the future, we want to pass this information to other functions (like plugin or something) This is a linked list of all the metaservers - never really know how many we have. Definition at line 72 of file metaserver.c. References first_player, pl::next, altar_valkyrie::pl, and guildbuy::players. Definition at line 46 of file metaserver.c. References first_player, FLAG_AFK, FLAG_WIZ, pl::hidden, socket_struct::is_bot, obj::map, pl::next, pl::ob, QUERY_FLAG, pl::socket, ST_GET_PARTY_PASSWORD, ST_PLAYING, and pl::state. Referenced by check_shutdown(), and metaserver_update(). This frees any data associated with the MetaServer2 info, including the pointer itself. Caller is responsible for updating pointers (ms->next) - really only used when wanting to free all data. Definition at line 152 of file metaserver.c. References _MetaServer2::hostname. Referenced by metaserver2_init(). This initializes the metaserver2 logic - it reads the metaserver2 file, storing the values away. Note that it may be possible/desirable for the server to re-read the values and restart connections (for example, a new metaserver has been added and you want to start updates to it immediately and not restart the server). Because of that, there is some extra logic (has_init) to try to take that into account. Definition at line 171 of file metaserver.c. References _LocalMeta2Info::archbase, buf, _LocalMeta2Info::codebase, Settings::confdir, Settings::csport, FALSE, _LocalMeta2Info::flags, FREE_AND_CLEAR, free_metaserver2(), _MetaServer2::hostname, _LocalMeta2Info::hostname, _LocalMeta2Info::html_comment, llevError, local_info, LOG(), _LocalMeta2Info::mapbase, MAX_BUF, metaserver2, metaserver2_thread(), metaserver2_updateinfo, ms2_info_mutex, _MetaServer2::next, _LocalMeta2Info::notification, _LocalMeta2Info::portnumber, settings, strcasecmp(), strdup_local, _LocalMeta2Info::text_comment, and TRUE. metserver2_thread is the function called from pthread_create. it is a trivial function - it just sleeps and calls the update function. The sleep time here is really quite arbitrary, but once a minute is probably often enough. A better approach might be to do a time() call and see how long the update takes, and sleep according to that. Definition at line 516 of file metaserver.c. References metaserver2_updates(), and nlohmann::detail::void(). Referenced by metaserver2_init(). This sends an update to the various metaservers. It generates the form, and then sends it to the server Definition at line 465 of file metaserver.c. References llevError, LOG(), metaserver2, metaserver2_writer(), _MetaServer2::next, and altar_valkyrie::res. Referenced by metaserver2_thread(). Handles writing of HTTP request data from the metaserver2. We treat the data as a string. We should really pay attention to the header data, and do something clever if we get 404 codes or the like. Definition at line 351 of file metaserver.c. References navar-midane_time::data, llevError, LOG(), and nlohmann::detail::void(). Referenced by metaserver2_updates(). Updates our info in the metaserver Note that this is used for both metaserver1 and metaserver2 - for metaserver2, it just copies dynamic data into private data structure, doing locking in the process. Definition at line 85 of file metaserver.c. References count_players(), cst_tot, CS_Stats::ibytes, _MetaServer2_UpdateInfo::in_bytes, metaserver2_updateinfo, ms2_info_mutex, _MetaServer2_UpdateInfo::num_players, CS_Stats::obytes, _MetaServer2_UpdateInfo::out_bytes, CS_Stats::time_start, and _MetaServer2_UpdateInfo::uptime. Referenced by block_until_new_connection(), and do_specials(). Non volatile information on the server. Definition at line 140 of file metaserver.c. Referenced by metaserver2_init(). Metaservers to send information to. Definition at line 117 of file metaserver.c. Referenced by metaserver2_init(), and metaserver2_updates(). Statistics on players and such sent to the metaserver2. Definition at line 143 of file metaserver.c. Referenced by metaserver2_init(), and metaserver_update(). Mutex to protect access to metaserver2_updateinfo. Definition at line 44 of file metaserver.c. Referenced by metaserver2_init(), and metaserver_update().
https://crossfire.real-time.com/code/server/trunk/html/metaserver_8c.html
CC-MAIN-2022-33
refinedweb
654
53.47
On Thu, Dec 22, 2011 at 02:26:48AM +0200, Zeeshan Ali (Khattak) wrote: > On Thu, Dec 22, 2011 at 1:46 AM, Zeeshan Ali (Khattak) > <zeeshanak gnome org> wrote: > > On Thu, Dec 22, 2011 at 1:43 AM, Zeeshan Ali (Khattak) > > <zeeshanak gnome org> wrote: > >> From: "Zeeshan Ali (Khattak)" <zeeshanak gnome org> > >> > >> Breaks API and ABI on the fundamental level but lets fix this now while > >> we don't guarantee any API/ABI stability. > > > > Forgot to mention that this patch is on top of Christophe's ACK'ed but > > unmerged 'Add GVirConfigDomainSound' tree. > > And seems my patch went over the limit so it got chopped. You can > find the patch here as well: > For what it's worth, I don't think this patch improves the situation much if we can't express nested namespaces (ie put all the GVirConfigDomain* objects to a GVir::Config::Domain or GVirConfig::Domain namespace). Since it's pretty invasive, I'd lean toward not applying it, but I have no strong opinion either way, I'm fine if it goes in too. Let's see what danpb thinks about it :) Christophe Attachment: pgp7x2UVhrSlf.pgp Description: PGP signature
https://www.redhat.com/archives/libvir-list/2011-December/msg00975.html
CC-MAIN-2015-40
refinedweb
194
63.02
ani (14) Mahesh Chand(10) Shripad Kulkarni(6) R. Seenivasaragavan Ramadurai(5) Shamim Ahmed(4) Dipal Choksi(4) Catalini Tomescu(4) Mike Gold(3) John O Donnell(3) Levent Camlibel(3) Pramod Singh(2) Saurabh Nandu(2) sayginteh (2) Bill Farley(2) Dipal Choksi(2) Imtiaz Alam(2) Jayachandran Ramadurai) G Gnana Arun Ganesh(1) Vijay Cinnakonda(1) Abrar Hussain(1) Kamran (1) Mark.Tutt (1) Chris Harrison(1) Harrison Logic(1) Tushar Ameta(1) Kunle Loto(1) TimothyA Vanover(1) Ashish Jaiman(1) kamalpatel125 (1) Fred Besterwitch(1) Doug Bell(1) Michael Marasco(1) Uchukamen . Get IP Address of a Host Dec 03, 2000. The .Net DNS class can be used to get a host name or an IP of a given host name. To use DNS class in your project, you need to include System.Net Using DataList Control in Web Forms Jan 12, 2001. This example uses a DataList which contains LinkButton controls, which allows a user to navigate through a list of data. SmtpMail and Mail Message : Send Mails in .NET Jan 12, 2001. You can use SmtpMail and MailMessage to send mails in .NET. Web ProxyServer in C# and VB Feb 06, 2001. Web Proxy Server is HTTP proxy server written in C#.It is Multithreaded so many clients can access the web through this WebProxy Server. Horoscope Web Service Feb 08, 2001. An article talks about web services and how to develop them using .NET. Interactive Buttons Feb 26, 2001. By using control properties, you can give your program an interactive look as you see on web sites.. Mail Merge Program Mar 16, 2001. This is a simple mail merge program. This program reads from 3 different text files (by default) and merge all the info to produce mail documents. Visual Studio .NET: Start up Mar 20, 2001. The next version of Visual Studio is Visual Studio .NET.. FTP Server in C# Mar 26, 2001. The application is a simple implementation of FTP Protocol RFC 959.... What's in Mobile Internet Tool? Jul 05, 2001. The New Name For .NET Mobile Web Is Mobile Internet Toolkit.. Sending Mails from Your Mobiles Jul 19, 2001. Sample application shows you how to send mails from your mobile.. Using WebRequest and WebResponse classes Jul 31, 2001. Downloading and uploading data from the web has been a very common programming practice these days. Programming Mobile Forms : Palindrome Aug 06, 2001. In this sample, the author explains how to write the palindrome for mobile FORMS... Call Control in Mobile Internet Toolkit Aug 24, 2001. This only provides you to make you calls easier but also gives a nice look to your program.. WhoIs Sample Code Aug 27, 2001. Sample example shows you how to implement WhoIs..... Live News feed For Mobile Site Sep 05, 2001. Current News are read from a text file, which is being updated regularly after certain interval time.. MailIntranet: A Mailing System for Intranet Sep 10, 2001. This project is about a mailing system within a LAN environment. May it be a corporate office or a lab or any organization for that matter, this is going to be a useful one to communicate between individuals as well as departments... Web/WAP Calender form Harrison Logic Feb 11, 2002. This application provides an updateable web-based calendar that can be viewed from both the Web and through a Mobile device, such as a cellular phone.. Transcations in Web Services - Part 1 Feb 26, 2002. I believe most of the database developers are familiar with transactions. Transactions are one of the common activities a database developer has to deal with. Transactions are also used in Web Services.... Musical Teacher Web Service Mar 04, 2002. A number of people / organizations have developed web services that are available for free to use.. Space Remover Utility Mar 25, 2002. In this article, I want to show you how you can remove white spaces in a web page. . Returning a DataSet From a Web Services: Step-by-Step Apr 01, 2002. This Example Shows how to create a Web service which returns a DataSet object and a Client which displays the DataSet in a Grid. Simple NSLookUp Implementation in C# Apr 01, 2002. This is code implementation for simple nslookup. As you can see from the code listing, I've used classes defined in the System.Net namespace. IP LookUp Program in .NET Apr 03, 2002. This is an IP look up program that uses C# Windows Forms and IPHostEntry to resolve the DNS request. IP Address Hostname Convertor Apr 04, 2002. This is an IP Address-Hostname converter written in C# Windows Forms. About Web-Sites.
http://www.c-sharpcorner.com/tags/Web-Sites
CC-MAIN-2016-30
refinedweb
776
68.26
Agenda See also: IRC log <fsasaki> waiting for people to come <DomJones> Scribe: DomJones Felix: Discussing presentation to HTML WG, Frederick showed example use-case in OKAPI. Large group, many issues, individual feedback is more likely from HTML WG. <fsasaki> Felix: Dave, Dom, Leroy have to leave at 1 which topics do we need them for. Morning agenda (until 13.00) is fine, XLIFF discussion will be held at 3pm. Implementors slot is from 2-3pm. … Starting goToMeeting Naoto and Tadej join us on the goToMeeting. <fsasaki> … We just had a meeting with HTML group, no specific outcome but connection made between the WG. 1st topic is ITS tool discussion. Dave Lewis: Background to this - came from MT confidence score generalised out to other data cats. Solution applies to any data cat. Some cat's will contain confidence, quality, disambig which may be different for every element / span. Most likely that they use one value for tool information. Large overhead for replication. Main example was proposal for MT confidence score. … You may have default done with one tool other certain sections done with other tool. Global rules therefore cannot be used. We need with a seperate data cat or a certain mechanism. Suggestion to use trick used for standoff markup, not a data category, contains tool information referencing part of that tool element aligned to data category. … Allows element referencing across a document. … Could be over-written by element further on in document. Dave Lewis: Yves had proposed some text but in order to take furthur we needed to look at its application to other data cats. I have looked at the ITS tool text and give examples on current proposal for relevent data cats. … have done this for MT confidence and Text A Annotation. If we use this mechanism as a general purpose mechanism which seems to work fairly well. … you end up with data cats which only have a local attribute (such as MT score) this combined with top level references for tool informaiton. … looking at definition of text A annotation you end up with nearly the exact same pattern. … have received comments back from Marcis on MT confidence score. David F: Tools should be made mandatory on more data categories. Loc Qual Score (Precis) should be a candidate for mandatory application of this. The same for Text A Annotation. David Lewis: Have reduced it right down to local selectors, not applicable to global. Felix: In your presentation you state "not define external format" this is not clear in the draft. You just have a URI. Dave Lewis: We're probably a bit too generalised when we talk of having a score for Text A Annotation.This could have been used for Disambig, Terminology, Domain. The way we phrase that allows several different data categories where the score is not different from the process it relates to. … Could have general purpose score attribute. … MT confidence score, disambig, domain, Terminology. Would this need to be more open ended? … feedback from Tadej Tadej: One thing which would be good to have is relation of each instance to a score of the data category it relates to. You can parse up the tree and see which data cats are produced by which tool. Same text by terminology tools and Text A tools at the same time. So could we direct tool-info at every node? Dave Lewis: That is what we were trying to avoid with tool-info with a mechanism for global declaration. Which ITS data category annotations it applies to. <fsasaki> its:toolsRef="MTConfidence| Disambiguation|" … For the element you are applying the declaration to you are saying all of the data categories in that element were generated by a specific tool. Different disambig / tools need to be applied element by element. Worst case scenerio is every element being done by a different tool but we dont think this is a common situation. Felix: The example pasted above, is this what you mean? <fsasaki> also, tadej, is that the functionality you need? Dave Lewis: Yes, gives flexibility for possible declaration of every markup. I was interested to hear the feedback from others as to whether we need different annotations for text annotations, domain and terminology. Marcis: If you dont look-up all instances in a term base, but use extraction method for term-candidates you have the confidence. Further you can fine-tune processes based on the confidence. … allows users to decide precision and recall which allows fine tuning of systems. <tadej> fsasaki: this is expressive enough, but may be verbose for content which was annotated for multiple data categories - it boils down how easy it is to relate every its-ta-confidence instance to the tool it was produced by, where there are many tools in the mix Dave Lewis: Had been starting to think about this for demo systems. Enricher run over text inserts alot of annotation which may well result in false+ … How much do we know about the processes applied to annotations? … thresholds need to be added. tadej: One solution to avoid verbosity is annotation of the tool at the top-level of the document. Produce one annotation on the root applied to all elements below. Dave Lewis: ITS tool essential does that but data category is bound to particular tool. Mark-up addresses that, we're taking that a step forward to text analysis annotation. Is this "at risk"? Felix: No, "at risks" means that feature is clearly defined. To do that we have only three weeks left. Dave Lewis: If we happy with how we operate ITS-tools we need to look at how we insert these data cats for Text A Confidence score and for MT confidence. 1-2-1 matching to data categories. For complicated like disambig are there more than one confidence score depending on entity, lexical mapping, etc. Do we need to be more fine grained in the confidence score there? … There is the overview, questions on wording etc, my feelings are that it seems to work on those data categories and the knock on effect of combining confidence scores into one data cat. … Im looking for people interested to give us feedback now. Im happy to continue to editing these but looking for feedback from Marcis, Tadej, David F, Ankit. Felix: 3 weeks is tight. Lots of test suite work needed in this period. Suggest all those interested in this to look into this today (2nd Nov). We will discuss again on Monday and try to fix it completely so the other timeline is not effected. If something comes up on Monday we have another week but we need feedback by Monday on this. <fsasaki> tadej, it seems we lost you on gotomeeting Dave Lewis: Suggestion has a few typos etc, can people look at that. Example annotation provided, some what editorial but we have some examples as to how it works with different data categories. <tadej> fsasaki: reconnecting - the audio suddenly went silent. felix: This has an impact on test-cases, needs to be in the test suite. <fsasaki> tadej, would that be a mandatory for text analytics? asking also because of test suite etc. David L: We have it as a general mechanism would not make sense with a number of other categories. David F: Unless I know the value / profile of the score it provides nothing. Felix: Are there tools which produce this score out of human annotation. … where scores are provided based on reviewing but a human. <tadej> fsasaki: what exactly are you referring to as mandatory? the confidence score mechanism, or the tool reference mechanism? David F: Score is an orthogonal feature. felix: For MT we have MT-confidence, what other data categories tool would produce that? <fsasaki> tadej, I meant whether the tool mechanism should be mandataory for implementors of text analysis annotation Pedro: Any LSP would produce score for themselves. In scenerios client request quality audit on content we produce or by 3rd party. Important point - before quality audit you set the methodology otherwise the audit is not valid at all. <fsasaki> that is, for implementors of a score for text analytics Pedro: different LSPs have different metrics based on revision, type of errors, severity and generates a score. David F: Without a methodology you cannot produce score. May be better to call is "quality calculation score" etc. Felix: Precis is currently at risk without this methodology. Tadej: Should this be mandatory? I think that without knowing what produced the output it is hard to say anything about the score. Which scores are comparable is hard to identify. Dave L: We were talking about having a url that points to the info, its a url of an element within that process info element. The q: we refer to this process info element without stating what the schema is but we state what the element is. Difference is having a url that points to anything vs. not defining the schema. In XML its fine can point to external or internal element. But in HTML we need to specify how the url references a url in that script.? Pedro: This can be used by a client where a ref is used Score is normally a relative value. You say if the threshold is X and whether the content can be part of the profile ref. Dave L: SMT gives the case where you are indexing the training data to diff MT engines. No way to classify that we understand at the moment. You may end up defining a MT by a description of the MT egine. Pedro: Not many impls as its hard to get that score automatically. Dave L: Self-generating score only used for comparison between the same engines. . I will send Dave L some comments. … raise another point: In Dave's proposal mechanism for Text A Annotation can only be applied to ITS data cats and not non-ITS data cats. Is this something we would like to open up? Dave L: Not sure on that wording, really a scoping thing. Could be applied to meta-tags in HTML but this stretches scope of Impl. Suggest we delete that. Unless others have a specific use-case. Tadej: When not used on ITS elements meaning is undefined. Dave L: Can you email me that and I'll add it to the document. David F: 2mins. I need to fix logistics for XLIFF meeting. Does everyone want to use it or is it a breakout? Felix. Timing it needs to be 3pm. … 3-4pm xliff mapping meeting … may make sense to have everyone here to review action items. Move this to 4pm. … updated agenda … Tadej / Noato will you join us this afternoon? … Tadej no. Felix: Propose we adjourn at 3pm and the XLIFF meeting can follow. <fsasaki> self-introduction of participants <fsasaki> <fsasaki> Felix: HTML session introduced MLW-LT group. Would be good to get feedback on a number of issues. Info share meeting with L10n w3c group. 2 items are relevant for you. Directionality and ruby information. … values are given for directionality and ruby information. … what is here is from the ITS 1.0 spec without changing anything. … Times have changed for directionality there are new attributes, Ruby has a different ruby model than XHTML. So how do we proceed? … We are aiming to make people aware of what is possible for directionality and ruby. Would be great to get your feedback. We refer to what is being done for these 2 data cat in HTML5. … For those using XML based examples the best thing would be for them to use the HTML namespace. However if not possible these elements could be defined in the ITS namespace. … 1 other question: There is no rendering or processing here involved which is hard for testing relating activities. Should we just refer to these other places, would be good to get your feedback. r12a: Do we need to maintain backwards compat with ITS1.0. Felix: Its not straightforward, a break may make sense. Not sure it would break anything in content or applications. If we need to break this backward compatibility we need to discuss this in the group. <fsasaki> (sec 6.5.1) r12a: ITS describes concepts that need to be supported for internationalisation. Key thing: Express the concepts that need to be supported in the markup. One thing you missed at the HTML5 WG on bi-di, which you will not have heard. … We started describing how to use HTML5 for bi-di. bdi element and "auto" value ?? … they isolate certain text for dbi where you have text in HTML and it interferes with stuff around it. Not only are problems with dropping text into HTML but for bdi in general. Direction can be assigned to text but can also isolate that text in plaintext. People are encouraged to use those control codes as opposed to existing methods. … The CSS working group has retrofitted those ideas into the CSS model. Looking for HTML WG to add two extra values to the DIR attribute. Isolation is really important in bdi. Dir = LTR / RTL is to be avoided in replacement of new bdi attributes. … proposed extension to HTML that would be retrofitted into HTML5 during the CR phase (2014). Major shift, all fluid, many questions remain. Felix: Could we point to the HTML5 spec for directionality. r12a: May not yet be in HTML5 by the time ITS2.0 is published. Fantasai: Seems you have some values not already in HTML5. Given this it makes sense to add values here, not worrying about what HTML5 is doing. I dont think its a concern to sync this feature with HTML5. r12a: May be a problem as ITS 2.0 is looking to inform on how bdi is used in HTML5. Jirka: I think its no problem as we are providing mapping from HTML model to our model. So its not too much of a problem to add two new additional values to ITS. … we can just extend our mapping from HTML5 Felix: People involved in XLIFF may have more information. At LocWorld support of bi-direct support in XLIFF was discussed. We are trying to copy the HTML5 model. That may be one area where they may want more than guidance. They are near feature freeze, David can you comment? David F: Bi-direction support was added to draft. Feature freeze informally before christmas / mid-january. Not trying to mimic HTML. In XLIFF 1.2 unicode control chars were being used. No Auto value in current draft, only LTR, RTL on structured or inline elements. … With have (in XLIFF) structural and in-line, not global and local. They are not overlapping. … current draft can be influenced. If this should be changed it could. ITS to XLIFF mapping call today. … important as its a major release, breaks backwards compat, future releases (minor) will not change back-wards. … No ness about attributes it would be about processing requirements. Very little processing req. If you have input for proc req then its the right time to influence the XLIFF group. r12a: Likely to change. We have documentation on bdi. Inline took line with minimum markup. New docs influence the way people write bdi. Every word that changes is surrounded with markup. Its a shift from previous approach. David F: Even things which would be already considered very local in XML are very structured in XLIFF. Felix: Should we continue this now or table for 3pm? One aspect to what Richard said from the beginning. ITS document provides to people the right thing to do, therefore XLIFF people could be directed to this. r12a: isolate and automatically guess / assign directionality are given by bdi. You can start a span of plaintext by LTR or RTL. David F: Is the auto approach a good idea to have in localisation. Norbert: There is an overlap between your work and my work. There is aneed from ITS to work with Reg Exp with JavaScript. Are there other req where you define things that would be interpreted in Javascript. <fsasaki> norbert: reg exp where ITS defines req exps interpreted by Javascript. WE have to improve unicode support in reg exp so your functionality would work. Are there other features of ITS that need support in Javascript. … you may be relying on other features which is not yet supported. … if nothing comes to mind right now we are also looking for future input. Felix: This is the main case where this may be used in Javascript. I dont see it so much in other data cats where this may be applied. <fantasai> RRSAgent: pointer Felix: ITS 2.0 moves to LC at end of Nov. I will send this to you guys for review as to whether you think this is the right way to be phrased. I need to talk to W3C about back-wards compat with Directionality and Ruby. ITN group is busy but a heads-up another call will be coming in Nov. We'll take it from there. r12a: What does MLW-LT think about bdi and ruby / directionality. Jirka: Im worried that the HTML spec was changed recently and this has not been integrated into the spec yet. How to handle more complex cases in Ruby etc. We should use same mark-up on ruby as taken by the HTML but do we have time. r12a: Ruby supoprt in 5.0 HTML, and isolation support. THe problem is that 5.0 wont be finished before your spec if finished. Jirka: As long as ruby is stable in HTML 5.0 but I'm not sure on this. David F: Allowed to use normative references, are they in the right state? felix: We need to develop our testing and be part of our LC draft. This doc provides guidance to do the right thing, rather than having a normative definition. David F: Data cats from 1.2 have moved from 1.0. If the category is now in HTML would the right thing to say its no longer in our scope as its in the HTML WG scope. Felix: We still need to give guidance, albeit non normative Jirka: Currently we try to copy what HTML is doing. What was in ITS 1.0 we used XHTML base elements which were dropped. Would be strange to add ruby in 2.0 to find it was later added to HTML. r12a: Brainstorming… In data cat world generic terms can be described in prose. What currently being done in HTML5 in terms of markup. Enables test in CR based on current HTML5 spec. felix: Ruby tests are rendering based. We currently have no browser / render based impls which means group cannot provide the tests. People who provide normative usage are not in this room. We need to agree upon this in the WG. We cannot get from this group a normative and testable definition. r12a: So this info should be in the spec but non normative. felix: Yes, this also gives us more time. For example Nov 2013. r12a: What would you say in this non-normative text. felix: Currently state nothing but that this will be back-filled in final draft. Provide placeholder for text and move forward after LC. We could then work together to fill in spec. <fsasaki> scribe: fsasaki dom: we have the opportunity to write what is happening next year ... if we provide it non-normatively now david: non-normative means that you don't use the words MUST, SHOULD etc. and you don't need to do tests richard: an application does not need to test things for conformance ... you could not guarentee that XLIFF will have "placeholders" for bidi stuff <DomJones> scribe: DomJones felix: Group based on EC funding and therefore time limited. time extension are not a possibility which gives us a strict timeline on this. … any other thoughts from those here? Fantasai: Asks for clarification, richard said your providing recommendations on how mark-up should be applied to content. felix: these recomendations are created based on inputs from ITN working group. So others can look at how directionality / ruby works. … We're looking at how it works in HTML, guidance, not a normative feature. Not replicating what is done normatively in the HTML spec. Fantasai: Looking at how to take HTML standards for localisation and applies to other pieces of data. Would not suggest using approach taken in HTML. 2 things: XHTML model and current HTML model and not sure how it will look in future models. felix: Its a moving target Fantasai: Should have one attribute for directionality. Not the same as replacing with bits of HTML5. felix: placeholder is a good agreement. Jirka: Good to represent all values in directionality. Felix: Who would test this normative features? Hoping we dont define normatively as there are no test cases. David F: Normative should be tabled for 2.5 or 2.1 ITS. … they are unstable elsewhere so what can we actually do? Fantasai: XML dir attribute with clear semantics. Have all RTL, LTR, etc, all applied to one attribute. As opposed to multiple attributes. Which maps to bdi algorithm using X and Y. Felix: We can create such guidance. Is there someone from the LTN group who would like to help us with this? Fantasai: Aharon Lanin from google would be a good person for this Felix: And if he is not avliable? r12a: Email us and we'll help you with this. felix: Normative and non-normative (guidance) are our options. r12a: From ITS conception we need to specify what information is needed anywhere to support ruby and directionality. Direction, isolate, RTL etc. This was applied in a number of formats, DocBook etc. What I think Im hearing is we could do this generic stuff but to get through CR phase you need to test these things. If you cant map this, you can't test. Felix: We have three weeks. Whether testable or not. Three weeks to stable draft. As soon as its normative deadline is three weeks away. Jirka: Maybe go too deep into functionality. Felix: If it is normative it is not done. You need to assure rendering, impl. Jirka: Displaying, rendering is a problem for styling. Felix: Who here is implementing Directionality and Ruby? Currently there are no testing provided for this. If its normative you need an assertion that it is tested. Jirka: Different case as it was in ITS 1.0, if you drop it you miss backward compatability. Fantasai: Are you defining technology or a spec for others or guidance for others to define technology. Felix: Except for Ruby and Directionality technology. Hence proposing drop these. ... There are features we used to test Ruby and Directionality in 1.0 which use XPATH not used in HTML. Norbert: Why are we even talking about these if they are not being used. r12a: Important for spec but not being implemented. ... I would strongly support it being non-normative rather than not having it there. Issue about stability as opposed to whether it is need it or not. Felix: Would it work if I re-draft current sections, send them to you, with placeholders you can see and whether it makes sense for LC draft? Would that be ok? At the actually LC we have another opportunity to update. <scribe> ACTION: on felix to draft the ruby and directionality sections See [recorded in] <trackbot> Sorry, couldn't find on. You can review and register nicknames at <>. <scribe> ACTION: on fsasaki to draft the ruby and directionality sections See [recorded in] <trackbot> Sorry, couldn't find on. You can review and register nicknames at <>. action on felix to draft the ruby and directionality sections See <trackbot> Sorry, couldn't find on. You can review and register nicknames at <>. <scribe> ACTION: on felix2 to draft the ruby and directionality sections See [recorded in] <trackbot> Sorry, couldn't find on. You can review and register nicknames at <>. <mhellwig> scribe: mhellwig fsasaki reviewing agenda yves: what do we return the lowercase value or the original value? <scribe> ACTION: pablo to talk to Lucy about casing issue [recorded in] <trackbot> Sorry, ambiguous username (more than one match) - pablo <trackbot> Try using a different identifier, such as family name or username (eg. pnietoca, pbada) <scribe> ACTION: paolo to discuss casing issue with Lucy Software [recorded in] <trackbot> Sorry, couldn't find paolo. You can review and register nicknames at <>. <pnietoca> ACTION: pnietoca to discuss casing issue with Lucy Software [recorded in] <trackbot> Created ACTION-273 - Discuss casing issue with Lucy Software [on Pablo Nieto Caride - due 2012-11-09]. fsasaki: domain pointers can have v. long XPATH expressions. Absolute location paths would make it shorter. jirka: write location path for XPATH. Allow absolute and relative location paths. fsasaki agrees jirka: we should rewrite the specification completely. action: fsasaki to edit specification to resolve location path issue Marcis: to analyse for terms, you have to break down the document ... you need to do the analysis several times, for different domains ... unguided term annotation: annotations are made with confidence scores ... a second way is to recognise terms in term base and annotate. there you don't have confidence. either you have the term in your term or you don't ... question arises: how do we add tool info? ... and another question for group: what happens when external rules are not available? fsasaki: we have linked global rules. the conformance section we say that systems must process these rules dave: we haven't defined what happens if it breaks down. we suppose a 'best effort basis' Marcis: translate data category defines what should be analysed. This does not exist for other data categories. ... do we need a definition? dave: there isn't a definition. it hasn't come up. [in case of annotations] you just do it for the whole document. ... no new mechanism is needed. maybe we need a discussion about this Marcis: for translation it's more critical. annotation you add, you don't replace anything dave: you could have false positives and then there's cost for going through and cross these false positives out. it will just raise cost Marcis: does global override local? dave: yes Marcis: terminology is not to be inheritable, but what about this case [discusses example in his notes] fsasaki: we are looking at an example with nested elements around which there is a term annotation around it ... need additional item information for each element. Marcis: would it be the same in disambig? fsasaki: yes, disambig is not inherited fsasaki: [to tadej] would enrycher support nested elements tadej: it's possible, i don't see why it wouldn't be. we're safe here. Marcis: but you cannot do it locally? tadej, fsasaki: yes you can Marcis: what about overlapping annotations fsasaki: won't solve Marcis: also agent and tool information. will not go into detail at this point ... there are a lot of very finegrained usages in agent provenance dave: we now have a standoff mechanism, so does it makes sense to have a tool which says provenance type = ??? ... which would make the metadata definition easier ... so i tihnk it's a good idea to implement a standoff mechanism action: dave to write an email to fsasaki who will integrate this into the spec action: fsasaki to integrate dave's email about standoff mechanism for provenance into spec <fsasaki> Marcis: language will fall back to language "english" as a fallback <fsasaki> .. in MT it is important that you know to which language you are translating <fsasaki> Ankit: difference between language is not ideal <fsasaki> David: not an issue of ITS <fsasaki> Marcis: sure, just a comment <fsasaki> David: any industry implementation does mapping anyway <fsasaki> .. mappings are possible, e.g. to map any English into your English <fsasaki> Marcis: yes, like reading the 1st two characters <fsasaki> David: yes fsasaki: we need to do some event planning fsasaki shows dates and events listed in excel document fsasaki: F2F in January, workshop in Rome [March] pedro: going to Gala. fsasaki: next up F2F in april. tadej, is that good for you? everybody, pleaes check your calender tadej: I checked with hotel, availability end of april and a few times beginning of May ... May better to climb ...?? fsasaki: would 7th and 8th of May [2013] work? agreement from group ... great. let's have the F2F meeting then. ... what about locworld. anybody going? dfilip: we are thinking about FEISGILTT in London as we had good attraction and follow-up in Seattle so London should work well. pedro: will probably go and Lucy Software will also be there (at locworld) ... I think we should submit as much as we can. not just be there dfilip: we may have entry to the main programme through feisgiltt. pedro: most important that we showcases are already running, we have to show them fsasaki: any other events to showcase pedro: I propose an F2F in Madrid in July ... I'll check if the university is available fsasaki: any other events you may present mhellwig: DrupalCon at the end of March fsasaki: [to dave] can you present at XML prague? dave: ??? dave: also, world wide web conference. submission deadline soon. 13th-17th May 2013 ankit: September 2013 I'll go to MT summit user track dfilip: LRC conference. Around 20th of september 2013 fsasaki: time to close. anything else? when is the drupalcon mhellwig: there's two. we'll go to one at least fsasaki: where should the final event be? jirka: when is it supposed to be. Oct, Nov, Dec 2013? fsasaki: yes around there, depending of location availability etc. it's supposed to be our largest workshow. We have a lot of implementations already, so the critical one will be Rome. <fsasaki> <fsasaki> <fsasaki> close action-231 <trackbot> ACTION-231 Create tests for its:param closed <fsasaki> close action-255 <trackbot> ACTION-255 Determine and correct wording for ISSUE-34 closed <fsasaki> close action-258 <trackbot> ACTION-258 Ask XLIFF TC what best practice of mapping ITS into a namespace in XLIFF closed <fsasaki> action-268: see <trackbot> ACTION-268 Make sure that schedule for test suite and schema update discussed at is taken into account notes added <fsasaki> close action-268 <trackbot> ACTION-268 Make sure that schedule for test suite and schema update discussed at is taken into account closed <fsasaki> <fsasaki> ACTION: felix to send info about call time [recorded in] <trackbot> Created ACTION-274 - Send info about call time [on Felix Sasaki - due 2012-11-09]. <fsasaki> action-270: done via <trackbot> ACTION-270 Ask phil and des and arle about need and implementation committment for localization precis during next call notes added <fsasaki> close action-270 <trackbot> ACTION-270 Ask phil and des and arle about need and implementation committment for localization precis during next call closed <fsasaki> action-271: dublicate of action-273 <trackbot> ACTION-271 Add a step regarding the lowercasing of the domain data category notes added <fsasaki> close action-271 <trackbot> ACTION-271 Add a step regarding the lowercasing of the domain data category closed <fsasaki> close issue-52 <trackbot> ISSUE-52 Domain in HTML5 closed <fsasaki> "[Ed. note: Following schema example has to updated once we have final XSD schema for ITS 2.0]" - drop example and note <fsasaki> "[Ed. note: All selector related definitions has to be update to reflect queryLanguage]" - some data category definitions refer to XPath expressions; need to generalize that to refer to "relative or absolute selector" <fsasaki> "[Ed. note: Need to reevaluate above statement related to ODF.]" - remove paragraph above the note, that's it <fsasaki> "The entity type follows inheritance rules." - delete the sentence? came back to Tadej <fsasaki> "[Ed. note: Below note is taken from the quality issue data category. ..." - can be deleted <fsasaki> "[Ed. note: Should locQualityIssues also be defined for global rules? It seems not to be specific to local.]" - not decided yet <fsasaki> yves: having a generic container that is nice <fsasaki> ACTION: yves to summarized "one container name" proposal again [recorded in] <trackbot> Created ACTION-275 - Summarized "one container name" proposal again [on Yves Savourel - due 2012-11-09]. <fsasaki> "[Ed. note: Missing the local mtconfidencescore attribute.]" - to be done after or during tool definition update <dF> Scribe: Milan <dF> Chair: dF Richard and Koji are with us, for bidi and Ruby to discuss <Yves_> see also section on bidid in draft of XLIFF 2.0 Most of implementations are in XLIFF 1.2, version 2.0 is currently under construction Mappings are similar (structurally) Let's start with Directionality (then Ruby) dF: Inline doesn't feature to cover those ... XLIFF proposal for directionality in 2.0 Yves_: Any inline element (including <mrk>) has attribute for directionality <Yves_> See Bidi section here: dF: Masking vs. <mrk> - explaining difference r12a: HTML5 includes bdi attribute provides isolation mechanism ... HTML WG to provide a new value (Auto), decided directionality based on first strong character <dF> <scribe> ACTION: dF to send XLIFF 2.0 spec to Richard [recorded in] <trackbot> Created ACTION-276 - Send XLIFF 2.0 spec to Richard [on David Filip - due 2012-11-09]. dF: There was never mechanicsm like Ruby in XLIFF ... can be provided as a context ... fs can help(?) ... XLIFF is a transport format, not resolved displaying issues. Depends on tools how the content is displayed Continuing the XLIFF Maping Table (r12a and Koji left) Translation Agent Provenance skipped, not Dave Text Analysis Annotation skipped Target Pointer drives an extraction, there is nothing to represent Id Value as a resname in 1.2, no equivalent in 2.0 dF: Yves to propose rename on unit in XLIFF 2.0 ... it doesn't have any sense to have ID value for inlines (remove questionmarks) Preserve Space solved at segment level (xml:space) but not for inline dF: could be used in sub-flow Localization Quality Issue, hold till call with XLIFF committee at Nov 6th Localization Quality Précis dF: We need a mechanism to reference an Agent ... who provided quality check MT Confidence Allowed Characters dF: Do we need it for inline? Yves_: Yes, example might be Login name restriction Storage Size, issue only in 2.0 dF: push harder to have <mrk> extensible ... We stabilized what was possible <scribe> ACTION: dF To color-code cells in Mappings table dependent on unstable ITS categories or in XLIFF [recorded in] <trackbot> Created ACTION-277 - Color-code cells in Mappings table dependent on unstable ITS categories or in XLIFF [on David Filip - due 2012-11-09].
http://www.w3.org/2012/11/02-mlw-lt-minutes.html
CC-MAIN-2016-40
refinedweb
5,770
67.04
For loop is used to iterate over any iterable object, accessing one item at a time and making it available inside the for loop body. For example, if you want to create a drop down of countries in Django template, you can use the below code. {% for country in country_list %} <option name="{{country}}">{{country|title}}</option> {% endfor %} See the demo here: for is an inbuilt tag in Django template and it needs to be closed using endfor tag. To iterate over a dictionary of people's name and their age, just like you would do in Python, use below code. {% for name, age in data.items %} Name: {{name}}, Age: {{age}} <br> {% endfor %} See the demo here: Objects like data and country_list will be passed to the render function while rendering the template. return render(request, 'appname/template_name.html', {"data":data, "country_list":country_list}) Let's say you want to display new messages to logged in user. You fetched all the new messages from the database and stored them in a list and passed to render function along with the template. Now you can either check if the message list is empty or not and then display the message accordingly. Example: {% if messages %} {% for message in messages %} {{ message }}<br> {% endfor %} {% else %} <div>No new message for you</div> {% endif %} Or you can use {% empty %} tag along with {% for %} tag as below. {% for message in messages %} {{ message }} {% empty %} <div>No new message for you</div> {% endfor %} That might be a piece of bad news for you. There is no break statement in Django template For loop. Depending on your requirement you can do one of the following. Option 1 - Iterate over the whole list but do not perform any action if the condition is not matched. For example, you are printing numbers from a list and you need to exit the list as soon as number 99 is encountered. Normally this would be done as below in Python. for number in numbers: if 99 == number: break print(number) But there is no break statement in Django template For loop. You can achieve the same functionality (almost) as below. {% set isBreak = False %} {% for number in numbers %} {% if 99 == number %} {% set isBreak = true %} {% endif %} {% if isBreak %} {# this is a comment. Do nothing. #} {% else %} <div>{{number}}</div> {% endif %} {% endfor %} Option 2 - You can create your own custom template tag. You can iterate over a list in reverse order using below code. {% for member in member_list_score_wise reversed %} {{ member }} <br> {% endfor %} If you want to print the sequence number before the item being printed, you can use forloop.counter variable. {% for member in member_list_score_wise reversed %} {{forloop.counter}}. {{ member }} <br> {% endfor %} 1. John 2. Mac 3. Tony See the demo here: Similarly, you can use the below variables: forloop.counter0 - current index when started with 0 forloop.revcounter - index from last of the loop, started with 1 forloop.revcounter0 - index from last of the loop, started with 0 forloop.parentloop - parent loop index in nested For loops forloop.first - return true if the current item is the first item of list forloop.last - return true if the current item is the last item of the list Sometimes you just need to run a loop N number of times. In such cases item at the current index doesn't matter. In python you would use range function. But again, there is no range tag or function in Django template. You can use either one of the below approaches. Option 1 - Pass the list to render function along with template. render(request, template.html, {"numbers": range(100)}) And in template {% for number in numbers %} {{number}} {% endfor %} Option 2 - You can use the below code to emulate the range function. {% for item in "x"|ljust:"100" %} {{item}} {# or do anything 100 times #} {% endfor %} ljust left-align the values in a field of the given width (100 in this case). So "x"|ljust:"10" will be "x ". So basically you a string of length 10 with 9 spaces in it and 'x' as the first character. Now you are iterating over this string one character at a time.
https://www.pythoncircle.com/post/685/for-loop-in-django-template/
CC-MAIN-2020-40
refinedweb
685
72.66
This patch fixes the last table-related bug I know of: sometimes a cell would overflow to the right of the visible area (even if line wrap is requested). It is a ripe time to remove that silly --enable-nested-tables option. First of all, it looks like the EXP_NESTED_TABLES #ifdefs are put "randomly" in the C code - they have very low correlation with table-in-table code... Second, it is not proper to disable a feature on the basis that it is usually present in the pages *simultaneously* with markup which was giving lynx fits because of absolutely orthogonal reasons. Especially if the fits are not triggered any more. ;-) (Disabling table-in-table stoped lynx from parsing "more complicated" tables - which usually contain not only embedded tables, but other hairy stuff.) Enjoy, Ilya --- ./src/TRSTable.c-pre Thu Apr 19 15:27:20 2001 +++ ./src/TRSTable.c Sun Apr 29 00:18:08 2001 @@ -1931,7 +1931,9 @@ PUBLIC int Stbl_finishTABLE ARGS1( } } #endif - return me->ncols; + /* need to recheck curpos: though it is checked each time a cell + is added, sometimes the result is ignored, as in split_line(). */ + return (curpos > MAX_STBL_POS ? -1 : me->ncols); } PUBLIC short Stbl_getAlignment ARGS1( ; To UNSUBSCRIBE: Send "unsubscribe lynx-dev" to address@hidden
http://lists.gnu.org/archive/html/lynx-dev/2001-04/msg00092.html
CC-MAIN-2013-20
refinedweb
208
61.06
#include <mmsthreadserver.h> This class includes the base functionality e.g. the handshake between server and client threads. You can use the onProcessData() callback if you do not want to derive your own class from MMSThreadServer. Definition at line 46 of file mmsthreadserver.h. constructor Definition at line 38 of file mmsthreadserver.cpp. destructor Definition at line 62 of file mmsthreadserver.cpp. server thread Definition at line 77 of file mmsthreadserver.cpp. Start the server thread. This method starts the server thread. This has to be done before the first trigger() call. Reimplemented from MMSThread. Definition at line 69 of file mmsthreadserver.cpp. Process a new event from the client. Reimplemented in MMSFBBackEndInterface. Definition at line 127 of file mmsthreadserver.cpp. Trigger a new event to the server. This method sends data (in_data) from the caller (client) thread to the server thread. If MMSThreadServer runs in blocking mode (see constructor), the caller of trigger() will be blocked and is waiting for the answer from the server. If MMSThreadServer runs in non-blocking mode, the caller of trigger() will get control immediately and do not wait until server has finished processData() of the previous trigger() call. The handling of out_data and out_data_len is dependend on the implementation of the processData() method which is done in classes derived from MMSThreadServer. If MMSThreadServer runs in non-blocking mode, out_data and out_data_len are not supported. Definition at line 131 of file mmsthreadserver.cpp. id of the server thread Definition at line 49 of file mmsthreadserver.h. request queue (ring buffer) Definition at line 68 of file mmsthreadserver.h. number of items in the queue Definition at line 71 of file mmsthreadserver.h. current item in the queue (read pointer) Definition at line 74 of file mmsthreadserver.h. first free item in the queue (write pointer) Definition at line 77 of file mmsthreadserver.h. mark the ring buffer as full Definition at line 80 of file mmsthreadserver.h. variable for conditional handling (server side) Definition at line 83 of file mmsthreadserver.h. mutex for conditional handling (server side) Definition at line 86 of file mmsthreadserver.h. in non-blocking mode the caller of trigger() will get control directly after triggering and do not wait until server has finished processData() Definition at line 90 of file mmsthreadserver.h. Set one or more callbacks for the onProcessData event. The connected callbacks will be called from MMSThreadServer::processData() and will be run within the context of server thread. A callback method must be defined like this: void myclass::mycallbackmethod(void *in_data, int in_data_len, void **out_data, int *out_data_len); sigc::connection connection; connection = mythreadserver->onProcessData.connect(sigc::mem_fun(myobject,&myclass::mycallbackmethod)); To disconnect your callback do this: connection.disconnect(); Please note: You HAVE TO disconnect myobject from onProcessData BEFORE myobject will be deleted!!! Else an abnormal program termination can occur. You HAVE TO call the disconnect() method of sigc::connection explicitly. The destructor will NOT do this!!! Definition at line 178 of file mmsthreadserver.h.
http://www.diskohq.com/developers/documentation/api-reference/classMMSThreadServer.html
CC-MAIN-2014-42
refinedweb
495
51.34
Host name resolution Updated: January 21, 2005 Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 Host name resolution Host name resolution means successfully mapping. You can assign multiple host names to the same host. Windows Sockets (Winsock) programs, such as Internet Explorer and the FTP utility, can use one of two values for the destination to which you want to connect: the IP address or a host name. When the IP address is specified, individual people can assign and use. A domain name is a structured name in a hierarchical namespace called the Domain Name System (DNS). An example of a domain name is. Nicknames are resolved through entries in the Hosts file, which is stored in the systemroot\System32\Drivers\Etc folder. For more information, see TCP/IP database files. Domain names are resolved by sending DNS name queries to a configured DNS server. The DNS server is a computer that stores domain name-to-IP address mapping records or has knowledge of other DNS servers. The DNS server resolves the queried domain name to an IP address and sends the result back. You are required to configure your computers with the IP address of your DNS server in order to resolve domain names. You must configure Active Directory-based computers running Windows XP Professional or Windows Server 2003 operating systems with the IP address of a DNS server. For more information, see DNS defined.
https://technet.microsoft.com/en-us/library/cc739738(d=printer,v=ws.10).aspx
CC-MAIN-2015-22
refinedweb
247
54.12
I’ve been a frequent user of LINQPad for several years now for I enjoy the simplicity with which you can fire it up, start writing code and see the results immediately. I appreciate that other programming languages provide REPLs for you to try and even modify the currently running application, and I enjoy them when I have a chance to work with node.js or Ruby, but this is not something that people writing C# code are used to, and in any case outside the scope of this post. Recently my involvement with NUnit has increased to some extent and on several occasions I found myself needing to try running one-off tests and quickly see their outcome in order to validate bug reports or in general for experimenting. As you can guess this would usually require launching a full-featured IDE like Visual Studio, create a class library project, reference NUnit, build and run some NUnit runner against the resulting assembly. That’s a lot of work for just getting a few lines of code to run. The idea was therefore to be able to run NUnit tests on LINQPad, which is definitely possible and quite simple, as it turns out. Now, a bit of background about NUnit is in order. The current release, which at the time of this writing is 2.6.2 and has been available for a few days, along with the whole 2.x series, has always kept the framework and the core separate. The framework is contained in the nunit.framework.dll assembly, which client code usually references in order to write tests. Just to mention a few it contains the TestAttribute and the Assert classes, which everyone writing unit tests should be familiar with. The core, on the other hand, contains the code needed to run the tests, which includes test discovery, filtering, the logic for handling application domains and processes, and finally exposes this functionality by means of test runners. NUnit currently ships with a console and a graphical GUI runner. This net separation of the core from the framework basically means that running tests in LINQPad via NUnit requires referencing both the core and the framework assemblies, and then invoking a runner pointing it at the current assembly. This is possible but also not the fastest and cleanest way to accomplish it. Enter NUnitLite. NUnitLite has been around for some time now and the main difference with NUnit v2 that is of some relevance here is that there is no distinction between the core and the framework. Everything is contained within a single assembly that you can reference from your tests, and the very same project containing the tests can be used to run them with a single one-liner. Although NUnitLite does not provide all of the features of NUnit it has quite enough of them for most needs, and above all simplifies our life a lot here. On the other side, we’re going to leverage a new feature available in LINQPad, the ability to reference NuGet packages, which right now is provided in the beta version downloadable from here. Now that the ground is set here are the steps to get started writing unit tests in LINQPad: - Create a new query and set its Language to C# Program, which will create a stub main method - Add the NUnitLite NuGet package, which is easily done by hitting F4, then Add NuGet… and then looking up and selecting NUnitLite. Also add the namespaces NUnit.Framework and NUnitLite.Runner - Fill the main method with the single line which will allow to run the tests in the current assembly and finally start writing unit tests. NUnitLite in LINQPad (linqpad-nunitlite.cs) download - Hit F5 and see the results of the test run The steps outlined above are quick and simple, but still require some time if you have to do it over and over, therefore a better option would be to save the raw template as a LINQPad query somewhere on your file system, or even better, although this chance is limited to the owners of a professional license, using a snippet which is available for download from the NUnit wiki here. The snippet takes care of the whole plumbing so you just need to download and copy it to the snippets folder, usually in %userprofile%\Documents\LINQPad Snippets, then just type the snippet shortcut, nunit, in any query window and you’re ready to write your tests.
http://simoneb.github.io/blog/2012/10/28/nunit-via-linqpad/
CC-MAIN-2018-05
refinedweb
748
62.92
Posted 01 Nov 2015 Link to this post Is there any suggestions to batch change finding element logic? For example, there is a logic of finding a element with XamlTag=fldatagrid, but I want to change XamlTag=fldatagrid to XamlTag=fldatagrid_V122. So how can I change this logic in all test? Posted 02 Nov 2015 Link to this post Hi Eddie, There are two ways where you store the element. One, using Elements tab. Second, when element find logic is identified using Telerik Framework. Later one depends on how flexible is our framework. Steps to edit Find Logic when Element is stored in Element Explorer. If element is retrieved using script/coded step, then manually change the places where ever you use this. Tip: If you are trying to retrieve elements using script/coded step, then maintain all the elements find logic in one class file and refer them in your test cases. If you follow this way, like in your scenario, if you just modify the find logic in one place, it reflects the logic in all the corresponding places where ever you refer it. Example: namespace automation { public class element{ String LoginNowlink = "tagname=a,TextContent=Login Now"; } } Now if the link name is changed to Login instead of Login Now, then you just need to replace the below line and save the class String LoginNowlink = "tagname=a,TextContent=Login"; Hope this information helps you Thanks, Sailaja Posted 05 Nov 2015 Link to this post
http://www.telerik.com/forums/how-to-batch-change-finding-element-logic
CC-MAIN-2017-39
refinedweb
246
60.04
img_rotate_ortho() Rotate an image by 90-degree increments Synopsis: #include <img/img.h> int img_rotate_ortho( const img_t *src, img_t *dst, img_fixed_t angle ); Arguments: - src - The image to rotate - dst - The address of an img_t describing the destination. If you don't specify width or height (or both) in the dst then this function will calculate the missing dimension(s) based on the src image, taking into account the rotation. If you do specify either width or height (or both), the image is clipped as necessary; unused data remains untouched. - angle - A 16.16 fixed point representation of the angle (in radians). There are 3 defines provided for convenience: - IMG_ANGLE_90CW — 90 degrees clockwise (to the right) - IMG_ANGLE_180 — 180 degrees - IMG_ANGLE_90CCW — 90 degrees counter-clockwise (to the left) Library: libimg Use the -l img option to qcc to link against this library. Description: This function rotates the src image by 90-degree increments. The rotation is not a true rotation in that the image is not rotated about a fixed point. Rather, the image itself is rotated and the new origin of the image becomes the upper-left corner of the rotated image. The formats of src and dst don't have to be the same; if they are different, the data is converted. A palette-based dst format is only supported if the src data also is palette-based. Rotation cannot be done in place. Returns: - IMG_ERR_OK - Success - IMG_ERR_PARM - Some fields of src are missing (that is, not marked as valid in flags) - IMG_ERR_NOSUPPORT - Unsupported format conversion or angle - IMG_ERR_MEM - Insufficient memory (the function requires a small amount of working memory) Classification: Image library
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.libimg.lib_ref/com.qnx.doc.neutrino.lib_ref/topic/i/img_rotate_ortho.html
CC-MAIN-2020-34
refinedweb
273
52.6
I am trying to make a function like this which will print out the error details associated with it's error number, but i am getting the error error: expected initializer before 'strerror' #include <iostream> #include <cstring> static char* messages[] = { "No error", "EPERM (Operation not permitted)", "ENOENT (No such file or directory)", "ESRCH (No such process)", }; static const int NUM_MESSAGES = sizeof(messages)/sizeof(messages[0]); extern "C" char * __cdecl strerror(int errnum) { if (errnum < NUM_MESSAGES) return messages[errnum]; return "Unknown error"; } int main() { int a; for(a=0;a<5;a++) { std::cout<<a<<" "<<strerror(a)<<"\n"; } return 0; } I just realized that the answer I gave doesn't address the actual question. The key problem here is that when you #include <cstring> you get all the identifiers from the standard C header <string.h>, declared in namespace std. In addition, you might (and probably will) get all those names in the global namespace as well. So when you write your own function named strerror you'll get a direct conflict with the C function strerror, even if you sort out the __cdecl stuff correctly. So to write your own error reporting function, give it a name that's different from any name in the C standard library, and don't bother with extern "C" and __cdecl. Those are specialized tools that you don't need yet. char* error_msg(int erratum) { if (errnum < NUM_MESSAGES) return messages[errnum]; return "Unknown error"; }
https://codedump.io/share/05LTRqvEcl8r/1/expected-initializer-before-39strerror39
CC-MAIN-2016-50
refinedweb
240
52.53
- How GitLab implements GraphQL - Deep Dive - Authentication - Types - Enums - Descriptions - Authorization - Resolvers - Mutations - GitLab’s custom scalars - Testing - Notes about Query flow and GraphQL infrastructure - Documentation and Schema on GitLab’s GraphQL API.9, and while specific details may have changed since then, it should still serve as a good introduction. Authentication Authentication happens through the GraphqlController, right now this uses the same authentication as the Rails application. So the session can be shared. It is also possible to add a private_token to the querystring, or add a HTTP_PRIVATE_TOKEN header. Types We use a code-first schema, and we declare what type everything is in Ruby. For example, app/graphql/types/issue_type.rb: graphql_name 'Issue' field :iid, GraphQL::ID_TYPE, null: false field :title, GraphQL::STRING_TYPE, null: false # (e.g.. Exposing Global IDs When exposing an ID field on a type, we will by default try to expose a global ID by calling to_global_id on the resource being rendered. To override this behaviour, you can implement an id method on the type for which you are exposing an ID. Please make sure that when exposing a GraphQL::ID_TYPE using a custom method that it is globally unique. The records that are exposing a full_path as an ID_TYPE are one of these exceptions. Since the full path is a unique identifier for a Project or Namespace. info., will append an ordering on the primary key, in descending order. This is usually id, so basically we will add order(id: :desc) to the end of the relation. A primary key must be available on the underlying table.: Will act the same as graphql-ruby’s fieldmethod but setting a default description and type and making them non-nullable. These options can still be overridden by adding them as arguments. ability_field: Expose an ability defined in our policies. This takes behaves the same way as permission_fieldand the same arguments can be overridden. abilities: Allows exposing several abilities defined in our policies at once. The fields for these will all have be non-nullable booleans with a default description. will be used for a class property in Ruby that is not an uppercase string, you can provide a value: option that will adapt the uppercase value. In the following example: - GraphQL inputs of OPENEDwill be converted to 'opened'. - Ruby values of 'opened'will be converted to "OPENED"in GraphQL responses. module Types class EpicStateEnum < BaseEnum graphql_name 'EpicState' description 'State of a GitLab epic' value 'OPENED', value: 'opened', description: 'An open Epic' value 'CLOSED', value: 'closed', description: 'An closed Epic' end end Descriptions All fields and arguments must have descriptions. A description of a field or argument is given using the description: keyword. For example: field :id, GraphQL::ID_TYPE, description: 'ID of the resource' Descriptions of fields and arguments are viewable to users through: Description styleguide will be Time, rather than just Date. - No .at end of strings. Example: field :id, GraphQL::ID_TYPE, description: 'ID of the Issue' field :confidential, GraphQL::BOOLEAN_TYPE, description: 'Indicates the issue is confidential' field :closed_at, Types::TimeType, description: 'Timestamp of when the issue was closed' Authorization Authorizations can be applied to both types and fields using the same abilities as in the Rails app. If the: - Currently authenticated user fails the authorization, the authorized resource will be returned as null. - Resource is part of a collection, the collection will be filtered to exclude the objects that the user’s authorization checks failed against. Also see authorizing resources in a mutation. Type authorization Authorize a type by passing an ability to the authorize method. All fields with the same type will. Note: This requires explicitly passing a block to field: module Types class MyType < BaseObject field :project, Types::ProjectType, null: true, resolver: Resolvers::ProjectResolver do authorize [:owner_access, :another_ability] end end end, those arguments will be made available to the fields using the resolver. When exposing a model that had an internal ID ( iid), prefer using that in combination with the namespace path as arguments in a resolver over a database ID. Othewise use a globally unique ID. We already have a FullPathLoader that can be included in other resolvers to quickly find Projects and Namespaces which will have a lot of dependant objects. To limit the amount of queries performed, we can use BatchLoader. Mutations Mutations are used to change any stored values, or to trigger actions. In the same way a GET-request should not modify data, we cannot modify data in a regular GraphQL-query. We can however in a mutation. To find objects for a mutation, arguments need to be specified. As with resolvers, prefer using internal ID or, if needed, a global ID rather than the database. Building Mutations Mutations live in app/graphql/mutations ideally grouped per resources they are mutating, similar to our services. They should inherit Mutations::BaseMutation. The fields defined on the mutation will be returned as the result of the mutation. Always provide a consistent GraphQL-name to the mutation, this name is used to generate the input types and the field the mutation is mounted on. The name should look like <Resource being modified><Mutation class name>, for example the Mutations::MergeRequests::SetWip mutation has GraphQL name MergeRequestSetWip. Arguments required by the mutation can be defined as arguments required for a field. These will be wrapped up in an input type for the mutation. For example, the Mutations::MergeRequests::SetWip with GraphQL-name MergeRequestSetWip defines these arguments: This would automatically generate an input type called MergeRequestSetWipInput with the 3 arguments we specified and the clientMutationId. These arguments are then passed to the resolve method of a mutation as keyword arguments. From here, we can call the service that will modify the resource. The resolve method should then return a hash with the same field names as defined on the mutation and if strings if the mutation failed after authorization errors: merge_request.errors.full_messages } To make the mutation available it should be defined on the mutation type that lives in graphql/types/mutation_types. The mount_mutation helper method will define a field based on the GraphQL-name of the mutation: module Types class MutationType < BaseObject include Gitlab::Graphql::MountMutation graphql_name "Mutation" mount_mutation Mutations::MergeRequests::SetWip end end Will generate will load the object on the mutation. This would allow you to use the authorized_find! helper method. When a user is not allowed to perform the action, or an object is not found, we should raise a Gitlab::Graphql::Errors::ResourceNotAvailable error. Which will be correctly rendered to the clients. GitLab’s: false, description: 'Timestamp of when the issue was created' Testing full stack tests for a graphql query or mutation live in spec/requests/api/graphql. When adding a query, the a working graphql query shared example can be used to test if the query renders valid results. Using the GraphqlHelpers#all_graphql_fields_for-helper, a query including all available fields can be constructed. This makes it easy to add a test rendering all possible fields for a query. To test GraphQL mutation requests, GraphqlHelpers provides 2 helpers: graphql_mutation which takes the name of the mutation, and a hash with the input for the mutation. This will return a struct with a mutation query, and prepared variables. This struct can then be passed to the post_graphql_mutation helper, that will post the request with the correct params, like a GraphQL client would do. To access the response of a mutation, the graphql_mutation_response helper is available. Using these helpers, Notes about Query flow and GraphQL infrastructure GitLab’s GraphQL infrastructure can be found in lib/gitlab/graphql. Instrumentation is functionality that wraps around a query being executed. It is implemented as a module that uses the Instrumentation class. Example: Present module Present #... some code above... def self.use(schema_definition) schema_definition.instrument(:field, Instrumentation.new) docs Documentation and Schema Our schema is located at app/graphql/gitlab_schema.rb. See the schema reference for details. This generated GraphQL documentation needs to be updated when the schema changes. For information on generating GraphQL documentation and schema files, see updating the schema
https://docs.gitlab.com/ee/development/api_graphql_styleguide.html
CC-MAIN-2020-05
refinedweb
1,340
54.93
Basics of REST with Spring In the previous post, we learned to build REST API using Spring Boot. Let’s discuss some of the Basics of REST with Spring. We will be covering following topics. - Shopping Cart Project: We will use Shopizer API for this. - Application Startup, Build and Deployment. - Project Configurations. 1. Shopping Cart Project Let’s build a real REST API in our course for understanding fundamentals of REST API. We will be building a shopping cart REST API with below features. - Ability to get all products based on the store/ site. - Get Product details by product ID. - Add product to Shopping Cart. - Register New User - Update User by user id. - Customer Shopping Cart. We will be using Shopizer to build above REST API. Shopizer is a complete e-commerce platform with ready to use features for build e-commerce web applications. Our goal is to build Spring based REST API on the Shopizer platform. 2. Application Startup, Build and Deployment Our REST API is based on Spring and Spring Boot. Let’s discuss the process of deploying Spring boot REST application.Spring Boot comes with many built-in features which include flexibility in deploying Spring Boot Applications. 2.1 Creating an Executable jar Spring Boot provides a feature for creating self-contained executable jars which can run standalone in production.An executable jar contains all compiled files along with required dependencies to run our code. For creating jar, add spring-boot-maven-plugin to the pom.xml file <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> Run mvn package command to create jar file under the target directory. Type java -jar target/myproject-0.0.1-SNAPSHOT.jar command to run Spring Boot application. 2.2 Embedded Web Server Deployment Tomcat We added spring-boot-starter-web in our application. When we run our application, Spring Boot will detect Spring MVC in the classpath and startup embedded Apache Tomcat server.We can configure and customize embedded tomcat using application.properties file - Enable HTTPS for REST API. - Configure server port - Add HTTPS certificate. Read Spring Boot Web Application Configuration for more detail. 2.3 Embedded Jetty Server If you like to use Jetty as the embedded server, you need to add spring-boot-starter-jetty to your pom.xml.Spring Boot will automatically delegate work to Jetty. Here is the updated POM for Embedded Jetty server. > <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> There are few things which are interesting in above configuration - We excluded spring-boot-starter-tomcat to ensure only one embedded server is available for our application. - Jetty server was added using spring-boot-starter-jetty 2.4 Standalone Application Server What if you like to deploy your REST application on an existing Tomcat instance or to Java EE application servers like Apache Tomcat EE, WildFly or Websphere? You need to make only a few changes to make sure your application is ready for the deployment of these servers. - Change application packaging from jar to war in your pom.xml. - Remove spring-boot-maven-plugin from the pom.xml. - Need to add web entry point (We will be using Servlet 3 Java configuration) To enable our application, we will add ServletInitializer class. On a high level, our class will look like something package com.example.demo; import org.springframework.boot.builder.SpringApplicationBuilder; import org.springframework.boot.web.support.SpringBootServletInitializer; public class CustomWebApplicationInitializer extends SpringBootServletInitializer { protected SpringApplicationBuilder configure(SpringApplicationBuilder application) { return application.sources(applicationClass); } private static Class applicationClass = DemoApplication.class; } 3. Project Setup Please read our previous article Building Restful Web Services to get an understanding of the project structure in Spring Boot. In addition to this structure, we will start adding extra components and libraries to this structure. If you are starting with this post, I highly recommend reading our last post to start with. Summary In this post of Building REST services with Spring, we covered Basics of REST with Spring. We discussed the REST application which we will be creating during this course. We also covered some of the basics of deploying our Spring Boot REST application. In our next article, we will learn how to validate REST API data.
https://www.javadevjournal.com/spring/rest-web-services-bascis/
CC-MAIN-2021-43
refinedweb
716
50.84