Ticket #331 (closed defect: fixed)
[PATCH] Comment on docs/toolbox/catwalk.html
Description
I don't understand. Maybe I'm too much of a python/tg noob to figure it out... but your example doesn't work as written.
I inserted it into my sample project's controllers.py file as instructed in the example on the page.
Instead of getting to browse to /catwalk, I am unable to launch the CherryPy server because it now errors out with "ImportError: No module named catwalk".
Anyone want to update the docs with a reason why my turbogears doesn't include CatWalk? -- I.e. ... I didn't explicitly include it in some build parameter (which no one mentioned) or other similar option? ... It's not included in the default package (and where one can download it)? ... It's misspelled in the sample on the page? ... Anything??
Thanks!
Attachments
Change History
comment:2 Changed 13 years ago by bbockelm@…
- Summary changed from Comment on docs/toolbox/catwalk.html to [PATCH] Comment on docs/toolbox/catwalk.html
First of all, what version of the SVN are you using? At some point, catwalk was moved from turbogears.catwalk to the more logical turbogears.toolbox.catwalk. This may be your problem. You'll need the following:
from turbogears.toolbox.catwalk import CatWalk
instead of
from turbogears.catwalk import CatWalk
Make sure that the directory turbogears/toolbox/catwalk exists. If not, you're probably using an older version of turbogears.
Remember - there's a lot of changes between .8 and .9; as .9 isn't even out yet, there's still going to be a lot of upheaval.
Nevertheless, I'll attach a small one-line patch that should fix the docs.
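(As an aside, and not part of the attached patch: with today's Python standard library you can make user code tolerant of a module that has moved between releases. The helper below is purely illustrative, not TurboGears code.)

```python
import importlib

def import_first(*names):
    """Return the first module from names that imports cleanly.

    Handy when a module has been relocated between releases, as catwalk
    was (turbogears.catwalk -> turbogears.toolbox.catwalk).
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))
```

With this, `import_first("turbogears.toolbox.catwalk", "turbogears.catwalk")` would resolve against either layout.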
Changed 13 years ago by bbockelm@…
- attachment catwalk-import-doc.diff
added
Change to docs to reflect different import line for catwalk.
It could be because of this (see this thread on mounting Catwalk)
Otherwise, you can always launch Catwalk by entering
at the root directory of your project.
The topics in this section discuss the use of PL/SQL in Oracle Reports Builder.
About the Stored PL/SQL Editor
About stored program units
About external PL/SQL libraries
The PL/SQL Editor enables you to create and edit PL/SQL program units.
When you make changes to a program unit, dependent program units lose their compiled status, which is indicated by an asterisk (*) after their name under the Program Units node in the Object Navigator. You can navigate to those program units directly in the PL/SQL Editor using the Name list to recompile them.
Section 4.13.2.3.1, "Editing features in the PL/SQL Editor"
The Stored PL/SQL Editor enables you to create and edit stored PL/SQL program units in a database (listed under the Database Objects node in the Object Navigator).
Section 4.13.3.2, "Creating a stored program unit"
The Syntax Palette is a programming tool that enables you to display and copy the constructs of PL/SQL language elements and built-in packages into the PL/SQL Editor and Stored PL/SQL Editor.
Section 4.13.2.4, "Inserting syntax into the PL/SQL Editor"
Program units are packages, functions, or procedures that you can reference from any PL/SQL within the current report.
Note:
Program units cannot be referenced from other documents. If you want to create a package, function, or procedure that can be referenced from multiple documents, create an external PL/SQL library (see Section 4.13.5.1, "Creating an external PL/SQL library").
For a detailed example of using PL/SQL in a report, see Chapter 40, "Building a Report that Includes PL/SQL".
Example: Referencing a PL/SQL function in formulas
Suppose that you have a report with the following groups and columns:
Groups   Columns        Summary
-----------------------------------------
RGN      REGION         RGNSUMSAL SUM(DEPTSUMSAL)
         COSTOFLIVING
DEPT     DNAME
         DEPTNO         DEPTSUMSAL SUM(EMP.SAL)
JOB      JOB            HEADCOUNT COUNT(EMP.EMPNO)
EMP      ENAME
         EMPNO
         SAL
         COMM
Given these groups and columns, you might create multiple formulas that apply the cost of living factor (COSTOFLIVING) to salaries. To avoid duplication of effort, you could create the following PL/SQL function and reference it from the formulas:
function CompSal(salary number) return number is
begin
  return (salary * CostofLiving);
end;
Following are some examples of how you might reference the PL/SQL function in formulas:
CompSal(:RGNSUMSAL) or CompSal(:SAL) + COMM
Section 4.13.3.1, "Creating a local program unit"
Stored program units (also known as stored subprograms, or stored procedures) are PL/SQL program units that are stored in the database rather than in your report.
Because stored program units run in ORACLE, they can perform database operations more quickly than PL/SQL that is local to your report. Therefore, in general, use stored program units for PL/SQL that performs database operations. Use local program units for PL/SQL that does not involve database operations. However, if you are on a heavily loaded network with very slow response time, using stored program units may not be faster for database operations. Similarly, if your server is significantly faster than your local machine, then using local program units may not be faster for non-database operations.
Section 4.13.3.2, "Creating a stored program unit"
External PL/SQL libraries are collections of PL/SQL procedures, functions, and packages that are independent of a report definition. By attaching an external library to a report, you can reference its contents any number of times. For example, you could reference a procedure in an attached library from both a Before Report trigger and a format trigger. This eliminates the need to re-enter the same PL/SQL for each application.
When you associate an external PL/SQL library with a report or another external library, it is called an attached library.
Section 4.13.5.1, "Creating an external PL/SQL library".
External PL/SQL libraries are independent of a report definition.
Local PL/SQL executes more quickly than a reference to a procedure or function in an external PL/SQL library. As a result, you should only use external PL/SQL libraries when the benefits of sharing the code across many applications outweigh the performance overhead.
Section 4.13.4.3, "Creating or editing a formula column"
Section 4.13.4.4, "Creating a placeholder column"
A group filter determines which records to include in a group. You can use the packaged filters, First and Last, to display the first n or last n records for the group, or you can create your own filters using PL/SQL. You can access group filters from the Object Navigator, the Property Inspector (the PL/SQL Filter property), or the PL/SQL Editor.
The function must return a boolean value (TRUE or FALSE). Depending on whether the function returns TRUE or FALSE, the current record is included in or excluded from the report.
Difference between group filters and Maximum Rows to Fetch property
The Maximum Rows to Fetch property restricts the actual number of records fetched by the query. A group filter determines which records to include or exclude after all the records have been fetched by the query. Since Maximum Rows to Fetch actually restricts the amount of data retrieved, it is faster than a group filter in most cases.
function filter_comm return boolean is
begin
  if :comm IS NOT NULL then
    if :comm < 100 then
      return (FALSE);
    else
      return (TRUE);
    end if;
  else
    return (FALSE);  -- for rows with NULL commissions
  end if;
end;
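Outside of Oracle Reports, the same include/exclude logic is just a boolean predicate applied to each fetched row. For illustration only (this is Python, not Oracle syntax), here is the equivalent test:

```python
def filter_comm(comm):
    """Mirror of the PL/SQL group filter above: keep rows whose
    commission is non-NULL and at least 100 (None plays the role of NULL)."""
    if comm is not None:
        if comm < 100:
            return False
        return True
    return False  # rows with NULL commissions are excluded

# The predicate runs after the rows have been "fetched":
rows = [None, 50, 100, 250]
kept = [c for c in rows if filter_comm(c)]  # [100, 250]
```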
Section 4.13.4.2, "Creating or editing a group filter"
A REF CURSOR query uses PL/SQL to fetch data. Each REF CURSOR query is associated with a PL/SQL function that returns a cursor value from a cursor variable. The function must ensure that the REF CURSOR is opened and associated with a SELECT statement that has a SELECT list that matches the type of the REF CURSOR.
Note:
When using REF CURSOR queries, the SELECT statement must be explicitly set for dynamic REF CURSORs.
For a detailed example, see Chapter 41, "Building a Paper Report with REF CURSORs".
Furthermore, if you use a stored program unit to implement REF CURSORs, you receive the added benefits that go along with storing your program units in the Oracle database.
For more information about REF CURSORs and stored subprograms, refer to the PL/SQL User's Guide and Reference.
Example 1: Package with REF CURSOR example

/* This package spec defines a REF CURSOR
** type that could be referenced from a
** REF CURSOR query function.
** If creating this spec as a stored
** procedure in a tool such as SQL*Plus,
** you would need to use the CREATE
** PACKAGE command. */
PACKAGE cv IS
  type comp_rec is RECORD
    (deptno number,
     ename varchar(10),
     compensation number);
  type comp_cv is REF CURSOR return comp_rec;
END;
Example 2: Package with REF CURSOR and function

/* This package spec and body define a ref
** cursor type as well as a function that
** uses the REF CURSOR to return data.
** The function could be referenced from
** the REF CURSOR query, which would
** greatly simplify the PL/SQL in the
** query itself. If creating this spec
** and body as a stored procedure in a
** tool such as SQL*Plus, you would need
** to use the CREATE PACKAGE and CREATE
** PACKAGE BODY commands. */
PACKAGE cv IS
  type comp_rec is RECORD
    (deptno number,
     ename varchar(10),
     compensation number);
  type comp_cv is REF CURSOR;
END;
Example 3: REF CURSOR query

/* This REF CURSOR query function would be coded
** in the query itself. It uses the cv.comp_cv
** REF CURSOR from the cv package to return
** data for the query. */
function DS_3RefCurDS return cv.comp_cv is
  temp_cv cv.comp_cv;
begin
  if :deptno > 20 then
    open temp_cv for
      select deptno, ename, 1.25*(sal+nvl(comm,0)) compensation
      from emp
      where deptno = :deptno;
  else
    open temp_cv for
      select deptno, ename, 1.15*(sal+nvl(comm,0)) compensation
      from emp
      where deptno = :deptno;
  end if;
  return temp_cv;
end;
Example 4: REF CURSOR query calling function

/* This REF CURSOR query function would be coded
** in the query itself. It uses the cv.comp_cv
** REF CURSOR and the cv.emprefc function from
** the cv package to return data for the query.
** Because it uses the function from the cv
** package, the logic for the query resides
** mainly within the package. Query
** administration/maintenance can be
** done at the package level (for example,
** modifying SELECT clauses could be done
** by updating the package). You could also
** easily move the package to the database.
** Note this example assumes you have defined
** a user parameter named deptno. */
function DS_3RefCurDS return cv.comp_cv is
  temp_cv cv.comp_cv;
begin
  temp_cv := cv.emprefc(:deptno);
  return temp_cv;
end;
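To see what the REF CURSOR function in Example 3 is doing, here is a loose Python analogy (illustration only, not Oracle syntax): the function picks a compensation multiplier at runtime and hands back an open, iterable result set.

```python
def comp_query(deptno, emp_rows):
    """Rough analogy of Example 3: choose a multiplier based on deptno,
    then yield (deptno, ename, compensation) rows, like an opened cursor."""
    factor = 1.25 if deptno > 20 else 1.15
    for dept, ename, sal, comm in emp_rows:
        if dept == deptno:
            # nvl(comm, 0) becomes "comm or 0" here
            yield (dept, ename, factor * (sal + (comm or 0)))

emp = [(30, "ALLEN", 1600, 300), (10, "CLARK", 2450, None)]
rows = list(comp_query(30, emp))  # [(30, 'ALLEN', 2375.0)]
```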
Section 2.6.13.1, "About report triggers"
A built-in package is a group of logically related PL/SQL types, objects, and functions or procedures. It generally consists of two parts: the package spec (including data declarations) and the package body. Packages are especially useful because they allow you to create global variables.
Oracle provides several packaged procedures that you can use when building or debugging your PL/SQL-based applications. Your PL/SQL code can make use of the procedures, functions, and exceptions in the Oracle Reports Builder built-in package (SRW), and numerous Tools built-in packages, as described below.
Oracle Reports Builder is shipped with a built-in package (SRW), a collection of PL/SQL constructs that include many functions, procedures, and exceptions you can reference in any of your libraries or reports.
The PL/SQL provided by the SRW package enables you to perform such actions as change the formatting of fields, run reports from within other reports, create customized messages to display in the event of report error, and execute SQL statements.
You can reference the contents of the SRW package from any of your libraries or reports without having to attach it. However, you cannot reference its contents from within another product, for example, from SQL*Plus.
Constructs found in a package are commonly referred to as "packaged" (that is, packaged functions, packaged procedures, and packaged exceptions).
Topic "SRW built-in package" in the Reference section of the Oracle Reports online Help
Several client-side built-in packages are provided that contain many PL/SQL constructs you can reference while building applications or debugging your application code. These built-in packages are not installed as extensions to the package STANDARD. As a result, any time you reference a construct in one of the packages, you must prefix it with the package name. The following built-in packages can be referenced in Reports Builder applications.
List: Provides procedures, functions, and exceptions you can use to create and maintain lists of character strings (VARCHAR2). This provides a means of creating arrays in PL/SQL Version 1.
Ora_Ffi: Provides a foreign function interface for invoking C functions in a dynamic library.
Ora_Java: Provides an interface for invoking Java classes from PL/SQL.
Ora_NLS: Enables you to extract high-level information about your current language environment. This information can be used to inspect attributes of the language, enabling you to customize your applications to use local date and number formats. Information about character set collation and the character set in general can also be obtained. Facilities are also provided for retrieving the name of the current language and character set, allowing you to create applications that test for and take advantage of special cases.
Ora_Prof: Provides procedures, functions, and exceptions you can use for tuning your PL/SQL program units.
Tool_Err: In addition to the errors returned by the Oracle Server, some of the built-in packages (for example, the DEBUG package) provide additional error information. This information is maintained in the form of an "error stack".
The error stack contains detailed error codes and associated error messages. Errors on the stack are indexed from zero (oldest) to n-1 (newest), where n is the number of errors currently on the stack. Using the services provided by the TOOL_ERR package, you can access and manipulate the error stack.
Tool_Res: Provides a means of extracting string resources from a resource file with the goal of making PL/SQL code more portable by isolating all textual data in the resource file.
The following packages are used only internally by Oracle.
Topics for each of the Tools built-in packages in the Reference > PL/SQL Reference > Built-in Packages section of the Oracle Reports online Help.
Triggers check for an event. When the event occurs they run the PL/SQL code associated with the trigger.
Report triggers are activated in response to report events, such as the report opening and closing, rather than the data that is contained in the report. They are activated in a predefined order for all reports.
Format triggers are executed before an object is formatted. A format trigger can be used to dynamically change the formatting attributes of the object.
Validation triggers are PL/SQL functions that are executed when parameter values are specified on the command line and when you accept the Runtime Parameter Form.
Database triggers are procedures that are stored in the database and implicitly executed when a triggering statement such as INSERT, UPDATE, or DELETE is issued against the associated table.
Runtime Parameter Form appears (if not suppressed).
After Parameter Form trigger is fired (unless the user cancels from the Runtime Parameter Form).
Report is "compiled".
Queries are parsed.
Before Report trigger is fired.
SET TRANSACTION READONLY is executed (if specified with the READONLY command line keyword or setting).
The report is executed and the Between Pages trigger fires for each page except the first one. (Note that data can be fetched at any time while the report is being formatted.) COMMITs can occur during this time due to: SRW.DO_SQL with DDL, or if ONFAILURE=COMMIT, and the report fails.
COMMIT is executed (if READONLY is specified) to end the transaction.
After Report trigger is fired.
COMMIT/ROLLBACK/NOACTION is executed based on what was specified with the ONSUCCESS command line keyword or setting.
As a general rule, any processing that will affect the data retrieved by the report should be performed in the Before Parameter Form or After Parameter Form triggers. (These are the two report triggers that fire before anything is parsed or fetched.) Any processing that will not affect the data retrieved by the report can be performed in the other triggers.
Section 4.13.3.5, "Creating a report trigger"
Section 4.13.3.6, "Deleting a report trigger"
A format trigger is a PL/SQL function executed before an object is formatted. A trigger can be used to dynamically change the formatting attributes of the object. For example, you can use a format trigger to cause a value to display in bold if it is less than zero. Another example is to use a format trigger to use scientific notation for a field if its value is greater than 1,000,000.
A format trigger can fire multiple times for a given object, whenever that object is formatted.
See the topic "Format trigger" in the Reference section of the Oracle Reports online Help.
Section 4.13.4.1, "Creating or editing a format trigger"
Validation triggers are PL/SQL functions that are executed when parameter values are specified on the command line and when you accept the Runtime Parameter Form.
Note:
For JSP-based Web reports, the Runtime Parameter Form displays when you run a report in Oracle Reports Builder, but does not display in the runtime environment. If parameters are not specified on the Runtime Parameter Form, the validation trigger returns FALSE and generates the error REP-546: Invalid parameter input. Thus, you need to provide the parameters in an alternate way, as described in Section 1.9.4, "About Parameter Forms for Web reports".
Validation triggers are also used to validate the Initial Value property of the parameter. If the function returns FALSE, the user is returned to the Runtime Parameter Form.
See the topic "Validation trigger" in the Reference section of the Oracle Reports online Help.
Section 4.11.4, "Validating a parameter value at runtime"
Database triggers are procedures that are stored in the database and implicitly executed when a triggering statement such as INSERT, UPDATE, or DELETE is issued against the associated table. Triggers can be defined only on tables, not on views. However, triggers on the base table of a view are fired if an INSERT, UPDATE, or DELETE statement is issued against a view.
A trigger can include SQL and PL/SQL statements that execute as a unit, and can invoke other stored procedures. Use triggers only when necessary. Excessive use of triggers can result in cascading or recursive triggers. For example, when a trigger is fired, a SQL statement in the trigger body potentially can fire other triggers.
By using database triggers, you can enforce complex business rules and ensure that all applications behave in a uniform manner. Use the following guidelines when creating triggers:
Section 4.13.3.7, "Creating a database trigger" | http://docs.oracle.com/cd/E28280_01/bi.1111/b32122/orbr_concepts2006.htm | CC-MAIN-2016-22 | refinedweb | 2,799 | 53.41 |
From: kris kvilekval <kris@cs.ucsb.edu>
To: code-quality@python.org
Subject: [code-quality] ImportError: No module named paste
I am using pip to install all dependencies of new packages and am
running into a few issues (see error at end of email).
The following patch seems to remove the issue. Not sure why imports of
namespace packages had to have a path length greater than one, but it
seems like pip installs namespaces differently than easy_install did.
$ hg diff
diff --git a/modutils.py b/modutils.py
--- a/modutils.py
+++ b/modutils.py
@@ -612,11 +612,14 @@
     except AttributeError:
         checkeggs = False
     # pkg_resources support (aka setuptools namespace packages)
-    if pkg_resources is not None and modpath[0] in pkg_resources._namespace_packages and len(modpath) > 1:
+#    if pkg_resources is not None and modpath[0] in pkg_resources._namespace_packages and len(modpath) > 1:
+    if pkg_resources is not None and modpath[0] in pkg_resources._namespace_packages:#
         # setuptools has added into sys.modules a module object with proper
         # __path__, get back information from there
         module = sys.modules[modpath.pop(0)]
         path = module.__path__
+        mtype = PKG_DIRECTORY
+        mp_filename = path[0]
     imported = []
     while modpath:
         modname = modpath[0]
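The behavioral change in the patch boils down to one condition. The standalone sketch below (with a plain set standing in for pkg_resources._namespace_packages) shows the before/after difference for a bare namespace root like paste:

```python
def takes_namespace_branch_old(modpath, namespace_packages):
    # Pre-patch condition: bare roots like ["paste"] were skipped.
    return modpath[0] in namespace_packages and len(modpath) > 1

def takes_namespace_branch_new(modpath, namespace_packages):
    # Patched condition: bare namespace roots are handled too.
    return modpath[0] in namespace_packages

ns = {"paste"}  # stand-in for pkg_resources._namespace_packages
assert takes_namespace_branch_old(["paste"], ns) is False
assert takes_namespace_branch_new(["paste"], ns) is True
assert takes_namespace_branch_old(["paste", "script"], ns) is True
```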
Ticket #253517 - latest update on 2014/09/01, created on 2014/06/17 by Sylvain Thenault
- 2014/06/17 12:18, written by sthenault
TuiNode objects are linked together in a network of nodes to form the structure of a menu which may be navigated using the cursor keys. More...
#include <TuiNode.hpp>
TuiNode objects are linked together in a network of nodes to form the structure of a menu which may be navigated using the cursor keys.
Each node has a single line of text which will be displayed when a node is active. Depending on the sub-class for a particular node, it may be used to edit data of one sort or another.
Definition at line 42 of file TuiNode.hpp.
Set prevNode to the last node in the chain of nextNode links.
Call for the first node of a menu after all others have been added.
This also takes ownership of the child through QObject::setParent.
Definition at line 59 of file TuiNode.hpp.
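The linking scheme is easy to see in miniature. This standalone Python sketch (not Stellarium code; names are simplified) mirrors what the documented call does: after the menu chain is built, the first node's prevNode is pointed at the last node so cursor navigation wraps around.

```python
class Node:
    """Minimal stand-in for a TuiNode-style menu entry."""
    def __init__(self, text):
        self.text = text
        self.next_node = None
        self.prev_node = None

def loop_to_the_last(first):
    """Point first.prev_node at the end of the next_node chain, so
    moving "up" from the first entry wraps to the last one."""
    last = first
    while last.next_node is not None:
        last = last.next_node
    first.prev_node = last

a, b, c = Node("Location"), Node("Date"), Node("Quit")
a.next_node, b.next_node = b, c
b.prev_node, c.prev_node = a, b
loop_to_the_last(a)  # a.prev_node is now c
```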
Updates nodeNumber, ancestorNumbers and prefixText.
Contains the numbers of the parent nodes in the hierarchy.
The last element is the number of the node in the current menu.
Definition at line 84 of file TuiNode.hpp.
Number of the node in the current menu.
Automatically set to 1 if there is no prevNode.
Definition at line 81 of file TuiNode.hpp.
Text of the prefix containing the hierarchical node number.
Definition at line 77 of file TuiNode.hpp. | http://stellarium.org/doc/0.15/classTuiNode.html | CC-MAIN-2019-13 | refinedweb | 221 | 59.5 |
We all have done it. When working on something, you find a piece of code that could be extracted into a separate library. Let's call it furry-guide. Furry-guide could also be useful for other developers, and as a good open source contributor you upload it to GitHub. You also make sure that it can be used with popular dependency managers.
As you were developing furry-guide in parallel with the main app, you included some of your favorite libraries as dependencies of it. Maybe you just wanted to use that awesome logging framework you use everywhere. Or you needed some networking functionality and marked Alamofire version 4.1.0 as a dependency for furry-guide.
Overall this should not be an issue. These dependencies are your go-to helpers. In every project where you will be using furry-guide, they will be included anyway. So there is no real overhead. It's just convenient.
By defining these dependencies for furry-guide, you have created a small hurdle for using it. Maybe I'm using a different logging library, and now furry-guide will pull in an extra one not used anywhere else. Maybe I need a different version of Alamofire? Or I'm using URLSession everywhere.
When working on Find movies to watch, I decided to create some components as separate libraries. iTunesSearch for finding movies on iTunes. TheMovieDatabase for getting movie lists. ImageProvide for poster fetching and caching. All of these needed two pieces of common functionality: logging and networking.
For image fetching and caching I had an earlier attempt, CDYImagesRetrieve, which declared AFNetworking as a dependency. And it had a preprocessor macro for enabling logging. Now I wanted to do it better.
The solution I came up with was to define protocols for these that the app can implement and inject. For example, the library defines a logging protocol:
public protocol Logger {
    func log<T>(_ object: T, file: String, function: String, line: Int)
}
On the app side you need to create a logger:
private class ImageLogger: ImageProvide.Logger {
    fileprivate func log<T>(_ object: T, file: String, function: String, line: Int) {
        // log the message to app logger
    }
}
And inject. Only if you need it.
ImageProvide.Logging.set(logger: ImageLogger())
This gives control over logging to the application developer. The developer decides whether library logging should be enabled and where the output goes.
The same pattern is used for networking. The library defines a protocol:
public protocol NetworkFetch {
    func fetch(request: URLRequest, completion: @escaping (Data?, URLResponse?, Error?) -> ())
}
On the app side you define a handler for the library's requests:
import TheMovieDatabase

class TMDBFetch: NetworkFetch {
    private let queue: NetworkQueue

    init(queue: NetworkQueue) {
        self.queue = queue
    }

    func fetch(request: URLRequest, completion: @escaping (Data?, URLResponse?, Error?) -> ()) {
        queue.append(request, priority: .high, completion: completion)
    }
}
And inject it into API client:
let fetch = TMDBFetch(queue: queue)
let tmdb = TMDB(apiKey: TheMovieDBAPIKey, networkFetch: fetch)
This gives the application developer control over how these requests should be made. Additionally, it allows the developer to prioritize app network calls over library network calls.
Give power to developers 🤘
Tags: ios, swift, dependency, design, library, programming | http://blog.jaanussiim.com/2017/02/07/no-dependencies-library.html | CC-MAIN-2019-13 | refinedweb | 495 | 59.3 |
Red Hat Bugzilla – Bug 72588
lpd startup error
Last modified: 2007-04-18 12:46:00 EDT
Description of Problem: lpd error
Version-Release number of selected component (if applicable):
How Reproducible: Upon first boot and thereafter
Steps to Reproduce:
1. install or upgrade to 'null' public beta
2. upon initial boot and each successive attempt to start lpd
3.
Actual Results:
[root@stone root]# /etc/rc.d/init.d/lpd status
lpd is stopped
[root@stone root]# /etc/rc.d/init.d/lpd start
Starting lpd: Traceback (most recent call last):
File "/usr/sbin/printconf-backend". line 7, in ?
import printconf_backend
File "/usr/share/printconf/util/printconf_backend.py", line 30, in ?
_=gettext.gettext
NameError: name 'gettext' is not defined
No Printers Defined [OK]
This occurs at each boot up and even after attempting to configure a local printer.
This is already fixed in redhat-config-printer-0.4.22-1.
*** This bug has been marked as a duplicate of 72177 *** | https://bugzilla.redhat.com/show_bug.cgi?id=72588 | CC-MAIN-2017-47 | refinedweb | 162 | 52.05 |
SCDJWS Study Guide: XML Namespace
Introduction
What is a Namespace?
A namespace is a set of names in which all names are unique. Any logically related set of names where each name should be unique is a namespace. Namespaces make it easier to come up with unique names. Before a new name is added to a namespace, a namespace authority ensures that the new name doesn't already exist in the namespace.
Namespaces themselves must also be given names in order to be useful. Once a namespace has a name, it is possible to refer to its members. For example, Microsoft.RadioService and AOL.RadioService. The names in the namespace should be unique. If this cannot be guaranteed, then the actual namespace names themselves could also be placed into a namespace of their own. For example, US.Microsoft.RadioService and CHINA.Microsoft.RadioService. In order to guarantee the uniqueness of namespace names, this pattern can be iterated as many times as necessary.
More than one namespace may appear in a single XML document, to allow a name to be used more than once. Each reference can declare a prefix to be used by each name.
Why use namespaces?
Since element names in XML are not fixed, a name conflict will often occur when two different documents use the same names to describe two different types of elements.
This XML document contains information in a HTML table:
<table>
<tr>
<td>Apples</td>
<td>Bananas</td>
</tr>
</table>
This XML document contains information about a table (a piece of furniture):
<table>
<name>African Coffee Table</name>
<width>80</width>
<length>120</length>
</table>
If these two XML documents were added together, there would be an element name conflict because both documents contain a <table> element with different content and definition. There are more details in the XML Schema section.
XML schema provides a "namespace" mechanism, which uses additional names to distinguish elements with the same name but different meaning in different contexts.
The solution is to prefix a unique string to elements or attributes that need to be distinguished.
This XML document contains information in a HTML table:
<h:table>
<h:tr>
<h:td>Apples</h:td>
<h:td>Bananas</h:td>
</h:tr>
</h:table>
This XML document contains information about a piece of furniture:
<f:table>
<f:name>African Coffee Table</f:name>
<f:width>80</f:width>
<f:length>120</f:length>
</f:table>
Now the element name conflict is gone because the two documents use a different name for their <table> element (<h:table> and <f:table>). By using a prefix, we have created two different types of <table> elements.
XML namespaces provide a way to distinguish between duplicate element types and attribute names. Duplicates can occur when an XML document contains element types and attributes from more than one XML schema (or DTD).
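The same disambiguation can be seen from code. In this Python sketch the two <table> vocabularies are told apart by namespace URI (the URIs here are made-up examples; any unique strings would do, and note that in a real document each prefix must be bound to a URI with an xmlns declaration):

```python
import xml.etree.ElementTree as ET

doc = """<root xmlns:h="http://example.org/html"
              xmlns:f="http://example.org/furniture">
  <h:table><h:tr><h:td>Apples</h:td></h:tr></h:table>
  <f:table><f:name>African Coffee Table</f:name></f:table>
</root>"""

root = ET.fromstring(doc)
# ElementTree rewrites prefixed names to {uri}localname, so the two
# <table> elements no longer collide.
html_table = root.find("{http://example.org/html}table")
furniture = root.find("{http://example.org/furniture}table")
name = furniture.find("{http://example.org/furniture}name").text
```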
Q: Hey Scripting Guy! I need help migrating a script from VBScript to Windows PowerShell 2.0. The VBScript is one that I based upon an old Hey Scripting Guy! article. I use it to search a text file for specific words. If the words are found, the script displays a message stating this fact. I do not want to learn Windows PowerShell right now, but I would like to have a Windows PowerShell script that basically looks like VBScript so I can understand it. Is this possible, or do I need to spend a couple of weeks learning Windows PowerShell first?
-- SA
A: Hello SA, Microsoft Scripting Guy Ed Wilson here, the Scripting Wife and I got back from vacation early Sunday morning, or late Saturday night depending on your perspective. I had spent the week taking a woodworking class at a school in the Smokey Mountains. Spending a week using hand tools to build a piece of fine furniture is relaxing to me. I also got to see my old high school friend, a cousin, nephew and my brother … so it was a great week. While I was in class, the Scripting Wife spent the time running around with relatives, reading and mellowing out by the lake. I spent my days in class working on a dovetailed blanket chest, and my evenings working on a really cool Windows PowerShell module.
SA, I found the VBScript you were talking about in the How Can I Search for Two Items in a Text File Hey Scripting Guy! article from August 1, 2005. The SearchFileForTwoItems.vbs script is seen here.
SearchFileForTwoItems.vbs
Const ForReading = 1

blnFound = False

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("C:\fso\Hsg62910.txt", ForReading)
strContents = objFile.ReadAll
objFile.Close

If InStr(strContents, "Windows 2000") Then
    blnFound = True
End If

If InStr(strContents, "Windows XP") Then
    blnFound = True
End If

If blnFound Then
    Wscript.Echo "Either Windows 2000 or Windows XP appears in this file."
Else
    Wscript.Echo "Neither Windows 2000 nor Windows XP appears in this file."
End If
The SearchFileForTwoItems.vbs script can be directly translated into Windows PowerShell 2.0. The translated SearchFileForTwoItemsTransFmVBS.ps1 script is seen here.
SearchFileForTwoItemsTransFmVBS.ps1
add-type -AssemblyName microsoft.visualbasic
$strings = "microsoft.visualbasic.strings" -as [type]
$VbCrLf = "`r`n"
$forReading = 1
$blnFound = $false

$objFSO = New-Object -ComObject scripting.filesystemobject
$objFile = $objFSO.OpenTextFile("C:\fso\hsg62910.txt", $forReading)
$strContents = $objfile.ReadAll()
$objFile.close()

if($strings::instr($strContents, "Windows 2000")) {$blnFound = $true}
if($strings::instr($strContents, "Windows XP")) {$blnFound = $true}

If($blnFound)
  {"Either Windows 2000 or Windows XP appears in this file"}
Else
  {"Neither Windows 2000 or Windows XP appears in this file"}
The “secret” to gaining access to the VBScript instr function is to load the Microsoft.VisualBasic .NET Framework assembly. In Windows PowerShell 2.0 this is done by using the Add-Type cmdlet as seen here.
add-type -AssemblyName microsoft.visualbasic
Once the Microsoft.VisualBasic assembly has been added into the current Windows PowerShell session, the members of the Microsoft.VisualBasic.Strings .NET Framework class can be accessed directly. To use the instr static method, you would use syntax that is similar to that seen here.
PS C:\Users\ed.NWTRADERS> [microsoft.visualbasic.strings]::instr("a","abc")
0
The problem with this is that it is an awful lot of typing; therefore, I prefer to create an alias for the particular .NET Framework class I wish to use. To do this, use the -as operator and cast the string as a type. This is shown here.
$strings = "microsoft.visualbasic.strings" -as [type]
By creating an alias for the class, I can use a more intuitive syntax and avoid typing square brackets and the .NET Framework namespace name. The revised syntax is seen here.
PS C:\Users\ed.NWTRADERS> $strings::instr("abc","a")
1
To create a carriage return and a line feed Windows PowerShell uses a backtick r and a backtick n. When placed together they are “`r`n” --a less than intuitive syntax. To make the script easier to read, I create a variable $vbcrlf that will be familiar to VBScript users. This is seen here.
$VbCrLf = "`r`n"
When using the filesystemobject to open a text file, the default method is to open the file for reading; it is a common practice to create a variable or a constant called forreading to hold this instruction. This is seen here.
$forReading = 1
One “improvement” to the script is to assign a default value of $false to the $blnFound variable. It is always a best practice to initialize counter and control type variables to ensure predictable behavior from the script. This is seen here.
$blnFound = $false
The next thing the VBScript does is create the filesystemobject and store it in the objfso variable. It then opens the text file, reads the entire contents of the file into a variable and closes the file. These are the exact same steps that are performed in the section of the script seen here. To create a new object use the New-Object cmdlet and specify the –comobject parameter. One thing that is nice about the new-object cmdlet is that it knows that you are supplying a string to it for the object name, and therefore you do not need to use quotation marks around the string. The returned filesystemobject is stored in the $objFSO variable –which is the same name that was used in the original VBScript. The openTextFile method is exactly the same as the one used in the VBScript. When using Windows PowerShell, all methods must use parentheses –even if they do not accept any parameters. This section of the script is seen here.
$objFSO = New-Object -ComObject scripting.filesystemobject
$objFile = $objFSO.OpenTextFile("C:\fso\hsg62910.txt", $forReading)
$strContents = $objfile.ReadAll()
$objFile.close()
To determine if a string exists in the text from the file, the VBScript used the instr function. The instr static method from the Microsoft.visualbasic.strings .NET Framework class corresponds to the instr function from VBScript. The instr method returns a number that indicates the position in which the string was located. In the VBScript, the location where the string was found does not matter. What matters is a value that is greater than 0. If a string is not found, instr returns 0. This is seen here.
PS C:\Users\ed.NWTRADERS> $strings::instr("abc","e")
0
Therefore if instr finds the string, the if statement returns true, and therefore the $blnFound variable is assigned the value of $true. This section of the script works the same as the original VBScript and is shown here.
if($strings::instr($strContents, "Windows 2000")) {$blnFound = $true}
if($strings::instr($strContents, "Windows XP")) {$blnFound = $true}
If neither string is found, the Else statement kicks in and will display a string that says that the values were not found in the file. This is shown here.
Else {"Neither Windows 2000 or Windows XP appears in this file"}
SA, as promised I have migrated your VBScript to Windows PowerShell 2.0 and maintained the original syntax. I should tell you however, that you can accomplish essentially the same thing by using the Select-String cmdlet. The Select-String cmdlet accepts a path to a file, and it will search the file for a string that you specify. In addition, it will display the sentence where the text string was located. This is shown here.
PS C:\ed> Select-String -Path C:\fso\HSG62910.txt -Pattern "Windows 2000", "Windows XP"

C:\fso\HSG62910.txt:2:One of these sentences contains the string Windows 2000.
C:\fso\HSG62910.txt:3:The other sentence contains the string Windows XP.
It is possible to completely replicate the functionality of the SearchFileForTwoItems.vbs script in a single line, but it would be hard to read. Therefore, I decided to create a script … note that I used the same code from the SearchFileForTwoItemsTransFmVBS.ps1 script as seen here.
{"Either Windows 2000 or Windows XP appears in this file"} Else {"Neither Windows 2000 or Windows XP appears in this file"}
I was able to use the same code, because Windows PowerShell allows you to treat the output of the Select-String cmdlet as a Boolean value. The complete SearchFileForTwoItems.ps1 script that uses Select-String is seen here.
SearchFileForTwoItems.ps1
If(Select-String -Path C:\fso\HSG62910.txt -Pattern "Windows 2000", "Windows XP")
  {"Either Windows 2000 or Windows XP appears in this file"}
Else
  {"Neither Windows 2000 or Windows XP appears in this file"}
SA, that is all there is to migrating a VBScript to Windows PowerShell.
I like using select string to search all the files in a given directory, and give me the name of the file it's in and the line number:
select-string -pattern "Get-Date" -path *.* | foreach-Object{Write-Host $_.FileName, $_.linenumber}
I hope you have read my post on Restful Routing, but if you haven't, there is one thing you need to know. Restful controllers have a subset of the following actions: Index, Show, New, Create, Edit, Update, and Destroy. This series of posts will let you know what the responsibility of each action is and how to implement it in your ASP.NET MVC project. This particular post will focus on Index. This series assumes you have a basic understanding of MVC applications and will use terminology related to building applications.
What is Index?
Index should only be on controllers that deal with resources, and should not normally appear on controllers that deal with a singular resource. What is and isn't a resource depends completely on the view of the user. Here are some examples:
Resources: Blogs, Posts, Comments Resource: Account, Contact
Notice that resources are plural, whereas a resource is singular. So what does this have to do with Index?
Many Things
Since Index is located on a resources controller, you probably guessed that its main focus is to display a list of items. Index should do some if not all of the following things:
- Return a set of your resource.
- Allow the user to search the set.
- Allow the user to page the set
- Allow the user to order the set
Index is a GET operation, so it is only meant to retrieve data, and should not be modifying any of the resources directly.
Implementation
So that is the high level explanation of what Index should do but what should an implemented action look like?
public ActionResult Index(int page = 1, int size = 50, string q = null, string o = null)
{
    var model = new IndexViewModel(page, size, q, o);

    // get the set of items
    model.Items = Db.Posts
        // filter / search items / possibly order
        .Filter(model.Query)
        // order by
        .OrderBy(model.Ordering)
        // page the items
        .ToPagedList(model.Page, model.Size);

    return View(model);
}
There you have it. Your Index methods should be clean and straight to the point. The nice thing about this approach is your Index method is focused on the set, so right from the start you know that any actions you perform should be read-only set operations.
Conclusion
Keep Index as small and focused as possible. Make sure that it returns a set of items, and allows your users to do set based operations: Ordering, Search, and Paging. If you follow these guidelines you'll fall into the pit of success. As always have fun and keep coding :).
Follow Up
Filipe Lima asked what is the IndexViewModel about? Well, it isn't really anything exciting but I might as well show you. I usually create a ViewModel to hold the query data, along side the results of that query. It looks something like this. Then I pass that ViewModel up the chain to be rendered out.
public class IndexViewModel
{
    public IndexViewModel(int page, int size, string query, string orderBy)
    {
        // can have basic logic here to clean data
        Page = Math.Max(1, page);   // never allow pages below 1
        Size = Math.Min(size, 100); // cap the page size at 100
        Query = query;
        Ordering = orderBy ?? "Published";
        // default the posts so the list is never null
        Posts = new List<Post>().ToPagedList(1, 100);
    }

    // IPagedList comes from the PagedList NuGet library
    public IPagedList<Post> Posts { get; set; }
    public string Query { get; set; }
    public string Ordering { get; set; }
    public int Page { get; set; }
    public int Size { get; set; }
}
The nice thing about this approach is that your query information is immediately seeded into your view model and you can do basic validation on the data coming in to clean it up; for example, you might not allow pages below 1. You can follow this approach to help keep the code in your controller down. Hope that clears things up and thanks to Filipe Lima for asking :).
Hey Good post! But, I think that you have to go more deep about IndexViewModel. Because is a beginner post, and Index Action depends entirely of It. :)
Ok good point, I'll update the post with the IndexViewModel. Probably not going to be anything spectacular though.
Thank you Khalid! I'm Learning a lot here! I'm a beginner in ASP.NET MVC, thats why I wrote that! Keep doing this great job!
Updated the post. Oh no problem, feel free to follow me on Twitter and ask any questions you have.
Nice, everything got more clear to me now! Just two more questions: Is this Filter method from EF or you create It? I was trying to code something here and I cant find this method. How does the Query supposed to be?
@Filipe It is just Pseudo-code (not real), but there are extensions for Linq that allow you to do similar things. Take a look at this ScottGu post, and search NuGet for dynamic Linq. ( not sure which one is the right one).
If you don't want that, you can always create your own extension method called Filter that can add onto the IQueryable that EF gives you. I didn't want to confuse this post with EF code, so I just made some stuff up :)
Posted 04 Mar 2010
xmlns:telerikGridView="clr-namespace:Telerik.
Posted 09 Mar 2010
I just want to make sure that you have already tried to add our Silverlight controls to the Visual Studio 2008 toolbox by following the approach that is demonstrated in this help topic.
No presence of any
Posted 10 Mar 2010
Several people have reported the same issue but so far we haven't been able to reproduce it on our machines and we do not have a concrete recipe for solving the problem. Sometimes such problems occur due to incorrect installation of Silverlight or Visual Studio. Here is one thread on silverlight.net that has some relevant information. Maybe VS 2005 is to blame but I am just guessing. Hope the information is useful.
In regard to VS 2010, our WPF and Silverlight suite installers will install the controls into the toolbox. Also our assemblies should appear in the AddReference dialog of Visual Studio. In addition to that we have improved the Blend 3 support and our controls will appear in a separate category - for example, "Telerik RadControl for WP
User data such as setting parameters are usually stored in txt, json, or perhaps .csv format. One alternative for storing the simple data sets used for initial settings is Excel tables. Excel is a good way to store and view tables, with extensive formatting options and separate tabs for multiple data sets, which gives it an edge over txt and other plain formats. The only difficulty is that the data is not as easy to retrieve as from a .csv or txt file.
The script below utilizes Excel tables to extract information such as setting files and parameter values, while maintaining a neat way of viewing and changing all the parameters.
The script requires pyExcel, which is an interface module between Python and Excel. The concept is to retrieve all the tables and rows specified within a start tag and a closing tag.
The advantages are:
- Multiple tags can be used within the same excel sheet or other excel sheets as well.
- Number of columns can be edited easily.
- Space can be inserted between rows for easy viewing.
- Comment can be inserted so that particular row data can be easily bypassed.
- Normal excel formatting can be enabled without disruption to the data retrieved hence allowing easy viewing of data.
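The tag-scanning idea behind these advantages can be sketched in pure Python, with no Excel reader involved; the row tuples and tag names below simply mirror the conventions described here and are illustrative:

```python
def extract_block(rows, start_key='start//', end_key='end//', comment='#'):
    """Collect the rows between a start tag and an end tag.

    `rows` is a list of row tuples as an Excel reader might supply them.
    Blank spacer rows are skipped, and rows whose first cell starts with
    the comment character are bypassed.
    """
    capture = False
    block = []
    for row in rows:
        first = str(row[0]).strip() if row else ''
        if first.startswith(start_key):
            capture = True
            continue
        if first.startswith(end_key):
            break
        if not capture or not first or first.startswith(comment):
            continue  # outside the block, a spacer row, or commented out
        block.append(row)
    return block

rows = [
    ('ignored',),
    ('start//',),
    ('label1', 2.0, 3.0),
    ('',),                   # spacer row for easy viewing
    ('#label2', 5.0, 6.0),   # commented out, bypassed
    ('label3', 8.0, 9.0),
    ('end//',),
]
assert extract_block(rows) == [('label1', 2.0, 3.0), ('label3', 8.0, 9.0)]
```

The same scan works for multiple tagged blocks in one sheet by calling it once per tag pair.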
Below is a sample table which data will be extracted to be use in subsequent python functions or modules. Note the formatting is up to the user preferences.
The script run and its output are shown below. Note that there are various output formats to query from. Also notice that the space between rows and the commented rows are taken care of. The script is available on GitHub.
xls_set_class = XlsExtractor(fname=r'C:\Python27\Lib\site-packages\excel_table_extract\testset.xls',
                             sheetname='Sheet1',
                             param_start_key='start//',
                             param_end_key='end//',
                             header_key='header#3//',
                             col_len=3)

xls_set_class.open_excel_and_process_block_data()

print xls_set_class.data_label_list
## >>> [u'label1', u'label3']

print xls_set_class.data_value_list
## >>> [[2.0, 3.0], [8.0, 9.0]]

print xls_set_class.label_value_dict
## >>> {u'label1': [2.0, 3.0], u'label3': [8.0, 9.0]}

print xls_set_class.header_list
## >>> [u'header1', u'header2', u'header3']
Wonder where can I find pyET_tools which is required in pyExcel?
Hi Hwa Seong, you can simply use the pyExcel from the github link below:. In that case, you can just use import pyExcel without the dependency on pyET_tools. Hope that helps.
Kok Hua, sorry but I don’t quite follow you. The code has this:
try:
    import pyET_tools.win_program_manipulate as win
except:
    print 'Win function not installed'
    print "Some function will be disabled if module is not present"
And it shows up as error since the import fails. So I was hoping I could get the full win function in place.
Hi Hwa Seong, I'm not really sure this is an import issue, as it is under the try statement, meaning that if the import fails, the program will continue running. This win function only closes an existing window program and is not used in Excel_table_extract, so I do not think it will be a problem. Can you help to screenshot the exact error so I can look at it? Thanks
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
The PEP Manager is a registry for PEP Protocols that follow the generic syntax defined by the HTTP PEP protocol headers. All PEP Protocols are registered at run-time in the form of a PEP Module. A PEP Module consists of the following:
Note: The PEP Manager itself consists of BEFORE and an AFTER filter - just like the PEP Modules. This means that any PEP Module also can be registered directly as a BEFORE and AFTER filter by the Net Manager. The reason for having the two layer model is that the PEP Manager maintains a single URL tree for storing PEP information for all PEP Protocols.
A PEP Module has three resources it can use when creating PEP protocol headers:
This module is implemented by HTPEP.c (get it?), and it is a part of the W3C Sample Code Library.
#ifndef HTPEP_H
#define HTPEP_H

#include "HTList.h"
#include "HTReq.h"
#include "HTUTree.h"

typedef struct _HTPEPModule HTPEPModule;
A PEP Protocol is registered by registering a PEP Module with the PEP Manager.

You can add a PEP protocol by using the following method. Each of the callback functions must have the type defined below.
extern HTPEPModule * HTPEP_newModule(const char * protocol, HTNetBefore * before, HTNetAfter * after, HTUTree_gc * gc);
extern HTPEPModule * HTPEP_findModule (const char * protocol);
extern BOOL HTPEP_deleteModule (const char * protocol);
extern BOOL HTPEP_deleteAllModules (void);
The PEP Module is called.
Server applications can have different PEP setups for each hostname and port number they control. For example, a server with interfaces "" and "internal.foo.com" can have different protection setups for each interface.
Add a PEP information node to the database. If the entry is already found then it is replaced with the new one. The template must follow normal URI syntax but can include a wildcard. Return YES if added (or replaced), else NO.
extern HTPEPModule * HTPEP_findModule (const char * name);
As mentioned, the PEP Manager is itself a set of filters that can be registered by the Net manager.
extern HTNetBefore HTPEP_beforeFilter;
extern HTNetAfter HTPEP_afterFilter;
#endif /* NOT HTPEP_H */
The pickle module is not intended to be secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
There are fundamental differences between the pickle protocols and JSON (JavaScript Object Notation). The pickletools module contains tools for analyzing pickle data streams; its source code has extensive comments about the opcodes used by the pickle protocols.
There are currently 4 different protocols which can be used for pickling:
The highest protocol version available. This value can be passed as a protocol value.
The default protocol used for pickling. May be less than HIGHEST_PROTOCOL. Currently the default protocol is 3, a new protocol designed for Python 3.0.
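A small sketch of these protocol constants in use (the sample data is illustrative, not from the documentation):

```python
import pickle

data = {'answer': 42}

# A pickle can be produced with any protocol up to HIGHEST_PROTOCOL;
# loads() transparently detects which protocol a stream was written with.
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
    blob = pickle.dumps(data, protocol=proto)
    assert pickle.loads(blob) == data

# Protocol 2 and newer streams start with a 0x80 marker byte
# followed by the protocol number.
assert pickle.dumps(data, protocol=2)[:2] == b'\x80\x02'
```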
The pickle module provides the following functions to make the pickling process more convenient:
Write a pickled representation of obj to the open file object file. This is equivalent to Pickler(file, protocol).dump(obj).
Return the pickled representation of the object as a bytes object, instead of writing it to a file.
If fix_imports is true and protocol is less than 3, pickle will try to map the new Python 3.x names to the old module names used in Python 2.x, so that the pickle data stream is readable with Python 2.x.

loads() reads a pickled object hierarchy from a bytes object and returns the reconstituted object hierarchy specified therein.
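A minimal round trip through these functions (the sample record is illustrative):

```python
import io
import pickle

record = {'version': 3, 'tags': ['a', 'b']}

# In-memory round trip: dumps() returns the pickle as a bytes object,
# loads() reconstitutes it.
blob = pickle.dumps(record)
assert pickle.loads(blob) == record

# dump()/load() do the same through any binary file object.
buf = io.BytesIO()
pickle.dump(record, buf)
buf.seek(0)
assert pickle.load(buf) == record
```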
The pickle module defines three exceptions:
Common base class for the other pickling exceptions. It inherits Exception.
Error raised when an unpicklable object is encountered by Pickler. It inherits PickleError.
Refer to What can be pickled and unpickled? to learn what kinds of objects can be pickled.

UnpicklingError: Error raised when there is a problem unpickling an object, such as a data corruption or a security violation. It inherits PickleError.
This takes a binary file for writing a pickle data stream.
Write a pickled representation of obj to the open file object given in the constructor.
Do nothing by default. This exists so a subclass can override it.
If persistent_id() returns None, obj is pickled as usual. Any other value causes Pickler to emit the returned value as a persistent ID for obj; the meaning of this persistent ID should be defined by Unpickler.persistent_load().

A pickler object's dispatch table is a registry of reduction functions. If a Pickler object does not have a dispatch_table attribute, it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class.
See Dispatch Tables for usage examples.
New in version 3.3.
Deprecated. Enable fast mode if set to a true value. The fast mode disables the usage of memo, therefore speeding the pickling process by not generating superfluous PUT opcodes. It should not be used with self-referential objects; doing otherwise will cause Pickler to recurse infinitely.

Use pickletools.optimize() if you need more compact pickles.

load() reads a pickled object representation from the open file object given in the constructor, and returns the reconstituted object hierarchy specified therein. Bytes past the pickled object's representation are ignored.
persistent_load() raises an UnpicklingError by default.
If defined, persistent_load() should return the object specified by the persistent ID pid. If an invalid persistent ID is encountered, an UnpicklingError should be raised.
See Persistence of External Objects for details and examples of uses.
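A hedged sketch of this persistent-ID machinery; the `records` table and the `'record'` tag are invented for the example:

```python
import io
import pickle

# A toy table of "external" objects standing in for a database.
records = {1: ('alice',), 2: ('bob',)}

class RefPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # Emit a persistent ID for objects that live in `records`
        # instead of serializing their contents.
        for key, val in records.items():
            if obj is val:
                return ('record', key)
        return None  # everything else is pickled as usual

class RefUnpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        tag, key = pid
        if tag != 'record':
            raise pickle.UnpicklingError('unsupported persistent id')
        return records[key]

buf = io.BytesIO()
RefPickler(buf).dump(['header', records[1]])

restored = RefUnpickler(io.BytesIO(buf.getvalue())).load()
assert restored[0] == 'header'
assert restored[1] is records[1]  # the external object was not copied
```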
Import module if necessary and return the object called name from it, where the module and name arguments are str objects. Note, unlike its name suggests, find_class() is also used for finding functions.
Subclasses may override this to gain control over what type of objects and how they can be loaded, potentially reducing security risks. Refer to Restricting Globals for details. [2]
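Overriding find_class() can be sketched as follows; the whitelist of safe builtins is an illustrative choice, not a recommendation:

```python
import builtins
import io
import pickle

# Names from builtins we consider safe to resolve during unpickling.
safe_builtins = {'range', 'complex', 'min', 'max', 'list', 'set', 'frozenset'}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Only allow a whitelisted subset of builtins; any other global
        # (arbitrary classes, dangerous callables, ...) is rejected.
        if module == "builtins" and name in safe_builtins:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(
            "global '%s.%s' is forbidden" % (module, name))

def restricted_loads(data):
    """Analog of pickle.loads() that refuses unsafe globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

assert restricted_loads(pickle.dumps([1, 2, range(3)])) == [1, 2, range(3)]
```

Attempting to load a pickle that references any non-whitelisted global raises UnpicklingError instead of importing it.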
In protocol 2 and newer, classes that implement the __getnewargs__() method can dictate the values passed to the __new__() method upon unpickling. This is often needed for classes whose __new__() method requires arguments. [3]
Although powerful, implementing __reduce__() directly in your classes is error prone. For this reason, class designers should use the high-level interface (i.e., __getnewargs__(), __getstate__() and __setstate__()) whenever possible. We will show, however, cases where using __reduce__() is the only option or leads to more efficient pickling or both. When defined, __reduce__() takes no argument and returns either a string or, preferably, a tuple of between two and five items, typically a callable and the arguments to pass to it when the object is recreated.
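A compact example of implementing __reduce__() directly, returning a (callable, args) pair; the class is invented for illustration:

```python
import pickle

class Interval:
    """Toy class using __reduce__: unpickling calls Interval(lo, hi) again."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __reduce__(self):
        # Return (callable, args): the callable is invoked with args
        # at unpickling time to recreate the object.
        return (Interval, (self.lo, self.hi))

restored = pickle.loads(pickle.dumps(Interval(1, 5)))
assert (restored.lo, restored.hi) == (1, 5)
```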
For the benefit of object persistence, the pickle module supports the notion of a reference to an object outside the pickled data stream. Such objects are referenced by a persistent ID, which should be either a string of alphanumeric characters (for protocol 0) or just an arbitrary object (for any newer protocol).
Recent versions of the pickle protocol (from protocol 2 and upwards) feature efficient binary encodings for several common features and built-in types. Also, the pickle module has a transparent optimizer written in C.
For the simplest code, use the dump() and load() functions.
import pickle

# An arbitrary collection of objects supported by pickle.
data = {
    'a': [1, 2.0, 3, 4+6j],
    'b': ("character string", b"byte string"),
    'c': set([None, True, False])
}
Heap and Stack Memory Errors in Java
In this tutorial, we will see the basic errors that can be in a heap or stack memory in Java.
Heap memory - This is a special memory area where java objects are stored.
Stack memory - Temporary memory area to store variables when method invoked.
The main exception that describes a problem with the heap memory is java.lang.OutOfMemoryError.
Java Heap Space
Thrown when a Java program fails to allocate a new object in the heap memory area.
GC Overhead Limit Exceeded
Thrown when a Java program is spending too much time on garbage collection: garbage collection takes 98% of program time and recovers less than 2% of the heap.
import java.util.ArrayList;
import java.util.List;

public class OutOfMemoryErrorDemo {

    public static void main(String[] args) {
        int i = 0;
        List<String> stringList = new ArrayList<>();
        while (i < Integer.MAX_VALUE) {
            i++;
            String generatedString = new String(
                    "Some string generated to show out of memory error example " + i);
            stringList.add(generatedString);
        }
    }
}
stringList holds references to our generated strings, which is why the GC cannot delete them from memory. The GC tries to delete any other garbage in the application, but that is not enough.
Requested Array Size Exceeds VM Limit
Thrown when a program tries to allocate an array larger than the VM limit.
public class OutOfMemoryErrorDemo {

    public static void main(String[] args) {
        // we try to create an array that is too long
        long[] array = new long[Integer.MAX_VALUE];
    }
}
Metaspace
The exception is thrown with that message when there is no space left in the metaspace area for class metadata.
Request Size Bytes for Reason. Out of Swap Space?
Thrown when the JVM fails to allocate memory in the native heap, or the native heap is close to exhaustion.
reason stack_trace_with_native_method
Thrown when a Java Native Interface (JNI) call or a native method fails to allocate memory in the heap.
StackOverflowError - thrown when there are too many method calls. Usually the exception is thrown by a method that has recursion inside.
public class StackOverFlowErrorDemo {

    public static void main(String[] args) {
        recursiveMethod(2);
    }

    public static int recursiveMethod(int i) {
        // it will never stop, and each call consumes more stack memory
        return recursiveMethod(i);
    }
}
BSD.PORT.MK(5) BSD Reference Manual BSD.PORT.MK(5)
bsd.port.mk - ports tree master Makefile fragment
.include <bsd.port.mk>
bsd.port.mk holds all the standard routines used by the ports tree. Some variables and targets are for its internal use only. The rest is docu- mented here. Other BSD variants, as well as older versions of bsd.port.mk, include other targets and variables. Conversion methods are outlined here. This is not quite complete; a few variables and targets are not yet docu- mented. Mostly because undocumented stuff has fuzzy semantics, and it hasn't been decided yet how to define it.
{build,run,all}-dir-depends
        Print all dependencies for a port in order to build it, run it,
        or both. The output is formatted as package specification pairs,
        in a form suitable for tsort(1).

full-{build,run,all}-depends
        Print all dependencies a package depends upon for building,
        running, or both, as a list of package names.

{build,lib,run}-depends-list
        Print a list of first level package specifications a port
        depends on as build dependencies, library dependencies, or run
        dependencies.

print-{build,run}-depends
        User convenience target that displays the result of
        full-{build,run}-depends in a more readable way.

{pre,do,post}-*
        Most standard targets can be specialized according to a given
        port's needs. If defined, the pre-* hook will be invoked before
        running the normal action; the do-* hook will be invoked instead
        of the normal action; the post-* hook will be invoked after the
        normal action. Specialization hooks exist for build, configure,
        distpatch, extract, fake, fetch, install, package, patch,
        regress. See individual targets for exceptions.

addsum  Complete the ${CHECKSUM_FILE} record of checksums with files
        that have been added since makesum. Complain if anything does
        not match.

build, all
        Default target. Build the port. Essentially invoke

        env -i ${MAKE_ENV} ${MAKE_PROGRAM} ${MAKE_FLAGS} \
            -f ${MAKE_FILE} ${ALL_TARGET}

build-depends
        Verify the ports mentioned in BUILD_DEPENDS, by checking the
        corresponding packages are actually installed, and install the
        missing ports by recursing through the ports tree. Invoked right
        after creating the working directory.

checkpatch
        Debugging version of the patch target that simulates invoking
        patch(1).

checksum
        Check distribution archives and distribution patches control
        sums against the results recorded in ${CHECKSUM_FILE}, using the
        cryptographic signature utilities defined. Normally ${ALLFILES}
        are checksummed, unless IGNOREFILES is set to a list of files to
        ignore.
        Invoking checksum with REFETCH=true will try to fetch a version
        with the correct checksum from the OpenBSD main archive site in
        the case of a checksum mismatch. NO_CHECKSUM can be used to
        avoid all checksumming steps.

clean   Clean ports contents. By default, it will clean the work
        directory. It can be invoked as

        mmake clean='[depends bulk work fake flavours dist install sub package packages]'

        work      Clean work directory.
        bulk      Clean bulk cookie.
        depends   Recurse into dependencies.
        flavours  Clean all work directories.
        dist      Clean distribution files.
        install   Uninstall package.
        package   Remove package file (and copies in ${CDROM_PACKAGES}
                  and ${FTP_PACKAGES}).
        readmes   Clean files generated through the readme targets
                  (html files).
        sub       With install or package, clean subpackages as well.
        packages  Short-hand for 'sub package'.

clean-depends
        Short hand for mmake clean=depends.

configure
        Configure the port. Might be a void operation. Unless
        overridden, configure creates the ${WRKBUILD} directory, runs
        ${SCRIPTDIR}/configure if it exists, and runs whatever
        configuration methods are recorded in CONFIGURE_STYLE.

depends
        Check all the port's dependencies, that is: build-depends,
        lib-depends, run-depends, regress-depends.

describe
        Prints a one-line index entry of the port, suitable for
        ${PORTSDIR}/INDEX.

distclean
        Short-hand for mmake clean=dist.

distpatch
        Apply distribution patches only. See patch and PATCH_CASES for
        details.

extract
        Extract the distribution files under ${WRKDIR} (but see
        EXTRACT_ONLY). Refer to EXTRACT_CASES for a complete
        description. Do not use pre-extract and do-extract hooks.

fake    Do a fake port installation, that is, simulate the port
        installation under ${WRKINST}. There are no do-fake and
        post-fake hooks. fake actually uses pre-fake, pre-install,
        do-install and post-install. Described in a separate section
        below.

fetch   Fetch the distribution files and patchfiles, using
        ${FETCH_CMD}.
        Each file of the DISTFILES and PATCHFILES lists is retrieved, if
        necessary, from the list of sites in MASTER_SITES. If a filename
        ends with a ':0' to ':9' extension, it will be retrieved from
        MASTER_SITES0 to MASTER_SITES9 instead.

        The ports framework uses ${DISTDIR}/${DIST_SUBDIR} (aliased to
        ${FULLDISTDIR}) to cache the ports distribution files and patch
        files. Note that this framework is also used by mirroring
        scripts, which will also retrieve SUPDISTFILES, to fill with
        supplementary distribution files which are not needed for every
        configuration.

        See ALLFILES, CDROM_SITE, DISTDIR, DISTFILES, DIST_SUBDIR,
        FETCH_CMD, FETCH_MANUALLY, FETCH_SYMLINK_DISTFILES,
        FULL_DISTDIR, MASTER_SITES, MASTER_SITES0, ..., MASTER_SITES9,
        PATCH_FILES, SUPDISTFILES, REFETCH.

install
        Install the package after building. See the description of THE
        FAKE FRAMEWORK for the non-intuitive details of the way
        {pre,do,post}-install hooks are actually used by the ports
        tree.

lib-depends
        Verify that the library dependencies a port needs are actually
        there, by checking the library specifications.

lib-depends-check
        Verify that the LIB_DEPENDS hold all shared libraries used for
        the port. See library-specs(7).

license-check
        Check that PERMIT_PACKAGE_* settings match: if any dependency
        has a more restrictive setting, warn about it. This warning is
        advisory, because the automated license checking cannot figure
        out which ports were used only for building and did not taint
        the current port.

link-categories
        Create symbolic links in other directories that correspond to
        the port's CATEGORIES. Note that this does not affect bulk
        package building, since those links don't appear in the
        upper-level Makefiles. See also unlink-categories.

makesum
        Create the ${CHECKSUM_FILE} list of recorded checksums by
        running the cryptographic fingerprints (hashes) rmd160, tiger,
        sha1, sha256 and md5 on ${ALLFILES} minus ${IGNOREFILES}.
        NO_CHECKSUM can be used to avoid all checksumming steps.
manpages-check
        Verify that makewhatis(8) can do a correct job with the port's
        manpages.

package
        Build a port package (or packages in MULTI_PACKAGES cases) from
        the fake installation. Involves creating packaging information
        from templates (see COMMENT, SED_PLIST, SUBST_VARS among
        others) and invoking pkg_create(1) for each package in the
        MULTI_PACKAGES list. If ${PERMIT_PACKAGE_FTP} is set to 'Yes',
        copies built packages into ${FTP_PACKAGES}, using hard links if
        possible. If ${PERMIT_PACKAGE_CDROM} is set to 'Yes', copies
        built packages into ${CDROM_PACKAGES}, using hard links if
        possible.

patch   Apply distribution and OpenBSD specific patches. Because of
        historical accident, patch does not follow the exact same
        scheme other standard targets do. Namely, patch invokes
        pre-patch (if defined), do-patch, and post-patch, but the
        default do-patch target invokes distpatch directly. So, if the
        do-patch target is overridden, it should still begin by calling
        mmake distpatch, before applying OpenBSD specific patches.
        Accordingly, the exact sequence of hooks is: pre-patch,
        do-distpatch, post-distpatch, do-patch, post-patch. If
        ${PATCHDIR} exists, the files described under PATCH_LIST will
        be applied under WRKDIST.

readmes
        Create an html description of packages, including comments,
        description, and dependencies.

rebuild
        Force rebuild of the port.

regress
        Run regression tests for the port. Essentially depend on a
        correct build and invoke

        env -i ${MAKE_ENV} ${MAKE_PROGRAM} ${REGRESS_FLAGS} \
            -f ${MAKE_FILE} ${REGRESS_TARGET}

        If a port needs some other ports installed to run regression
        tests, use REGRESS_DEPENDS. If a port needs special
        configuration or build options to enable regression testing,
        define a 'regress' FLAVOUR.

regress-depends
        Verify packages needed for regression tests, using the same
        scheme as build-depends. Only invoked when regression tests are
        run, or explicitly through depends.
reinstall
    Force reinstallation of a port, by first cleaning the old installation.

repackage
    Force rebuilding of the packages of a port, by first removing the old packages.

run-depends
    Verify the ports mentioned in RUN_DEPENDS, by checking the corresponding packages are actually installed, and install the missing ports by recursing through the ports tree. Invoked right before installing the package.

show
    Invoked as mmake show=name, show the contents of ${name}. Invoked as mmake show="name1 name2 ...", show the contents of ${name1} ${name2} ..., one variable value per line. Mostly used from recursive makes, or to know the contents of another port's variables without guessing wrongly.

unlink-categories
    Remove symbolic links in other directories that correspond to the port's CATEGORIES. See also link-categories.

update-patches
    Create or update patches for a port, using diff(1) between file and file.orig, based on file.orig existence. In order to generate a patch, the original file needs to be named file.orig and file edited. After the target is invoked, the patches are placed under the patches/ directory. It moves existing patches from patch-file to patch-file.orig.

update-plist, plist
    Update the packing lists for a port, using the fake installation and the existing packing lists. update-plist should produce mostly correct PLIST, PFRAG.shared and PFRAG.no-shared files, handling shared libraries, GNU info(1) files, setuid files, and empty directories. It moves existing files to PLIST.orig, PFRAG.shared.orig and PFRAG.no-shared.orig. If the generated lists include files and directories that shouldn't be included, comment these like this:

        @comment unwanted-file
        @comment @dirrm unwanted-dir

    Subsequent calls to update-plist will automatically recognize and handle such lines correctly. update-plist does not handle flavour situations yet, so beware.
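The update-patches convention can be sketched with plain shell; the file names and scratch directory below are invented, and diff(1) stands in for what the target actually records:

```shell
# Simulate editing a file under a port's WRKDIST (scratch dir, made up).
demo=$(mktemp -d)
cd "$demo"
printf 'int main(void){return 1;}\n' > hello.c
cp hello.c hello.c.orig                          # keep the pristine copy first
printf 'int main(void){return 0;}\n' > hello.c   # then edit the live file
# update-patches records a unified diff of .orig vs. edited, much like:
diff -u hello.c.orig hello.c || true             # diff exits 1 when files differ
```

When the real target runs, the resulting patch-* file lands under the patches/ directory.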
These variables should not be modified by individual ports. You can add them to the command line, set them as (exported) environment variables, or put them into mmake.cfg(5) to tune various options.

BATCH
    Set to 'Yes' to avoid ports that require user interaction. Use in conjunction with INTERACTIVE to simplify bulk-package builds. (See IGNORE).

BIN_PACKAGES
    If set to 'Yes', the package target will trust a package built in the repository to be up-to-date, and will not rebuild it if the work directory is absent. See also BULK, TRUST_PACKAGES.

BULK
    If set to 'Yes', successful package builds and installations will clean their working directories, after invoking any targets mentioned in BULK_TARGETS. Can be set on a per-${PKGPATH} basis. For instance, setting BULK_misc/screen=No will override any BULK=Yes passed on the command line. See BULK_COOKIES_DIR, BIN_PACKAGES, TRUST_PACKAGES.

BULK_COOKIES_DIR
    Used to store cookies for successful bulk-package builds, defaults to ${PORTSDIR}/Bulk.

CDROM_PACKAGES
    Base location where packages suitable for a CDROM (see PERMIT_PACKAGE_CDROM) will be placed (default: ${PKGREPOSITORY}/CDROM).

COPTS
    Supplementary options appended to ${CFLAGS} for building. Since most ports ignore the COPTS convention, they are actually told to use ${CFLAGS} ${COPTS} as CFLAGS.

CXXOPTS
    Supplementary options appended to ${CXXFLAGS} for building.

ECHO_MSG
    Used to display '===> Configuring for foo' and similar informative messages. Override to turn off, for instance.

FETCH_CMD
    Command used to fetch distribution files for this port. Defaults to ftp(1). Actually, this is "ftp -o". Can be used to go through excessively paranoid firewalls.

FETCH_SYMLINK_DISTFILES
    Set to '' to copy distribution files off CDROM_SITE instead of symlinking them.

FTP_PACKAGES
    Base location where packages suitable for ftp (see PERMIT_PACKAGE_FTP) will be placed (default: ${PKGREPOSITORY}/FTP).

INTERACTIVE
    Set to 'Yes' to skip all non-interactive ports.
    Used in conjunction with BATCH to simplify bulk-package builds.

NO_CHECKSUM
    Set to 'Yes' to avoid checksum, makesum, and addsum actions entirely. Beware of the full implications of this mechanism, namely that it disables entirely the basic authentication mechanisms of the ports tree.

NO_DEPENDS
    Don't verify build of dependencies. Do not use in any port's Makefile. This is only meant as a user convenience when, e.g., you just want to browse through a given port's source and do not wish to trigger the build of dependencies.

NO_IGNORE
    If set to 'Yes', avoid ignoring a port for the usual reasons. Use, for instance, for fetching all distribution files, or for fixing a broken port. See also IGNORE.

PKG_DBDIR
    Path to package installation records. Defaults to /var/db/pkg.

PKGREPOSITORY
    Location for packages built (default ${PORTSDIR}/Packages).

REFETCH
    If set to true, checksum will analyze ${CHECKSUM_FILE}, and try retrieving files with the correct checksum off, in the directory /pub/OpenBSD/distfiles/$cipher/$value/$file.

REPORT_PROBLEM
    A command which is run when a port build fails during a recursive run, i.e. when running mmake in a subdirectory or for a whole category. The default is to abort the build; during bulk builds, the port name is written to a file in ${PORTSDIR} called Failed, and the build is continued at the next port.

SUDO
    If set to sudo(8) in mmake.cfg(5), the ports tree will only invoke root's privileges for the parts that really require it.

TEMPLATES
    Base location for the templates used in the readmes target.

TRUST_PACKAGES
    If set to 'Yes', dependency mechanisms will assume the database of installed packages is correct. See also BIN_PACKAGES, BULK.

WARNINGS
    If set to 'Yes', add CDIAGFLAGS to CFLAGS and CXXDIAGFLAGS to CXXFLAGS.
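Several of the variables above are meant for mmake.cfg(5); a hypothetical configuration for unattended builds might read:

```makefile
# Hypothetical mmake.cfg(5) fragment for bulk building.
SUDO=               sudo    # gain root only where really required
BATCH=              Yes     # skip ports needing user interaction
BULK=               Yes     # clean work directories after each package
BULK_misc/screen=   No      # ...but keep screen's work directory around
```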
Note that some variables are marked as 'read-only', which means that they shouldn't ever be changed.

ALLFILES
    List of all files that need to be retrieved by fetch, with master site selection extension removed. Read-only.

ALL_TARGET
    Target used to build software. Default is 'all'. Can be set to empty, to yield a package's default target.

ARCH
    Current machine architecture (read-only).

AUTOCONF
    Location of the autoconf binary if needed. Defaults to autoconf (though mmake autoreconf might be more appropriate).

AUTOCONF_DIR
    Where to invoke autoconf if ${CONFIGURE_STYLE} includes autoconf. Defaults to ${WRKSRC}.

AUTOCONF_NEW
    For 'CONFIGURE_STYLE=autoconf', specify which version of autoconf to use. 'no' selects the older version 2.13, 'yes' selects the current version from the 2.5x series. If autoconf must be run manually, MODGNU_AUTOCONF_DEPS can be used to specify what packages to depend upon.

AUTOCONF_VERSION
    Manually set the autoconf version to use. Please note that this variable is deprecated in MirPorts, and AUTOCONF_NEW (see above) should be used.

AUTOGEN
    For 'CONFIGURE_STYLE=autogen', set the name of the autogen.sh file to call. The default is to use a standard version from the MirPorts infrastructure.

BROKEN
    Define only for broken ports, set to the reason the port is broken. See also NO_IGNORE.

BSD_INSTALL_{PROGRAM,SCRIPT,DATA,MAN}[_DIR]
    Macros passed to mmake and configure invocations. Set based on corresponding INSTALL_* variables.

BUILD_DEPENDS
    List of other ports the current port needs to build correctly. Each item has the form '[legacy]:[pkgspec]:directory[,-subpackage][,flavour ...][:target]'. 'target' defaults to 'install' if it is not specified. 'legacy' used to be a file to check. The ports tree now uses 'pkgspec' instead, as a package that must be installed prior to the build.
    'directory' is set relative to ${PORTSDIR}. 'subpackage' is an optional subpackage name, to install instead of the default main package name. 'flavour ...' is a comma separated list of flavours. By default, the dependency will build the default flavour. Build and lib dependencies are checked at the beginning of the extract stage. Build dependencies that are not the default package or install target will be processed in a subdirectory of the working directory, specifically, in ${WRKDIR}/directory.

BULK_FLAGS
    Flags to pass to build each target in BULK_TARGETS.

BULK_TARGETS
    Targets to run after each bulk package build before cleaning up the working directory. Empty defaults. Can be set on a per-${PKGPATH} basis, e.g., BULK_TARGETS_${PKGPATH}=...

BZIP2
    Name of the bzip2 binary.

B_R_DEPENDS
    List of other ports the current port needs to build and run correctly. The contents of this variable is internally simply added to the BUILD_DEPENDS and RUN_DEPENDS variables, see there for reference.

CATEGORIES
    List of descriptive categories into which this port falls. Mandatory. See link-categories, unlink-categories.

CDIAGFLAGS
    Flags appended to CFLAGS if WARNINGS is set.

CDROM_SITE
    Path to a local database that holds distribution files (usually a CD-ROM or other similar media), used to retrieve distribution files before going to the network. Defaults to /cdrom/distfiles. Set to empty to avoid checking any path. Distribution files are still copied or linked (see FETCH_SYMLINK_DISTFILES) into DISTDIR if they are found under CDROM_SITE.

CFLAGS
    Default flags passed to the compiler for building. Many ports ignore it. See also COPTS, CDIAGFLAGS.

CHECKSUM_FILE
    Location for this port's checksums, used by addsum, checksum, and makesum. Defaults to distinfo.

CLEANDEPENDS
    If set to 'Yes', 'mmake clean' will also clean dependencies. Can be overridden on a per-${PKGPATH} basis, by setting CLEANDEPENDS_${PKGPATH}.

COMMENT
    Comment used for the package, and in the INDEX.
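Putting the dependency syntax together, hypothetical port Makefile lines (directory, library, and package names invented) could look like:

```makefile
# ':pkgspec:directory[,-subpackage][,flavour ...][:target]' in practice:
BUILD_DEPENDS=  ::devel/gmake               # any gmake package will do
LIB_DEPENDS=    jpeg.62::graphics/jpeg      # shared library dependency
RUN_DEPENDS=    :bar-*:misc/bar,static      # bar, built with flavour 'static'
```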
COMMENT-foo
    Comment used for sub package foo in a multi-package set up.

COMMENT-vanilla
    Comment used for a flavoured package, if the non-flavoured comment is inappropriate.

COMMENT-foo-vanilla
    Comment used for a sub-, flavoured package.

COMES_WITH (deprecated)
    The first release where the port was made part of the standard OpenBSD distribution. Normally not used in MirPorts.

CONFIGURE_ARGS
    Arguments to pass to the configure script. Defaults are empty, except for gnu-style configure, where prefix and sysconfdir are set.

CONFIGURE_ENV
    Basic environment passed to the configure script (path and libtool setup). gnu-style configure adds a lot more variables.

CONFIGURE_SCRIPT
    Set to the name of the script invoked by the configure target, if appropriate. Should be relative to ${WRKSRC}.

CONFIGURE_SHARED
    Set to --enable-shared or --disable-shared, depending on whether the architecture supports shared libraries. Should be appended to CONFIGURE_ARGS, for ports that build dynamic libraries and whose configure script supports these options.

CONFIGURE_STYLE
    Set to the style of configuration that needs to happen.

    bmake
        Run make obj as configure script and make depend at pre-build time.
    perl
        Assume perl(1) ExtUtils::MakeMaker(3p) style. This is the style used by most Perl modules with a Makefile.PL.
    p5
        Same as perl, but does not create any targets, just the variables.
    modbuild
        Perl modules using Module::Build(3), i.e. with a Build.PL file.
    gnu
        Assume gnu configure style.
    dest
        Add this if the port does not handle DESTDIR correctly, and needs to be configured to add DESTDIR to prefixes (see also DESTDIRNAME).
    old
        Add this if the port is an older autoconf port that does not recognise --sysconfdir.
    autoconf
        autoheader and autoconf need to be rerun first (implies gnu). Before running the two, the autoconf infrastructure is updated to MirLibtool.
    no-autoheader
        (with autoconf) Do not run autoheader. Use this for broken ports if autoheader aborts with an error message.
    automake
        automake may need to be rerun. Please note that you still must do this manually. Otherwise, automake will be explicitly disabled.
    autogen
        Run autogen.sh before invoking configure. Normally, this script runs aclocal, automake, autoheader, and autoconf. If AUTOGEN is not set, an autogen.sh script from the MirPorts infrastructure is run.
    imake
        Assume the port configures using the X11 ports Imakefile framework.
    noman
        (with imake) The port has no man pages the Imakefile should try installing.
    simple
        There is a configure script, but it does not fit the normal gnu configure conventions.

CVS_DIST*
    Enables an operation mode where the distfile is fetched from CVS. The following variables can be set by the user:

    CVS_DISTDATE
        The value of the -D option to cvs checkout. Optional, but either this or CVS_DISTTAGS must be specified.
    CVS_DISTFILE
        The file name prefix. Optional (derived from CVS_DISTMODS), unless more than one module is checked out.
    CVS_DISTMODS
        Module(s) to check out, separated by whitespace.
    CVS_DISTREPO
        Repository to check out from, i.e. the value of the -d option to cvs. If it does not start with a colon, CVSREADONLYFS is set automatically, which is not compatible with the OpenBSD version of anoncvssh.
    CVS_DISTTAGS
        The value of the -r option to cvs checkout. Optional, but either this or CVS_DISTDATE must be specified.

    For the full documentation for these variables, see cvs-distfiles(7). WRKDIST is set automatically to ${WRKDIR}/${CVS_DISTMODS}. PKGNAME defaults to ${_CVS_DISTF:R}-${DASH_VER}.

CXXDIAGFLAGS
    Flags appended to CXXFLAGS if WARNINGS is set.

CXXFLAGS
    Default flags passed to the C++ compiler for building. Many ports ignore it.

DASH_VER
    A numeric, decimal value that is the last component of ${PKGNAME} without any flavour or subpackage extensions. This replaces the former "package patchlevel". Default is 0.

DEF_UMASK
    Correct value of umask for the port to build and package correctly. Tested against the actual umask at fake time.
    Default is 022. Don't override.

DESTDIR
    See DESTDIRNAME.

DESTDIRNAME
    Name of the variable to set to ${WRKINST} while faking. Usually DESTDIR. To be used in the rare cases where a port heeds DESTDIR in a few directories and needs to be configured with 'gnu dest', so that those few directories do not get in the way.

DISTDIR
    Directory where all ports distribution files and patchfiles are stashed. Defaults to ${PORTSDIR}/Distfiles. Override if distribution files are stored elsewhere. Always use FULLDISTDIR to refer to ports' distribution files location, as it takes an eventual DIST_SUBDIR into account.

DISTFILES
    The main port's distribution files (the actual software source, except for binary-only ports). Will be retrieved from the MASTER_SITES (see fetch), checksummed and extracted (see checksum, extract). DISTFILES normally holds a list of files, possibly with ':0' to ':9' appended to select a different MASTER_SITES. See also SUPDISTFILES.

DISTNAME
    Name used to identify the port. See DISTFILES and PKGNAME.

DISTORIG
    Suffix used by distpatch to rename original files. Defaults to .bak.orig. Distinct from .orig to avoid confusing update-patches.

DIST_SUBDIR
    Optional subdirectory of ${DISTDIR} where the current port's distribution files and patchfiles will be located. See target fetch.

EMUL
    List of binary emulations (kernel personalities) the software needs. Only necessary for binary-only software. If the necessary binary emulation is turned off or not available, the port is ignored.

ERRORS
    List of errors found while parsing the port's Makefile. Display the errors before making any target, and if any error starts with "Fatal:", do not make anything. For instance:

        .if !defined(COMMENT)
        ERRORS+="Fatal: Missing comment"
        .endif

EXTRACT_CASES
    In the normal extraction stage (when EXTRACT_ONLY is not empty), this is the contents of a case statement, used to extract files.
    Fragments are automatically appended to extract tar and zip archives, so that the default case is equivalent to the following shell fragment:

        set -e
        cd ${WRKDIR}
        for archive in ${EXTRACT_ONLY}
        do
            case $$archive in
            *.zip)
                unzip -q ${FULLDISTDIR}/$$archive -d ${WRKDIR};;
            *.tar.bz2)
                bzip2 -dc ${FULLDISTDIR}/$$archive | tar xf -;;
            *.shar.gz|*.shar.Z|*.sh.Z|*.sh.gz)
                gzcat ${FULLDISTDIR}/$$archive | /bin/sh;;
            *.shar|*.sh)
                /bin/sh ${FULLDISTDIR}/$$archive;;
            *.tar)
                tar xf ${FULLDISTDIR}/$$archive;;
            *)
                gzip -dc ${FULLDISTDIR}/$$archive | tar xf -;;
            esac
        done

EXTRACT_ONLY
    Set if not all ${DISTFILES} should be extracted at do-extract stage. Default value is ${DISTFILES}.

EXTRACT_SUFX
    Used to set DISTFILES default value to ${DISTNAME}${EXTRACT_SUFX}. Default value is .tar.gz.

FAKE
    Automatically set to 'Yes' for most ports (and all new ports). Indicates that the port, using FAKE_FLAGS magic, will properly fake installation into ${WRKINST}, to be packaged and properly installed from the package. Set to 'No' in very rare cases, and during port creation.

FAKE_FLAGS
    Flags passed to ${MAKE_PROGRAM} on fake invocation. By default, ${DESTDIRNAME}=${WRKINST}.

FAKE_TARGET
    Target built by ${MAKE_PROGRAM} on fake invocation. Defaults to ${INSTALL_TARGET}.

FAKEOBJDIR
    If non empty, used as a base for the fake area. The real fake directory ${WRKINST} is created there. Can be set on a per-${PKGPATH} basis. For instance, setting FAKEOBJDIR_www/mozilla=/tmp/obj will affect only the mozilla port.

FETCH_MANUALLY
    Some ports' distfiles cannot be fetched automatically for licensing reasons. In this case, set FETCH_MANUALLY to a list of strings that will be displayed, one per line, e.g.,

        FETCH_MANUALLY= "You must fetch foo-1.0.tgz"
        FETCH_MANUALLY+="from manually,"
        FETCH_MANUALLY+="after reading and agreeing to the license."

    Automatically sets IS_INTERACTIVE if some distribution files are missing.

FILESDIR
    Location of other files related to the current port.
    (default: files.${ARCH} or files).

FLAVOUR
    The port's current options. Set by the user, and tested by the port to activate wanted functionalities.

FLAVOURS
    List of all flavour keywords a port may match. Used to sort FLAVOUR into a canonical order to build the package name, or to select the packing-list, and as a quick validity check. See also PSEUDO_FLAVOURS.

FLAVOUR_EXT
    Canonical list of flavours being set for the current build, dash-separated. See FULLPKGNAME.

FULLPKGNAME
    Full name of the main created package, taking flavours into account. Defaults to ${PKGNAME}${FLAVOUR_EXT}.

FULLPKGNAME-foo
    Full package name for sub-package foo, if the default value is not appropriate.

FULLPKGPATH
    Path to the current port's directory, relative to ${PORTSDIR}, including flavours and subpackages. Read-only.

GMAKE
    Location of the gnu make binary, if needed. Defaults to gmake.

HOMEPAGE
    Set to a link to the homepage of the software, if applicable.

IGNOREFILES
    Set to the list of files that cannot be checksummed. For use by ports which fetch dynamically generated archives that can't be checksummed. Avoid using IGNOREFILES whenever possible.

LIB_DEPENDS
    Libraries this port depends upon. These are also always build dependencies, and added to BUILD_DEPENDS automatically, if required. Each item has the form 'lib_specs:[pkgspec]:directory[,-subpackage][,flavour ...][:target]'. Similar to BUILD_DEPENDS, except for 'lib_specs', which is a comma-separated list of 'lib_spec' of the form: 'libname.[version.[subversion]]'. See library-specs(7) for more details. On architectures that use dynamic libraries, LIB_DEPENDS is also used as a run-time dependency, and recorded in the package as such.

FULLDISTDIR
    Complete path to the directory where ${DISTFILES} and ${PATCHFILES} will be located, to be used in hand-crafted extraction targets (read-only).

IGNORE
    The port is ignored and ${IGNORE} is printed if defined. Usually set to the reason the port is ignored.
    See also BATCH, BROKEN, IGNORE_SILENT, INTERACTIVE, IS_INTERACTIVE, NOT_FOR_ARCHS, NO_IGNORE, ONLY_FOR_ARCHS, USE_X11.

IGNORE_SILENT
    If set to 'Yes', do not print anything when ignoring a port.

INSTALL_{PROGRAM,SCRIPT,DATA,MAN}[_DIR]
    Macros to use to install a program, a script, data, or a man page (or the corresponding directory), respectively.

INSTALL_TARGET
    Target invoked to install the software, during fake installation. Default is 'install'.

IS_INTERACTIVE
    Set to 'Yes' if port needs human interaction to build. Usually implies NO_PACKAGE as well. Porters should strive to minimize IS_INTERACTIVE ports, by using FLAVOURS for multiple choice ports, and by postponing human intervention to package installation time.

LIBTOOL
    Location of the libtool binary for ports that set USE_LIBTOOL (default: ${LOCALBASE}/bin/libtool).

LIBTOOL_FLAGS
    Arguments to pass to libtool. If USE_LIBTOOL is set, the environment variable LIBTOOL is set to ${LIBTOOL} ${LIBTOOL_FLAGS}.

LOCALBASE
    Where other ports have already been installed (default: /usr/local).

MAINTAINER
    E-mail address with full name of the port's maintainer for OpenBSD. For political reasons, this should NEVER be set in MirPorts. When adding ports from OpenBSD, be sure to delete this line. The MirPorts equivalent is called RESPONSIBLE.

MAKE_ENV
    Environment variables passed to make invocations. Sets at least PATH, PREFIX, LOCALBASE, X11BASE, CFLAGS, TRUEPREFIX, DESTDIR, and the BSD_INSTALL_* macros.

MAKE_FLAGS
    Flags used for all make invocations, except for the fake stage, which uses FAKE_FLAGS, and for the regress stage, which uses REGRESS_FLAGS.

MAKE_FILE
    Name of the Makefile used for ports building. Defaults to Makefile. Used after changing directory to ${WRKBUILD}.

MAKE_PROGRAM
    The make program that is used for building the port. Set to ${MAKE} or ${GMAKE} depending on USE_GMAKE. Read-only.

MASTER_SITES
    List of primary locations from which distribution files and patchfiles are retrieved.
    See the fetch target for more details. See ports(7) for user configuration.

MASTER_SITES0, ..., MASTER_SITES9
    Supplementary locations from which distribution files and patchfiles are retrieved.

MCZ_FETCH
    For _CVS_DISTF operation, i.e. if one of CVS_DISTREPO or SVN_DISTPATH is set, control the way these VCS distfiles are fetched. This variable affects all distfiles, i.e. _CVS_DISTF, _CVS_DISTF0, ..., _CVS_DISTF9. Possible values are:

    No
        Directly fetch from the VCS and compress using mpczar(1). This is the default.
    Yes
        Fetch from the appropriate MASTER_SITES, which is initialised to default to ${_MASTER_SITE_MIRBSD}.
    LZMA
        Append the suffix ".lzma" to the _CVS_DISTF, and fetch it from the appropriate MASTER_SITES, which is initialised to default to ${_MASTER_SITE_MIRBSD}. Note that the distfile has to be recompressed manually by the master site administrator, and due to use of a different compressor than the mpczar.z helper, it is not guaranteed to be reproducible.
    xz
        Append the suffix ".xz" to the _CVS_DISTF, and fetch it from the appropriate MASTER_SITES, which is initialised to default to ${_MASTER_SITE_MIRBSD}. Note that the distfile has to be recompressed manually by the master site administrator, and due to use of a different compressor than the mpczar.z helper, it is not guaranteed to be reproducible.

MESSAGE
    File recorded in the package and displayed during installation. Defaults to ${PKGDIR}/MESSAGE if this file exists. Leave empty if no message is needed.

MODGNU_RECURSE_DIRS
    If a port uses more than one configure script, set this to all directories that contain one. Copies config.guess to the right places; if CONFIGURE_STYLE is autoconf, autoconf is also called in the given directories, and so on for autogen, autoheader, MirLibtool conversion, etc.
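A CVS-mode distfile, as described under CVS_DIST* above, could be declared like this (repository, module, and tag are invented; see cvs-distfiles(7) for the authoritative syntax):

```makefile
# Check out module 'foo' from an anonymous CVS repository at a fixed tag.
CVS_DISTREPO=   :pserver:anoncvs@cvs.example.org:/cvs
CVS_DISTMODS=   foo
CVS_DISTTAGS=   FOO_1_0
```

With MCZ_FETCH left at its default of No, the checkout is done directly from the VCS and compressed with mpczar(1).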
MODGNU_TYPESCAN
    Set this to the names of functions where the configure script scans for types of its arguments, to prevent bogus warnings such as:

        conftest.c:141: error: conflicting types for 'recv'
        /usr/include/sys/socket.h:451: error: previous declaration of 'recv' was here

    These warnings are grepped from the config.log file in case someone forgot to run our patched version of ports/devel/autoconf/2.{13,61} on the script, which prevents mis-not-detection of functions.

MODSIMPLE_USE_INSTALL
    Set to the install(1) programme to use to copy files for installation. Default: ${INSTALL} -c -o ${BINOWN} -g ${BINGRP}

MOTIFLIB
    Read-only. Correct incantation to link with motif.

MTREE_FILE
    mtree(8) specification to check when creating a PLIST with the update-plist target. MTREE_FILE can hold a list of file names, to which ${PORTSDIR}/infrastructure/db/fake.mtree is always appended. These specifications are rooted at ${WRKINST}, and are subject to SUBST_VARS substitution, to ease ${PREFIX} independence. This feature is primarily intended for large, interconnected ports, such as the kde suite, where a base package sets up a large, extra directory hierarchy that would make the manual checking of packing lists tedious.

MULTI_PACKAGES
    Set to a list of package extensions for ports that create multiple packages. See "Flavours and multi-packages" below.

NOT_FOR_ARCHS
    List of architectures on which this port does not build. See also ONLY_FOR_ARCHS.

NO_BUILD
    Port does not need any build stage.

NO_REGRESS
    Port does not have any regression tests.

NO_SHARED_ARCHS
    Set to the list of platforms that do not support shared libraries. Use with NOT_FOR_ARCHS.

NO_SHARED_LIBS
    Set to 'Yes' if the platform does not support shared libraries. To be tested after including bsd.port.mk, if neither PFRAG.shared nor CONFIGURE_SHARED are enough.

NO_SYSTRACE
    Port does not build with systrace enabled build targets.

ONLY_FOR_ARCHS
    List of architectures on which this port builds.
    Can hold both processor-specific information (e.g., m68k), and more specific model information (e.g., hp300).

OPSYS
    Always OpenBSD (read-only).

OPSYS_VER
    Revision number of OpenBSD (read-only).

PACKAGING
    Defined while building packages, read-only. See the description of FLAVOURS AND MULTI_PACKAGES for a detailed explanation.

PATCH
    Command to use to apply all patches. Defaults to /usr/bin/patch.

PATCHORIG
    Suffix used by patch to rename original files, and by update-patches to re-generate ${PATCHDIR}/${PATCH_LIST} by looking for files using this suffix. Defaults to .orig. For a port that already contains .orig files in the ${DISTFILES}, set this to something else, such as .pat.orig. See also distpatch, DISTORIG.

PATCH_CASES
    In the normal distpatch stage (when PATCHFILES is not empty), this is the contents of a case statement, used to apply distribution patches. Fragments are automatically appended to handle compressed patches, so that the default case is equivalent to the following shell fragment:

        set -e
        cd ${FULLDISTDIR}
        for patchfile in ${_PATCHFILES}
        do
            case $$patchfile in
            *.bz2)
                bzip2 -dc $$patchfile | ${PATCH} ${PATCH_DIST_ARGS};;
            *.Z|*.gz)
                gzcat $$patchfile | ${PATCH} ${PATCH_DIST_ARGS};;
            *)
                ${PATCH} ${PATCH_DIST_ARGS} <$$patchfile;;
            esac
        done

PATCHDIR
    Location for patches applied by the patch target (default: patches.${ARCH} or patches).

PATCHFILES
    Files to fetch from the master sites like DISTFILES, but serving a different purpose, as they hold distribution patches that will be applied at the patch stage. See also SUPDISTFILES.

PATCH_ARGS
    Full list of options used while applying the port's patches.

PATCH_CHECK_ONLY
    Set to Yes by the checkpatch target. Don't touch unless the default checkpatch target needs to be redefined. Ideally, user-defined patch subtargets ought to test checkpatch. In practice, they don't.

PATCH_DEBUG
    If set to 'Yes', the patch stage will output extra debug information.
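Distribution patches tie several of these variables together; a hypothetical port that fetches an upstream patch (file name invented) might set:

```makefile
# An upstream patch fetched alongside the distfiles and applied at the
# distpatch stage with one leading directory level stripped.
PATCHFILES=         foo-1.0-to-1.0.1.diff.gz
PATCH_DIST_STRIP=   -p1
```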
PATCH_DIST_ARGS
    Full list of options used while applying distribution patches.

PATCH_DIST_STRIP
    Patch option used to strip directory levels while applying distribution patches. Defaults to -p0.

PATCH_LIST
    Wildcard pattern of patches to select under ${PATCHDIR} (default: patch-*). Note that filenames ending in .orig or ~ are never applied. Note that PATCH_LIST can hold absolute pathnames, for instance to share patches among similar ports:

        PATCH_LIST=${PORTSDIR}/x11/kde/libs2/patches/p-* patch-*

PATCH_STRIP
    Patch option used to strip directory levels while applying the port's patches. Defaults to -p0.

PKG_ARCH
    Comma-separated list of architectures on which this package may install. Defaults to ${MACHINE_ARCH},${ARCH}. Use * to mean any arch.

PKG_SUFX
    Extension and format to choose for packages. Possible values are:

    .cgz
        sv4cpio with CRC, compressed with gzip(1). Default on all systems except OpenBSD.
    .clz
        sv4cpio with CRC, compressed with LZMA-Alone. Requires lzmadec to be installed to handle, lzma and ~320M datasize ulimit to create, no automatic dependency.
    .cxz
        sv4cpio with CRC, compressed with LZMA2 (xz). Requires xzdec to be installed to handle, xz and 800 MiB datasize ulimit to create (although, if less is available, xz will degrade compression quality automatically), no automatic dependency.
    .cpio
        sv4cpio with CRC, uncompressed.
    .tgz
        ustar, compressed with gzip(1). Default on OpenBSD.
    .tar
        ustar, uncompressed.
    .tar.gz
        ustar, compressed with gzip(1). Legacy use only, will go away.

PORTHOME
    Setting of the HOME environment variable for most shell invocations. The default will trip ports that try to write into $HOME while building.

PORTPATH
    Path used by most shell invocations. Don't override unless really needed.

PORTSDIR
    Root of the ports tree (default: /usr/ports).

PORTSDIR_PATH
    Path used by dependencies and bsd.port.subdir.mk to look up package specifications. Defaults to ${PORTSDIR}:${PORTSDIR}/Mystuff.
PKGDIR
    Location for packaging information (packing list, port description, port short description). Default: pkg.${ARCH} or pkg.

PKGNAME
    Name of the main created package. Default is ${DISTNAME}-${DASH_VER} for the main package, and ${DISTNAME}-${DASH_VER} for multi-package ports. This does not take flavours into account. See FULLPKGNAME for that. Defaults to ${_CVS_DISTF:R}-${DASH_VER} for CVS distfiles and ${DIST_NAME}-${DIST_DATE}-${DASH_VER} for non-distfile ports.

PKGNAME-foo
    Package name for sub-package foo, if the default value of ${PKGNAME}${SUBPACKAGE} is not appropriate.

PKGPATH
    Path to the current port's directory, relative to ${PORTSDIR}. Read-only.

PREFIX
    Base directory for the current port installation. Usually ${LOCALBASE}, though some ports may elect a location under /var, and some multi-package ports may install under several locations.

PSEUDO_FLAVOURS
    Extra list of flavours that do not register in package names, but are still used to control build logic and, e.g., working directory names. Its main use is for disabling part of a multi-packages build, for instance:

        FLAVOUR=no_gnome mmake package

    Creation of a separate working directory is mandatory. If, at a later time, a full build with all subpackages is required, all the work will need to be done again.

REGRESS_DEPENDS
    See BUILD_DEPENDS for specification. Regress dependencies are only checked if the regress stage is invoked.

REGRESS_FLAGS
    Flags to pass to ${MAKE_PROGRAM} to run the regression tests. Defaults to ${MAKE_FLAGS}.

REGRESS_IS_INTERACTIVE
    Set to 'Yes' if the port needs human interaction to run its tests.

REGRESS_TARGET
    Target to run regression tests. Defaults to 'regress', except for 'perl' and 'gnu' CONFIGURE_STYLE, which default to 'test' and 'check' respectively.

RESPONSIBLE
    The name and e-mail address of the person maintaining the port for MirPorts. The default is the miros-discuss list.

RUN_DEPENDS
    Specification of ports this port needs installed to be functional.
    Same format as BUILD_DEPENDS. The corresponding packages will be built at install stage, and pkg_add(1) will take care of installing them.

SED_PLIST
    Pipeline of commands used to create the actual packing list from the PLIST template (usually ${PKGDIR}/PLIST). bsd.port.mk appends to it substitution commands corresponding to the port's FLAVOUR and variables from SUBST_VARS. ${SED_PLIST} is invoked as a pipeline after inserting PFRAG.shared fragments.

SCRIPTDIR
    Location for scripts related to the current port (default: scripts.${ARCH} or scripts).

SVN_DIST*
    Enables an operation mode where the distfile is fetched from Subversion. The following variables can be set by the user:

    SVN_DISTDIR
        Name of the top-level directory to check out to. Optional (derived from the basename of SVN_DISTPATH). If this is only a dash ('-'), a single-file checkout is done.
    SVN_DISTFILE
        The file name prefix. Optional (derived from the basename of SVN_DISTDIR).
    SVN_DISTPATH
        Repository to check out from (full path).
    SVN_DISTREV
        Revision to check out.

    For the full documentation for these variables, see cvs-distfiles(7). WRKDIST is set automatically to ${WRKDIR}/${SVN_DISTDIR}. PKGNAME defaults to ${SVN_DISTFILE}-${SVN_DISTREV}-${DASH_VER}.

SUBPACKAGE
    Set to the sub package suffix when building a package in a multi-package port. Read-only. Used to test for dependencies or to adjust the package name.

SUBST_VARS
    Make variables whose values get substituted to create the actual package information. Always holds ARCH, FLAVOUR_EXT, HOMEPAGE, MACHINE_ARCH, MAINTAINER, PREFIX, and SYSCONFDIR. The special construct '${FLAVOURS}' can be used in the packing-list to specify the current list of dash separated flavours the port is compiled with (useful for cross-dependencies in MULTI_PACKAGES). Add other variables as needed.

SUPDISTFILES
    Supplementary files that need to be retrieved under some specific circumstances. For instance, a port might need architecture-specific files.
SUPDISTFILES should hold a list of all distribution files and patchfiles that are not always needed, so that a mirror will be able to grab all files, or that makesum will work. Having an overlap between SUPDISTFILES and DISTFILES, PATCHFILES is admissible, and in fact, expected, as it is much simpler to build an error-free list of files to retrieve in that way. See the xanim port for an example. SYSCONFDIR Location for ports system configuration files. Defaults to /etc, should never be set to /usr/local/etc. SYSTRACE_FILTER Location of the systrace filter file which is the basis for a port's actual systrace policy file. Defaults to ${PORTSDIR}/infrastructure/db/systrace.filter. SYSTRACE_SUBST_VARS List of variables used in ${SYSTRACE_FILTER} that will be substituted by their real value when creating the systrace policy file. Always holds WRKOBJDIR, PORTSDIR, and DISTDIR. TAR Name of the tar binary. UNZIP Name of the unzip binary. USE_CXX Set to 'Yes' if the port needs a C++ compiler to build. On systems without a C++ compiler, this shows a fatal error. USE_GMAKE Set to 'Yes' if gnu make (${GMAKE}) is needed for correct behavior of this port. USE_LIBTOOL Set to 'Yes' if libtool is required for correct behavior of this port. Adds correct dependencies, and passes LIBTOOL environment variable to scripts invocations. USE_MOTIF Set to 'any' if port works with any version of motif; 'lesstif' if port requires lesstif to work; 'openmotif' if ports requires openmotif to work. The 'any' setting creates an extra flavour choice of 'lesstif'. See also MOTIFLIB USE_SYSTRACE Set to 'Yes' to protect port building with systrace. Set by the user, e.g. in /etc/mmake.cfg. USE_X11 Set to 'Yes' if port requires X11 to work. VMEM_WARNING Set to 'Yes' if the port requires a lot of memory to com- pile, and the user is likely to see a message like "virtual memory exhausted" with the default process limits. WRKBUILD Subdirectory of ${WRKDIR} where the actual build occurs. 
Defaults to ${WRKSRC}. WRKDIR Location where all port activity occurs. Apart from the ac- tual port, may hold all kinds of cookies that checkpoint the port's build. Read-only. Ports that need to know the WRKDIR of another port must use cd that_port_dir && mmake show=WRKDIR for this. Note that WRKDIR may be a symbolic link. WRKDIST Subdirectory of ${WRKDIR} in which the distribution files normally unpacks. Base for all patches (default: ${WRKDIR}/${DISTNAME}). Note that WRKDIST may be a symbolic link, if set to ${WRKDIR}. WRKSRC Subdirectory of ${WRKDIR} where the actual source is. Base for configuration (default: ${WRKDIST}) Note that WRKSRC may be a symbolic link, if set to ${WRKDIR}. WRKPKG Subdirectory of ${WRKBUILD} where package information gets generated. Defaults to ${WKRBUILD}/pkg, do not override un- less 'pkg' conflicts with the port's conventions. WRKINST Subdirectory of ${WRKDIR} where port normally installs (see fake target). WRKOBJDIR If non empty, used as a base for the actual port working directory. The real working directory ${WRKDIR} is created there. Can be set on a per-${PKGPATH} basis. For instance, setting WRKOBJDIR_www/mozilla=/tmp/obj will affect only the mozilla port. X11BASE Where X11 has been installed (default: /usr/X11R6). XMKMF Invocation of xmkmf for CONFIGURE_STYLE=imake port. De- faults to xmkmf -a -DPorts. The -DPorts is specific to OpenBSD and is always appended. YACC Name of yacc program to pass to gnu-configure, defaults to yacc. (gnu-configure would always try to use bison other- wise, which leads to unreproducible builds.) Set to bison if needed.
../Makefile.inc Common Makefile fragment for a set of ports, included automat- ically. /cdrom/distfiles Default path to a CD-ROM (or other media) full of distribution files. Makefile.${ARCH} Arch-dependent Makefile fragment, included automatically. ${DISTDIR} cache of all distribution files. distinfo Checksum file. Holds the output of cksum(1) with the algo- rithms rmd160, tiger, sha1, sha256 and md5 for the ports ${DISTFILES} and ${PATCHFILES}. ${FULLDISTDIR}/${ALLFILES} cache of distribution files for a given port. ${PKGDIR}/DESCR Description for the port. Variables such as ${HOMEPAGE} and ${MAINTAINER} will be expanded (see SUBST_VARS). Multi-package ports will use DESCR${SUBPACKAGE}. ${PORTSDIR}/infrastructure/db/fake.mtree Specification used for populating ${WRKINST} at the start of fake. Use pre-fake if this is incomplete. ${PORTSDIR}/Packages/CDROM Default setup of ${CDROM_PACKAGES}. ${PORTSDIR}/Packages/FTP Default setup of ${FTP_PACKAGES}. ${PORTSDIR}/Packages Default setup of ${PKGREPOSITORY}. ${PORTSDIR}/Bulk Default setup of ${BULK_COOKIES_DIR}. ${PORTSDIR}/Mystuff Extra directory used to store local ports before committing them. All depend targets will normally look there after the normal lookup fails. See PORTSDIR_PATH. systrace.filter List of additional port specific filters, included automati- cally.
cdrom-packages, ftp-packages Links are now created during the package target. depends-list Renamed into full-build-depends {pre,do}-extract Don't override. Set EXTRACT_ONLY to nothing and override post-extract instead. fetch-all, fetch-list, mirror-distfiles See mirroring-ports(7) for more efficient and flexible ways to build mirrors. obj Starting with OpenBSD 3.3, using WRKOBJDIR no longer creates a symlink between the current directory and a sub- directory of ${WRKOBJDIR}, so obj is no longer applicable. print-depends Use print-build-depends and print-run-depends instead. print-depends-list Renamed into print-build-depends print-package-depends Renamed into print-run-depends
CATn List of formatted manpages, per section. CATPREFIX Location for storing formatted manpages. Derived directly from PREFIX. COMMENT Used to be the name of the comment file for a package. It now holds the comment itself. Some magic has been put in to allow for a seamless transition. DESCR_SRC From NetBSD. This is DESCR. OpenBSD does not give a specif- ic name to the generated file. It is not recommended to try to access them directly. EXTRACT_AFTER_ARGS Was used to cobble together the normal extraction command, as ${EXTRACT_CMD} ${EXTRACT_BEFORE_ARGS} ${EXTRACT_AFTER_ARGS}. Use EXTRACT_CASES instead. EXTRACT_BEFORE_ARGS Likewise, use EXTRACT_CASES instead. EXTRACT_CMD Likewise, use EXTRACT_CASES instead. FETCH_BEFORE_ARGS, FETCH_AFTER_ARGS Set FETCH_CMD to point to a script that does any required special treatment instead. FETCH_DEPENDS Used to specify dependencies that were needed to fetch files. It is much easier to mirror locally weird distribu- tion files. GNU_CONFIGURE Use CONFIGURE_STYLE instead. HAS_CONFIGURE Use CONFIGURE_STYLE instead. HAVE_MOTIF Old user settings. No longer needed since OpenMotif is now free. MANn List of unformatted manpages, per section. MANPREFIX Location for storing unformatted manpages. Derived directly from PREFIX. MASTERDIR From FreeBSD. Used to organize a collection of ports that share most files. OpenBSD uses a single port with flavours or multi-packages to produce package variations instead. MASTER_SITE_SUBDIR Contents were used to replace '%SUBDIR%' in all MASTER_SITES variables. Since '%SUBDIR%' almost always oc- cur at the end of the directory, the simpler ${VARIABLE:=subdir/} construct is now used instead (taken from NetBSD). MD5_FILE Use CHECKSUM_FILE instead. MIRROR_DISTFILE Use PERMIT_DISTFILES_FTP and PERMIT_DISTFILES_CDROM to determine which files can be mirrored instead. See mirroring-ports(7). NEED_VERSION Used to set a requirement on a specific revision of bsd.port.mk needed by a port. 
No longer needed as bsd.port.mk should always be kept up-to-date. NO_CONFIGURE If ${CONFIGURE_SCRIPT} does not exist, no automatic confi- guration will be done anyway. NO_DESCRIBE All ports should generate a description. NO_EXTRACT Set EXTRACT_ONLY= instead. NO_INSTALL_MANPAGES Use CONFIGURE_STYLE instead. NO_MTREE Starting with OpenBSD 2.7, the operating system installa- tion script runs the /usr/local specification globally, in- stead of embedding it in each package. So packages no longer record an mtree(8) specification. Use an explicit '@exec' command if needed. NO_PACKAGE All ports should generate a package, preferably before in- stall. NO_PATCH The absence of a patches directory does the same. Use PATCHDIR and PATCH_LIST if patches need to be changed dynamically. NO_WRKDIR All ports should have a working directory, as this is necessary to store cookies and keep state. NO_WRKSUBDIR The same functionality is obtained by setting WRKDIST=${WRKDIR} . NOCLEANDEPENDS Use CLEANDEPENDS instead. NOMANCOMPRESS FreeBSD ships with compressed man pages, and uses this variable to control that behavior. OBJMACHINE Starting with OpenBSD 3.3, setting WRKOBJDIR creates the whole WRKDIR hierarchy under ${WRKOBJDIR}, so OBJMACHINE is no longer useful. PACKAGES Base location for packages built, renamed PKGREPOSITORYBASE. PATCH_SITES PATCHFILES used to be retrieved from a separate site list. For greater flexibility, all files are now retrieved from MASTER_SITES, MASTER_SITES0, ..., MASTER_SITES9, using a ':0' to ':9' extension to the file name, e.g., PATCH_FILES=foo.diff.gz PATCH_SITES= becomes PATCH_FILES=foo.diff.gz:0 MASTER_SITES0= PERMIT_{DISTFILES,PACKAGE}_{CDROM,FTP} Set to 'Yes' if package or distribution files can be al- lowed on ftp sites or cdrom without legal issues. Set to reason not to otherwise. PERMIT_* lines in the Makefile should be preceded with a comment explaining details about licensing and patents issues the port may have. 
Porters must be very thorough in their checks. In case of doubt, ask. PLIST_SRC From NetBSD. This is PLIST. OpenBSD does not give a specif- ic name to the generated file. It is not recommended to try to access them directly. PKGNAME Used to refer to the full package name, has been superseded by FULLPKGNAME-foo, for SUBPACKAGE -foo . PKGNAME now holds the package name, not taking multi-packages or flavours into account. Most ports are not concerned by this change. PLIST_SUBST From NetBSD and FreeBSD. Use SUBST_VARS instead. OpenBSD does not allow general substitutions of the form VAR=value, but uses only a list of variables instead. Most package files gets transformed, instead of only the packing list. RESTRICTED Port has cryptographic issues. OpenBSD focuses on PERMIT_PACKAGE_{FTP,CDROM} instead. SEPARATE_BUILD On OpenBSD this could be used to have different WRKBUILD and WRKSRC, in the hope the port didn't write to its source directory. While this way to build is recommended by the GCC SC, it has proven to be the source of much trouble, so it got removed in the MirPorts Framework. USE_AUTOCONF Use CONFIGURE_STYLE instead. USE_BZIP2 The framework will automatically detect the presence of .tar.bz2 files to extract. USE_COMPILER Choose which compiler to use for the system. Possible values are: system Default: use the system compiler mgcc, pgcc, or cc, and pull _DEFCOPTS into the default CFLAGS. gcc4.4 Build-depend on the GCC 4.4 port, lib-depend on its RTL, and pull _DEFCOPTS into the default CFLAGS. pcc Override CC to pcc and CXX to false and pull _DEFCOPTS_pcc into the default CFLAGS. USE_IMAKE Use CONFIGURE_STYLE instead. USE_LZMA The framework will automatically detect the presence of .tar.lzma files to extract. USE_XZ The framework will automatically detect the presence of .tar.xz files to extract. USE_ZIP The framework will automatically detect the presence of .zip files to extract. VARNAME Use mmake show=name instead of mmake show VARNAME=name.
${FILESDIR}/md5 Renamed to distinfo to match other BSD, and save directories. ${SCRIPTDIR}/{pre,do,post}-* Identical functionality can be obtained through a {pre,do,post}-* target, invoking the script manually if neces- sary. ${PKGDIR}/PLIST.noshared Use PFRAG.shared or PFRAG.no-shared instead. PLIST.noshared was too easy to forget when updating ports. ${PKGDIR}/PLIST.sed Use PLIST directly. Until revision 1.295, bsd.port.mk did not substitute variables in the packing list unless this special form was used. /usr/share/mk/bsd.port.mk Original location of bsd.port.mk. The current file lives under ${PORTSDIR}/infrastructure/mk/bsd.port.mk, whereas /usr/share/mk/bsd.port.mk is just a stub. {scripts,files,patches}.${OPSYS} The OpenBSD ports tree focuses on robustness, not on being portable to other operating systems. In any case, portability should not need to depend on operating system dependent patches. /usr/local/etc Used by FreeBSD to marshall system configuration files. All OpenBSD system configuration files are located in /etc, or in a subdirectory of /etc.
The fake target is used to install the port in a private directory first, ready for packaging by the package target, so that the real installation will use the package. Essentially, fake invokes a real install process after tweaking a few variables.

fake first creates a skeleton tree under ${WRKINST}, using the mtree(8) specification ${PORTSDIR}/infrastructure/db/fake.mtree. A pre-fake target may be used to complete that skeleton tree. For instance, a few ports may need supplementary stuff to be present (as it would be installed if the ports' dependencies were present).

If {pre,do,post}-install overrides are present, they are used with some important changes: PREFIX is set to ${WRKINST}${PREFIX}, ${DESTDIRNAME} is set to ${WRKINST}, and TRUEPREFIX is set to ${PREFIX}. Essentially, old install targets work transparently, except for a need to change PREFIX to TRUEPREFIX for symbolic links and similar path lookups. Specific traditional post install work can be simply removed, as it will be taken care of by the package itself (for instance, ldconfig, or texinfo's install-info).

If no do-install override is present, the port is installed using

	env -i ${MAKE_ENV} PREFIX=${WRKINST}${PREFIX} ${DESTDIRNAME}=${WRKINST} TRUEPREFIX=${PREFIX} ${MAKE_PROGRAM} ${FAKE_FLAGS} -f ${MAKE_FILE} ${FAKE_TARGET}

Note that this does set both PREFIX and ${DESTDIRNAME}. If a port's Makefile both heeds ${DESTDIRNAME}, and references PREFIX explicitly, FAKE_FLAGS may rectify the problem by setting PREFIX=${PREFIX} (which will do the right thing, since ${PREFIX} is a make(1) construct which will not be seen by the shell). ${FAKE_FLAGS} is used to set variables on the make(1) command line, which will override the port Makefile contents. Thus, a port that mentions DESTDIR= does not need any patch to work with fake.
Starting with OpenBSD 2.7, each port can generate several packages through two orthogonal mechanisms: FLAVOURS and MULTI_PACKAGES.

If a port can be compiled with several options, set FLAVOURS to the list of possible options in the Makefile. When building the port, set FLAVOUR='option1 option2...' to build a specific flavour of the port. The Makefile should test the value of FLAVOUR as follows:

	FLAVOUR?=
	.if ${FLAVOUR:L:Moption1}
	# what to do if option1
	.endif
	.if ${FLAVOUR:L:Moption2}
	# what to do if option2
	.endif

bsd.port.mk takes care of a few details, such as generating a distinct work directory for each flavour, or adding a dash separated list of options to the package name. The order in which FLAVOUR is specified does not matter: the generated list, called the canonical package extension, matches the ordering of FLAVOURS. Also, it is an error to specify an option in FLAVOUR that does not appear in FLAVOURS. In recursive package building, flavours can be specified as a comma separated list after the package directory, e.g., SUBDIR+=vim,no_x11. Finally, packing information will use templates with the canonical package extension if they are available: if FLAVOUR='option1 option2' and both COMMENT and COMMENT-option1-option2 are available, COMMENT-option1-option2 will be used.

If a port can generate several useful packages, set MULTI_PACKAGES accordingly. Each extension of a MULTI_PACKAGES name should start with a dash, so that they cannot be confused with FLAVOURS. In dependency checking and recursive builds, a subpackage can be specified after a comma, e.g., SUBDIR+=quake,-server. MULTI_PACKAGES only affects the actual package building step (and the describe step, since a MULTI_PACKAGES port will produce several descriptions). If MULTI_PACKAGES is set, each element of MULTI_PACKAGES triggers a recursive mmake package, with SUBPACKAGE set to the right value, and PACKAGING defined.
For instance, if MULTI_PACKAGES=-lib -server, mmake package will work as follows:

	• Run env SUBPACKAGE= PACKAGING= mmake package,
	• Run env SUBPACKAGE=-lib PACKAGING=-lib mmake package,
	• Run env SUBPACKAGE=-server PACKAGING=-server mmake package,

The port's Makefile can test the value of SUBPACKAGE to specialize processing for all sub packages. Note that SUBPACKAGE can also be set for dependency checking, so be careful to also test PACKAGING: the build stage is shared among all subpackages, and tests often only make sense during the packaging stage. All packing information is derived from templates with SUBPACKAGE appended. In the preceding example, the packing-list template for pkgname-foo must be in PLIST-foo.
Starting after OpenBSD 2.7 (around revision 1.300 of bsd.port.mk), all packing information is generated from templates in ${PKGDIR}.

	• If not overridden by the user, determine which set of templates to use, depending on the current SUBPACKAGE and FLAVOUR information. Set ${PLIST}, ${DESCR}, ${COMMENT}, ${MESSAGE} accordingly.
	• Detect the existence of ${PKGDIR}/{REQ,INSTALL,DEINSTALL}${SUBPACKAGE}. Modify PKG_ARGS accordingly, to use the generated files, and add dependencies to regenerate the files if the templates change.
	• Generate the actual DESCR, and if needed, MESSAGE, REQ, INSTALL, DEINSTALL from the templates in ${DESCR}, ${MESSAGE}, ${PKGDIR}/REQ${SUBPACKAGE}, ${PKGDIR}/INSTALL${SUBPACKAGE}, ${PKGDIR}/DEINSTALL${SUBPACKAGE}, by substituting the variables in ${SUBST_VARS}, and by substituting ${FLAVOURS} with the canonical flavour extension for this port, e.g., if FLAVOURS=no_map gfx qt2 and FLAVOUR=gfx no_map, this is '-no_map-gfx'.
	• Generate the actual PLIST from the template ${PLIST}, by inserting shared/no-shared fragments, applying a possible user-supplied pipeline, merging other fragments, applying the same variable substitutions as other packing information, and finally handling dynamic libraries macros.

Note that ${COMMENT} is currently not substituted, to speed up describe generation. To avoid substitution, variables can be escaped as follows:

	$\{PREFIX}

Constructs such as the line %%SHARED%% or !%%SHARED%% in the packing-list template trigger the inclusion of the ${PKGDIR}/PFRAG.shared${SUBPACKAGE} or ${PKGDIR}/PFRAG.no-shared${SUBPACKAGE}. Similarly, if FLAVOURS lists flav1, then the line %%flav1%% (resp. !%%flav1%%) will trigger the inclusion of ${PKGDIR}/PFRAG.flav1${SUBPACKAGE} (resp. ${PKGDIR}/PFRAG.no-flav1${SUBPACKAGE}) in the packing-list. Fragments that cannot be handled by these simple rules can always be specified in a custom SED_PLIST.
The constructs DYNLIBDIR(directory) and NEWDYNLIBDIR(directory) should be used in ${PKGDIR}/PFRAG.shared${SUBPACKAGE} to register directories that hold dynamic libraries (see ldconfig(8)). NEWDYNLIBDIR is meant for directories that will go away when the package is deleted. If possible, it should not be used, because users also have to edit rc.conf to add the directory. It is usually better to also let libraries be visible as a link under ${LOCALBASE}. Having a separate directory is enough to trick ld(1) into grabbing the right version. Note that libraries used only for dlopen(3) do not need NEWDYNLIBDIR.

The special update-plist target does a fairly good job of automatically generating PLIST and PFRAG.shared fragments.

In MULTI_PACKAGES mode, there must be separate COMMENT, DESCR, and PLIST templates for each SUBPACKAGE (and optional distinct MESSAGE, REQ, INSTALL, DEINSTALL files in a similar way). This contrasts with the FLAVOURS situation, where all these files will automatically default to the non-flavour version if there is no flavour-specific file around.
ftp(1), make(1) or mmake(1), pkg_add(1), library-specs(7), packages-specs(7)
The ports mechanism originally came from FreeBSD. A lot of additions were taken from NetBSD over the years. When the file grew too large, it was cleaned up to restore some of its speed and remove a lot of bugs. FLAVOURS, MULTI_PACKAGES and FAKE are OpenBSD improvements. MirLibtool and the autogen configure style are MirPorts additions. MirOS BSD #10-current May 29, 2014. | http://www.mirbsd.org/htman/i386/man5/bsd.port.mk.htm | CC-MAIN-2014-52 | refinedweb | 9,570 | 60.11 |
I am setting up a new file server, FP02. It is Server 2012 Standard. I am configuring DFS namespaces for the new shares, so that they will use the \\domain\share UNC.
My old Server 2008 R2 file server, FP01 did not use DFS for file shares.
On FP01 I have a share called "Users". It holds the redirected My Documents folders for all the AD users.
I created a new DFS namespace on FP02 and pointed its local path at the shared folder on the local (new) server.
Next I chose All users have read-only permissions (default choice).
I started to copy the contents of the FP01 users folder to the FP02 users folder and about a minute later, I started getting access denied errors.
Next I started getting calls from end users saying that they could not access their redirected My Documents folders. It took me a minute to realize that this was the result of the "all users have read-only permissions" choice, but they should all still have been accessing the old server as before.
I reset the share permissions on \\FP01\Users to full control for Everyone, and that only fixed part of the problem. Users whose folders had been copied over before the access denied errors began could still access them from the original, still supposedly valid path of \\fp01\users. The other people, whose folders had not been copied, were not able to even see their folder on \\fp01\users. If I went to \\fp01\d$\data\users, all the folders were there. But if I went to \\fp01\users, only the ones that had been copied were there.
If you check the attached screen shot, it shows the properties from \\fp01\users\Carlos but points to the referral list of \\fp02\users and it is active.
What happened? Why is it even associating the FP01 server with the DFS namespace I created with a \\domain\share convention?
By the way, I have not setup any DFS replication.
20 Replies
Jul 1, 2013 at 6:31 UTC
One other thing I have noticed is that creating a folder in any of the following locations will be synced to the other two counterparts.
\\fp01\users
\\fp02\users
\\domain\users
Also, if I delete a folder from \\fp01\users\, the folder is also deleted from the other two locations. Now I also need to know how to keep these separate, because my goal is a complete migration so that I can remove the file share from FP01 and exclusively use \\domain\users, which is actually located at \\fp02\users.
Jul 1, 2013 at 6:37 UTC
A long shot, but is your domain also fp01? I've seen this happen when someone named their only server the same name as their domain: acme.local domain, server named acme. That caused ALL kinds of problems when they started adding additional servers.
Jul 1, 2013 at 6:39 UTC
Fortunately it is not FP01. Let's just say it is ABC.local.
Jul 1, 2013 at 7:07 UTC
A few additional questions:
Are FP01 & 02 DCs?
Is this AD integrated DFS? It looks like you might have created a stand-alone namespace...
Did you install DFS on FP01 too?
Did you create a separate new subfolder to host the namespace?
Can you post screenshots of your DFS Management pages?
Jul 1, 2013 at 7:23 UTC
FP01 is a DC.
FP02 is not.
I am pretty sure it is AD integrated, but I created it a month ago. It shows \\abc.local\users under the namespaces in the far left pane. How would I be able to tell?
DFS is not showing in administrative tools on FP01
On FP02 under the namespace tab, it has 0 entries
Jul 1, 2013 at 7:23 UTC
And here is the namespace servers tab
Jul 1, 2013 at 7:49 UTC
OK. It looks like you created individual namespaces for each target you wanted to share and used the existing share as the DFS root. This can work (maybe?) but I've always done it differently. It's been a while, but I believe best practice is to use a common namespace, and then create separate targeted shares underneath that.
So, I would recommend creating a new namespace with a unique share name, like DFS or Targets or Shares, whatever. It will see that there is no shared folder already and will create a new folder in your DFSroots folder.
Once the namespace is created, then you can add each individual folder you want shared/replicated.
You would end up with:
\\abc.local\dfs\users
\\abc.local\dfs\JacquesStor..
\\abc.local\dfs\CompanyPh..
instead of:
\\abc.local\users
\\abc.local\JacquesStor..
\\abc.local\CompanyPh..
Jul 1, 2013 at 8:28 UTC
Oh. OK. Interesting. Let me try that. Thanks for the help.
Jul 1, 2013 at 9:16 UTC
Well, I tried to create a new DFS namespace. This time I named it DFSData and selected D:\DFSData on the local server (FP02). It gave me an error "the namespace server cannot be added the specified path is invalid".
Then I tried a couple of different paths and still got the same error. I also tried adding the DFSData folder in advance and pointed to it. Still no dice. Not sure what is going on...
Jul 1, 2013 at 9:18 UTC
I just checked the shares on the server and all the ones I tried creating the namespace for are showing as shared folders with the corresponding path. Strange.
Jul 1, 2013 at 9:25 UTC
I stopped sharing for all the newish dfs shares and went into DFS management and the Users namespace was gone. So I created a new one and pointed it to the users folder D:\Data\Users and it completed the task successfully. Hmmm...
Jul 1, 2013 at 9:33 UTC
Stranger still - The changes made to the FP01 are still replicated to FP02 and vice versa.
Jul 1, 2013 at 9:47 UTC
I deleted the namespace for users and i can still access the \\domain\users share and the folders are still in sync; everything is mirrored on the other, both ways.
Jul 1, 2013 at 10:22 UTC
When I go to \\abc.local\ - I can see all my shares on FP01, including printers. I don't know where to go from here.
Jul 2, 2013 at 4:09 UTC
When I go to \\abc.local\ - I can see all my shares on FP01, including printers. I don't know where to go from here.
This is normal. When you browse the DFS root (\\abc.local), you will be connected to the first responding namespace member, and it will show all of its shares, whether they are part of DFS or not. Note that you will also see all of the original shares. When you actually want to connect to one of the DFS shares, you need to make sure you use the \\abc.local\dfs\share UNC path, not \\abc.local\share. You also want to make sure you aren't using the DFS root for anything but accessing DFS shares, since mapping a printer or any non-DFS share path will break if you don't connect to the same DFS namespace server the next time you connect.
As a side note, you often need to wait 15 to 30 mins (my experience) when making changes to DFS due to AD replication times. Once you delete the old DFS shares and namespace servers, wait a good bit of time, then set up the new ones.
Jul 2, 2013 at 4:12 UTC
Thanks. I guess the DC, despite not being a true namespace member, will publish all the shares at the domain root (\\abc.local).
Jul 2, 2013 at 4:43 UTC
Thanks. I guess the DC, despite not being a true namespace member, will publish all the shares at the domain root (\\abc.local).
Yeah, I believe all it takes for a DC to respond to the DFS root (\\abc.local) is for it to have DFS installed, whether it's part of a namespace or not. That's why you want to use \\abc.local\[namespace]\[share] instead of \\abc.local\[share]. Because if the shares exist on the DC, outside of DFS, it will share them.
Jul 2, 2013 at 4:49 UTC
I see. But in this case the DFS role is not installed on FP01 (the DC); it does have the services running because it uses it for the sysvol and netlogon, so I guess that is all it takes..
Jul 2, 2013 at 5:36 UTC.
It's empty. It's just a folder to hold virtual links.
Jul 9, 2013 at 4:41 UTC
So I still have some remnants of my DFS misconfiguration on FP01 (the domain controller), which doesn't have the DFS role installed.
I have done everything shy of removing the DFS role from the actual namespace server, FP02. I have deleted all the namespaces and added one to the C:\DFSRoot\DFS directory and then added a new folder that only affects me.
FP01, however, still has the share name \\FP01\Users, and its directory's properties contain the DFS tab that points it to \\FP02\Users.
I ran "dfsutil.exe /server:fp01 /view" from the command line on FP01 and it shows me
"Roots on machine fp01
\FP01.abc.local\Users
Done with Roots on machine fp01
Done processing this command."
I need a second opinion on what the plan of attack should be to remove this DFS root while being the least disruptive to the end users. My goal is to use \\abc.local\users, which links to \\fp02\users, and remove the FP01 share to free up space on that server.
A secondary issue is that browsing to \\abc.local\ via UNC takes several seconds and sometimes only shows the sysvol and netlogon shares, without showing any other shares. I have added the DfsDnsConfig registry key to FP02 and both my domain controllers and restarted the DFS Namespace service on all of them, but it still takes longer than just browsing to the servers directly.
Console private input (passwords)
Hi everyone,
I'm creating a console app that will take private information like user passwords as input. The problem is, the console itself seems to handle its own history and echo, and overall seems detached from the core application, so I don't know how to go about controlling this behavior: user passwords show up in the up/down-arrow history and are echoed in plain text to the screen.
What exactly is happening when we use:
CONFIG += console
I'm assuming it calls the platform-specific terminal emulator; the question is, how do I go about controlling it? What is the best approach to handling private input in a Qt console app?
@SGaist said in Console private input (passwords):
Hi,
See the getpass method from the GNU libc manual.
Thanks for the reply, but that seems to only be available on Linux. Even the alternative code on that same page requires termios.h, which is also only available on Linux. Sorry, I forgot to mention that I'm targeting both Windows and Linux, so a cross-platform solution is desired.
I'll give this code a shot:
I'll post again if this works out.
Yup, works like a charm. I didn't really make any huge changes to the code posted in that Stack Overflow Q/A, but here's how I have it currently, and it works perfectly:
void enableEcho(bool enable)
{
#ifdef Q_OS_WIN32
    HANDLE stdinHandle = GetStdHandle(STD_INPUT_HANDLE);
    DWORD mode;
    GetConsoleMode(stdinHandle, &mode);
    if (!enable) {
        mode &= ~ENABLE_ECHO_INPUT;
    } else {
        mode |= ENABLE_ECHO_INPUT;
    }
    SetConsoleMode(stdinHandle, mode);
#else
    struct termios tty;
    tcgetattr(STDIN_FILENO, &tty);
    if (!enable) {
        tty.c_lflag &= ~ECHO;
    } else {
        tty.c_lflag |= ECHO;
    }
    (void) tcsetattr(STDIN_FILENO, TCSANOW, &tty);
#endif
}
What is BeautifulSoup?
BeautifulSoup is a Python library for parsing HTML and XML documents.
What can it do?
On their website they write: "Beautiful Soup parses anything you give it, and does the tree traversal stuff for you. You can tell it to: 'Find all the links', 'Find all the links of class externalLink', 'Find all the links whose urls match "foo.com"', 'Find the table heading that's got bold text, then give me that text.'"
BeautifulSoup Example
In this example, we will try and find a link (a tag) in a webpage. Before we start, we have to import two modules. (BeutifulSoup and urllib2). Urlib2 is used to open the URL we want. We will use the soup.findAll method to search through the soup object to match fortext and html tags within the page.
from BeautifulSoup import BeautifulSoup import urllib2 url = urllib2.urlopen("") content = url.read() soup = BeautifulSoup(content) links = soup.findAll("a")
Output
That will print out all the elements in python.org with an "a" tag. (The "a" tag defines a hyperlink, which is used to link from one page to another.)
BeautifulSoup Example 2
To make it a bit more useful, we can specify the URL's we want to return.
from BeautifulSoup import BeautifulSoup import urllib2 import re url = urllib2.urlopen("") content = url.read() soup = BeautifulSoup(content) for a in soup.findAll('a',href=True): if re.findall('python', a['href']): print "Found the URL:", a['href']
Further Reading
I recommend that you head over to to read more about what you can do with this awesome module.
Recommended Python Training
For Python training, our top recommendation is DataCamp. | https://www.pythonforbeginners.com/beautifulsoup/python-beautifulsoup-basic | CC-MAIN-2021-31 | refinedweb | 271 | 68.87 |
How to load the user interface:
import ImageStore
ImageStore.UI()
Description:
Use this to store and read information within PNG images. It will keep the format of whatever you put in, so everything is pretty much comptaible. Files can also be stored, so it's easily possible to encode an mp3 file (or private image) into an image and keep it intact for later.
Features:
- Write any data (text, list, dictionary, file, etc) to an image
- Automatically detects best method to store the information in an image, if image is too small it'll switch to the default mode and create a new image
- Read data from any of these images
- Automatically upload image on imgur
- Read data from image URL
- Cache small amount of image data (Using cache for a 1080p image means up to 130,000,000 less calculations)
Known problems:
- Using backslashes in the filepath can convert the characters and change the path.
- A clear line is sometimes noticeable where the stored data finishes (don't worry about this, you'll know what I mean if you see it)
- The user interface progress bar doesn't exactly match what's going on, there's a delay.
Future improvements:
- Replicate older data so the end of data isn't so obvious
- Option to split data across multiple images
- Encrypt data so only people with the correct code can access the information
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section. | https://www.highend3d.com/maya/script/store-information-as-image-for-maya | CC-MAIN-2021-17 | refinedweb | 262 | 51.62 |
Back to Suggestions - program options library
program_option's exceptions should be grouped under one struct. Currently, we have, e.g.:
class validation_error : public error { ... };
Instead, it should be
class program_options_exception : public error { ... }; class validation_error : public program_options_exception { ... };
Then, for example, you could have:
try { ... } catch (exception& e) { if (dynamic_cast<program_options_exception*>(&e)) cout << "Usage: " << ...; }
to print a usage message only in the case of a program_options-generated exception.
- People/Chuck Messenger
Actually, "error" is "boost::progam_options::error" --- i.e. it's the class from which all program options exceptions are derived. Is that already what you propose?
- People/Vladimir Prus
Oh -- so you've got your own "error"? Yes, that satisfies what I wanted.
But, the symbol "error" doesn't seem wise. I had assumed (probably wrongly) that "error" was in the std namespace. (Maybe I was thinking of "exception"). Even so, the word "error" is much too common to be used in program_option's top level.
What happens when a user does "using namespace program_options;" (which is my typical way of using boost libs)?
You might counter that you can always specify program_options::error. But I feel the genericness of the name is problematical. What if a user doesn't qualify "error"? Symbols in a library's toplevel namespace should, I feel, be reasonably specific to the library's domain.
I suggest changing "error" to "program_options_error".
- People/Chuck Messenger
I somehow don't like boost::program_options::program_options_error. I'm -0 on this proposal -- whatever reviewers in general will decide, will do.
- People/Vladimir Prus | http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Unified_Exception_Class_-_Program_Options_Suggestion | CC-MAIN-2017-13 | refinedweb | 253 | 53.68 |
This extension is to allow users of the CListView and CGridView Zii components to take advantage of the Ajax features of them, whilst maintaining good practice with regards to browser navigation - enabling the browser back button and enabling the address bar url to accurately represent the current page state (which will then be replicated on re-opening the page).
Basically it uses JQuery and the BBQ JQuery plugin to maintain a hash history of specific Ajax actions.
For example, clicking on the ajax page 2 of a list on the page "" will append something like "#page=2" to the url, giving "", as well as loading the second page through ajax, as usual. And if you then copy this link and paste it somewhere else, you will see that your page will automatically navigate to the second page of the list.
Having just upgraded to Yii 1.1.9, I am aware that the core Yii gridview javascript has been completely revamped (this may be true for listview too), and as such, this extension will not yet work with version 1.1.9. I am working on an updated version (that will also have to completely break with past versions) and will upload it when it is ready.
Yii 1.1.7 (this started out using 1.1.5, but has been upgraded every time to take into account differences in the PHP classes and JS files). It shouldn't be too complicated to downgrade it if required (changes to the original Yii version are clearly marked), but I will not personally be doing this.
Simply change the class where you usually put your zii.widgets.CListView or zii.widgets.grid.CGridView widgets to ext.RRListView or ext.RRGridView (assuming that ext is defined as the alias for your extensions path and that you have extracted the class files directly into your extensions path) and set your parameters as usual (all the usual CListView/CGridView options should work as normal). e.g.
$this->widget('ext.RRGridView', array( 'dataProvider' => $dataProvider, 'columns' => array( 'id', array( 'class' => 'CButtonColumn', ), ), ));
This should work right out of the box, with hash history enabled by default.
There are obviously several options:
enableJavascript is set to true by default, and simply allows you to not include all the usual javascript files if you don't want to use any javascript features;
extensionsAssets is not set by default, but allows you to explicitly set the location of the extension's assets if needed (if you don't put them in the default ext.assets for example);
hashHistory is set to true by default and determines whether hash history will be enabled. If set to false, we fall back on to the default (Zii) ajax handling for CListView and CGridView;
updaters is an array of updaters that should trigger ajax changes (to be reflected in the hash history). Each item should consist of 'selector' => JQuery selector for the elements triggering the change, and 'getParams' => array('a', 'b', 'c'), an array of the GET parameters to be taken into account from the hrefs of the selected anchor elements (NB in the case of input elements this attribute should not be set - the params will be determined from the input names). The array is populated by default on initialisation with page and sort elements if pagination and sorting are enabled, and their selectors are generated automatically in the javascript if not overridden in these options.
Below is an example of all these options in (mutually exclusive) action:
$this->widget('ext.RRGridView', array( 'dataProvider' => $dataProvider, 'enableJavascript' => false, 'extensionAssets' => Yii::app()->getAssetManager()->publish(Yii::getPathOfAlias('ext.otherfolder.assets')), 'hashHistory' => false, 'updaters' => array( 'sort' => array( 'selector' => '#gridId thead th a', 'getParams' => array('mySortVar'), ), 'advancedSearch' => array( 'selector' => '#advanced-search form input, #advanced-search form select', ), ), 'columns' => array( 'id', array( 'class' => 'CButtonColumn', ), ), ));
I have tested these extensions slightly, but I'm afraid that I have not tested them extensively. I don't know whether they work in all circumstances (I have tried to code for all circumstances) and all browsers (I hate playing around in IE). However, if you do find bugs and let me know about them (preferably with relevant details like error messages from Firebug etc, or even pinpointing the location), then I will fix them if possible, and depending on my workload.
Certain things should be taken into account:
I originally made these extensions for me - therefore they reflect my view of how things should work, which may not be your view;
I have tried to make as few modifications as possible and couple as loosely as possible to the original Zii versions of these widgets, so that future changes in them should, hopefully, not impact this extension;
The javascript file used for implementing hash history in the browser overrides the standard jquery.yiilistview.js (and GridView) inclusion and therefore the $.fn.yiiListView namespace. Currently it includes the Zii version, backs up its attributes, overrides the initialisation method then restores the original object's methods plus some helpers, so this should only be a problem if the original Zii javascript initialisation method changes;
The logic for monitoring hash changes is kept in a separate javascript file (the ListViews simply register the parameters to be monitored and callback functions with this script) - this means that the actual hash monitoring logic may be substituted (the original file probably needs a good rewrite anyway), and also that you may register your own parameters to be monitored if you want to dig into the file;
my philosophy with regards to this hash history is not necessarily standard - I have taken the view that the hash parameters should reflect and integrate the query parameters. This means that when calculating the ajax request for the address '', I take into account the sort parameter in the querystring (unless it is overridden in the hash) and merge it with the page parameter in the hash, so send a request for the second page of the id sort. I therefore use the window.location url as the basis for my ajax request rather than the keys div generated by the Zii widget. This means two things - first of all, all widgets on the page must use the current page location as their route for ajax updates; and widgets cannot use GET parameters with the same name. This is something that could be overcome by ignoring querystring parameters in the current location, using the keys div location instead of the current location and prepending the widget id to each of its GET parameters. This is something that I might consider implementing as an option at a future date.
One thing that has been asked before, and I think is a good example for this extension, is how to include the default Gii advanced search (on the admin view) in the ajax updating. This is very simple with this extension - you just need to assign an id to the advanced search container, add this in the options of the widget as a container to be updated, and add a JQuery selector for it, like so:
$searchId = 'advanced-search'; echo CHtml::link('Advanced Search', '#', array('class' => 'search-button')); <div id="<?php echo $searchId; ?>" class="search-form" style="display: none"> <?php $this->renderPartial('_search',array( 'model' => $model, )); </div><!-- .search-form --> $this->widget('ext.RRGridView', array( 'dataProvider' => $model->search, 'filter' => $model, 'ajaxUpdate' => $searchId, 'updaters' => array( 'advancedSearch' => array( 'selector' => '#'.$searchId.' form input, #'.$searchId.' form select', ), ), 'columns' => array( 'id', array( 'class' => 'CButtonColumn', ), ), ));
I would also suggest that you remove the style="display: none" from the Html and add some javascript on page load to hide the search div (and also the submit button) which then allows for graceful non-javascript degradation, and also prevents the problem of the search form being reinserted as hidden with every ajax update.
The forum post that this extension started out as can be found here, and any questions or bug reports should be posted there.
Total 5 comments
@rokam
Thanks!
It is impossible to open the grid directly with the current hash tag (on page load I mean), since the hash part of a url is never sent to the server, so the server cannot know what part of the grid to load, that is why the default page is loaded, then ajax kicks in on the client side to load the appropriate grid depending on the hash tag.
I think this feature should be incorporated to next release.
I do only think that the grid should open with the current hashtag and not open with default and then load the hashtag.
Really great! :D
Please use the formum post for this. I only use pathFormat and am not aware of any problem withi it.
Hi, first thanks for this extension but i noticed that it doesn't work when the urlManager urlFormat property set to "path"
Yii 1.1.8, windows 7, wamp server
I'm using this for one project, and works great. Thanks for this extension
Please login to leave your comment. | http://www.yiiframework.com/extension/rrviews/ | CC-MAIN-2018-09 | refinedweb | 1,500 | 52.02 |
AR Face Tracking Tutorial for iOS: Getting Started
In this tutorial, you’ll learn how to use AR Face Tracking to track your face using a TrueDepth camera, overlay emoji on your tracked face, and manipulate the emoji based on facial expressions you make.
Version
- Swift 4.2, iOS 12, Xcode 10
Picture this. You have just eaten the most amazing Korean BBQ you’ve ever had and it’s time to take a selfie to commemorate the occasion. You whip out your iPhone, make your best duck-face and snap what you hope will be a selfie worthy of this meal. The pic comes out good — but it’s missing something. If only you could put an emoji over your eyes to really show how much you loved the BBQ. Too bad there isn’t an app that does something similar to this. An app that utilizes AR Face Tracking would be awesome.
Good news! You get to write an app that does that!
In this tutorial, you’ll learn how to:
- Use AR Face Tracking to track your face using a TrueDepth camera.
- Overlay emoji on your tracked face.
- Manipulate the emoji based on facial expressions you make.
Are you ready? Then pucker up those lips and fire up Xcode, because here you go!
Getting Started
For this tutorial, you’ll need an iPhone with a front-facing, TrueDepth camera. At the time of writing, this means an iPhone X, but who knows what the future may bring?
You may have already downloaded the materials for this tutorial using the Download Materials link at the top or bottom of this tutorial and noticed there is no starter project. That’s not a mistake. You’re going to be writing this app — Emoji Bling — from scratch!
Launch Xcode and create a new project based on the Single View App template and name it Emoji Bling.
The first thing you should do is to give the default ViewController a better name. Select ViewController.swift in the Project navigator on the left.
In the code that appears in the Standard editor, right-click on the name of the class, ViewController, and select Refactor ▸ Rename from the context menu that pops up.
Change the name of the class to EmojiBlingViewController and press Return or click the blue Rename button.
Since you’re already poking around EmojiBlingViewController.swift, go ahead and add the following import to the top:
import ARKit
You are, after all, making an augmented reality app, right?
Next, in Main.storyboard, with the top level View in the Emoji Bling View Controller selected, change the class to ARSCNView.
ARSCNView is a special view for displaying augmented reality experiences using SceneKit content. It can show the camera feed and display SCNNodes.
After changing the top level view to be an ARSCNView, you want to create an IBOutlet for the view in your EmojiBlingViewController class.
To do this, bring up the Assistant editor by clicking on the button with the interlocking rings.
This should automatically bring up the contents of EmojiBlingViewController.swift in the Assistant editor. If not, you can Option-click on it in the Project navigator to display it there.
Now, Control-drag from the ARSCNView in the storyboard to just below the EmojiBlingViewController class definition in EmojiBlingViewController.swift and name the outlet sceneView.
Before you can build and run, a little bit of code is needed to display the camera feed and start tracking your face.
In EmojiBlingViewController.swift, add the following functions to the EmojiBlingViewController class:
override func viewWillAppear(_ animated: Bool) {
  super.viewWillAppear(animated)

  // 1
  let configuration = ARFaceTrackingConfiguration()

  // 2
  sceneView.session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
  super.viewWillDisappear(animated)

  // 1
  sceneView.session.pause()
}
Right before the view appears, you:
- Create a configuration to track a face.
- Run the face tracking configuration using the built-in ARSession property of your ARSCNView.
Before the view disappears, you make sure to:
- Pause the AR session.
There is a teensy, tiny problem with this code so far. ARFaceTrackingConfiguration is only available for phones with a front-facing TrueDepth camera. You need to make sure you check for this before doing anything.
In the same file, add the following to the end of the viewDidLoad() function, which should already be present:
guard ARFaceTrackingConfiguration.isSupported else {
  fatalError("Face tracking is not supported on this device")
}
With this in place, you check to make sure that the device supports face tracking (i.e., has a front-facing TrueDepth camera), otherwise stop. This is not a graceful way to handle this, but as this app only does face tracking, anything else would be pointless!
Before you run your app, you also need to specify a reason for needing permission to use the camera in the Info.plist.
Select Info.plist in the Project navigator and add an entry with a key of Privacy - Camera Usage Description. It should default to type String. For the value, type EmojiBling needs access to your camera in order to track your face.
FINALLY. It’s time to build and run this puppy… er… app… appuppy?
When you do so, you should see your beautiful, smiling face staring right back at you.
OK, enough duck-facing around. You’ve got more work to do!
Face Anchors and Geometries
You’ve already seen ARFaceTrackingConfiguration, which is used to configure the device to track your face using the TrueDepth camera. Cool.
But what else do you need to know about face tracking?
Three very important classes you’ll soon make use of are ARFaceAnchor, ARFaceGeometry and ARSCNFaceGeometry.
ARFaceAnchor inherits from ARAnchor. If you’ve done anything with ARKit before, you know that ARAnchors are what make it so powerful and simple. They are positions in the real world tracked by ARKit, which do not move when you move your phone.
ARFaceAnchors additionally include information about a face, such as topology and expression.
ARFaceGeometry is pretty much what it sounds like. It’s a 3D description of a face including vertices and textureCoordinates.
ARSCNFaceGeometry uses the data from an ARFaceGeometry to create a SCNGeometry, which can be used to create SceneKit nodes — basically, what you see on the screen.
OK, enough of that. Time to use some of these classes. Back to coding!
Adding a Mesh Mask
On the surface, it looks like you’ve only turned on the front-facing camera. However, what you don’t see is that your iPhone is already tracking your face. Creepy, little iPhone.
Wouldn’t it be nice to see what the iPhone is tracking? What a coincidence, because that’s exactly what you’re going to do next!
Add the following code after the closing brace for the EmojiBlingViewController class definition:
// 1
extension EmojiBlingViewController: ARSCNViewDelegate {
  // 2
  func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // 3
    guard let device = sceneView.device else {
      return nil
    }

    // 4
    let faceGeometry = ARSCNFaceGeometry(device: device)

    // 5
    let node = SCNNode(geometry: faceGeometry)

    // 6
    node.geometry?.firstMaterial?.fillMode = .lines

    // 7
    return node
  }
}
In this code you:
- Declare that EmojiBlingViewController implements the ARSCNViewDelegate protocol.
- Define the renderer(_:nodeFor:) method from the protocol.
- Ensure the Metal device used for rendering is not nil.
- Create a face geometry to be rendered by the Metal device.
- Create a SceneKit node based on the face geometry.
- Set the fill mode for the node’s material to be just lines.
- Return the node.
Note: ARSCNFaceGeometry is only available in SceneKit views rendered using Metal, which is why you needed to pass in the Metal device during its initialization. Also, this code will only compile if you’re targeting real hardware; it will not compile if you target a simulator.
Before you can run this, you need to set this class to be the ARSCNView’s delegate.
At the end of the viewDidLoad() function, add:
sceneView.delegate = self
OK, time for everyone’s favorite step. Build and run that app!
Updating the Mesh Mask
Did you notice how the mesh mask is a bit… static? Sure, when you move your head around, it tracks your facial position and moves along with it, but what happens when you blink or open your mouth? Nothing.
How disappointing.
Luckily, this is easy to fix. You just need to add another ARSCNViewDelegate method!
At the end of your ARSCNViewDelegate extension, add the following method:
// 1
func renderer(
  _ renderer: SCNSceneRenderer,
  didUpdate node: SCNNode,
  for anchor: ARAnchor) {

  // 2
  guard let faceAnchor = anchor as? ARFaceAnchor,
    let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
      return
  }

  // 3
  faceGeometry.update(from: faceAnchor.geometry)
}
Here, you:
- Define the didUpdate version of the renderer(_:didUpdate:for:) protocol method.
- Ensure the anchor being updated is an ARFaceAnchor and that the node’s geometry is an ARSCNFaceGeometry.
- Update the ARSCNFaceGeometry using the ARFaceAnchor’s ARFaceGeometry.
Now, when you build and run, you should see the mesh mask form and change to match your facial expressions.
Emoji Bling
If you haven’t already done so, go ahead and download the material for this tutorial via the button at the top or bottom of the tutorial.
Inside, you’ll find a folder called SuperUsefulCode with some Swift files. Drag them to your project just below EmojiBlingViewController.swift. Select Copy items if needed, Create groups, and make sure that the Emoji Bling target is selected
StringExtension.swift includes an extension to String that can convert a String to a UIImage.
EmojiNode.swift contains a subclass of SCNNode called EmojiNode, which can render a String. It takes an array of Strings and can cycle through them as desired.
Feel free to explore the two files, but a deep dive into how this code works is beyond the scope of this tutorial.
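For reference, here's how EmojiNode is typically used, a short sketch pieced together from the calls you'll make later in this tutorial:

```swift
// Sketch of the EmojiNode API as used in this tutorial.
let noseNode = EmojiNode(with: ["👃", "🐽", " "])  // cycles through these options
noseNode.name = "nose"               // named so hit tests can identify it later
node.addChildNode(noseNode)          // attach it to the tracked face node

noseNode.next()                      // swap to the next emoji in the list
noseNode.updatePosition(for: vertices)  // center the node on some face vertices
```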
With that out of the way, it’s time to augment your nose. Not that there’s anything wrong with it. You’re already such a beautiful person. :]
At the top of your EmojiBlingViewController class, define the following constants:
let noseOptions = ["👃", "🐽", "💧", " "]
The blank space at the end of the array is so that you have the option to clear out the nose job. Feel free to choose other nose options, if you want.
Next, add the following helper function to your EmojiBlingViewController class:
func updateFeatures(for node: SCNNode, using anchor: ARFaceAnchor) {
  // 1
  let child = node.childNode(withName: "nose", recursively: false) as? EmojiNode

  // 2
  let vertices = [anchor.geometry.vertices[9]]

  // 3
  child?.updatePosition(for: vertices)
}
Here, you:
- Search node for a child whose name is "nose" and is of type EmojiNode.
- Get the vertex at index 9 from the ARFaceGeometry property of the ARFaceAnchor and put it into an array.
- Use a member method of EmojiNode to update its position based on the vertex. This updatePosition(for:) method takes an array of vertices and sets the node’s position to their center.
Note: ARFaceGeometry has 1220 vertices in it and index 9 is on the nose. This works, for now, but you’ll briefly read later about the dangers of using these index constants and what you can do about it.
It might seem silly to have a helper function to update a single node, but you will beef up this function later and rely heavily on it.
Now you just need to add an EmojiNode to your face node. Add the following code just before the return statement in your renderer(_:nodeFor:) method:
// 1
node.geometry?.firstMaterial?.transparency = 0.0

// 2
let noseNode = EmojiNode(with: noseOptions)

// 3
noseNode.name = "nose"

// 4
node.addChildNode(noseNode)

// 5
updateFeatures(for: node, using: faceAnchor)
In this code, you:
- Hide the mesh mask by making it transparent.
- Create an EmojiNode using your defined nose options.
- Name the nose node, so it can be found later.
- Add the nose node to the face node.
- Call your helper function that repositions facial features.
You’ll notice a compiler error because faceAnchor is not defined. To fix this, change the guard statement at the top of the same method to the following:
guard let faceAnchor = anchor as? ARFaceAnchor,
  let device = sceneView.device else {
    return nil
}
There is one more thing you should do before running your app. In renderer(_:didUpdate:for:), add a call to updateFeatures(for:using:) just before the closing brace:
updateFeatures(for: node, using: faceAnchor)
This will ensure that, when you scrunch your face up or wiggle your nose, the emoji’s position will update along with your motions.
Now it’s time to build and run!
Changing the Bling
Now, that new nose is fine but maybe some days you feel like having a different nose?
You’re going to add code to cycle through your nose options when you tap on them.
Open Main.storyboard and find the Tap Gesture Recognizer. You can find that by opening the Object Library at the top right portion of your storyboard.
Drag this to the ARSCNView in your View controller.
With Main.storyboard still open in the Standard editor, open EmojiBlingViewController.swift in the Assistant editor just like you did before. Now Control-drag from the Tap Gesture Recognizer to your main EmojiBlingViewController class.
Release your mouse and add an Action named handleTap with a type of UITapGestureRecognizer.
Now, add the following code to your new handleTap(_:) method:
// 1
let location = sender.location(in: sceneView)

// 2
let results = sceneView.hitTest(location, options: nil)

// 3
if let result = results.first,
  let node = result.node as? EmojiNode {

  // 4
  node.next()
}
Here, you:
- Get the location of the tap within the sceneView.
- Perform a hit test to get a list of nodes under the tap location.
- Get the first (top) node at the tap location and make sure it’s an EmojiNode.
- Call the next() method to switch the EmojiNode to the next option in the list you used when you created it.
It is now time. The most wonderful time. Build and run time. Do it! When you tap on your emoji nose, it changes.
More Emoji Bling
With a newfound taste for emoji bling, it’s time to add more bling.
At the top of your EmojiBlingViewController class, add the following constants just below the noseOptions constant:
let eyeOptions = ["👁", "🌕", "🌟", "🔥", "⚽️", "🔎", " "]
let mouthOptions = ["👄", "👅", "❤️", " "]
let hatOptions = ["🎓", "🎩", "🧢", "⛑", "👒", " "]
Once again, feel free to choose a different emoji, if you so desire.
In your renderer(_:nodeFor:) method, just above the call to updateFeatures(for:using:), add the rest of the child node definitions:
let leftEyeNode = EmojiNode(with: eyeOptions)
leftEyeNode.name = "leftEye"
leftEyeNode.rotation = SCNVector4(0, 1, 0, GLKMathDegreesToRadians(180.0))
node.addChildNode(leftEyeNode)

let rightEyeNode = EmojiNode(with: eyeOptions)
rightEyeNode.name = "rightEye"
node.addChildNode(rightEyeNode)

let mouthNode = EmojiNode(with: mouthOptions)
mouthNode.name = "mouth"
node.addChildNode(mouthNode)

let hatNode = EmojiNode(with: hatOptions)
hatNode.name = "hat"
node.addChildNode(hatNode)
These facial feature nodes are just like the noseNode you already defined. The only thing that is slightly different is the line that sets the leftEyeNode.rotation. This causes the node to rotate 180 degrees around the y-axis. Since the EmojiNodes are visible from both sides, this basically mirrors the emoji for the left eye.
If you were to run the code now, you would notice that all the new emojis are at the center of your face and don’t rotate along with your face. This is because the updateFeatures(for:using:) method only updates the nose so far. Everything else is placed at the origin of the head.
You should really fix that!
At the top of the file, add the following constants just below your hatOptions:
let features = ["nose", "leftEye", "rightEye", "mouth", "hat"]
let featureIndices = [[9], [1064], [42], [24, 25], [20]]
features is an array of the node names you gave to each feature and featureIndices are the vertex indexes in the ARFaceGeometry that correspond to those features (remember the magic numbers?).
You’ll notice that the “mouth” has two indexes associated with it. Since an open mouth is a hole in the mesh mask, the best way to position a mouth emoji is to average the position of the top and bottom lips.
Note: ARFaceGeometry has 1220 vertices, but what happens if Apple decides it wants a higher resolution? Suddenly, these indexes may no longer correspond to what you expect. One possible, robust solution would be to use Apple’s Vision framework to initially detect facial features and map their locations to the nearest vertices on an ARFaceGeometry.
Next, replace your current implementation of updateFeatures(for:using:) with the following:
// 1
for (feature, indices) in zip(features, featureIndices) {
  // 2
  let child = node.childNode(withName: feature, recursively: false) as? EmojiNode

  // 3
  let vertices = indices.map { anchor.geometry.vertices[$0] }

  // 4
  child?.updatePosition(for: vertices)
}
This looks very similar, but there are some changes to go over. In this code, you:
- Loop through the features and featureIndices that you defined at the top of the class.
- Find the child node by the feature name and ensure it is an EmojiNode.
- Map the array of indexes to an array of vertices using the ARFaceGeometry property of the ARFaceAnchor.
- Update the child node’s position using these vertices.
Go ahead and build and run your app. You know you want to.
Blend Shape Coefficients
ARFaceAnchor contains more than just the geometry of the face. It also contains blend shape coefficients. Blend shape coefficients describe how much expression your face is showing. The coefficients range from 0.0 (no expression) to 1.0 (maximum expression).
For instance, the ARFaceAnchor.BlendShapeLocation.cheekPuff coefficient would register 0.0 when your cheeks are relaxed and 1.0 when your cheeks are puffed out to the max like a blowfish! How… cheeky.
There are currently 52 blend shape coefficients available. Check them out in Apple’s official documentation.
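Reading a coefficient is just a dictionary lookup on the anchor's blendShapes property, which maps blend shape locations to NSNumber values. As a small sketch (cheekPuff is a real blend shape location; the 0.5 threshold is an arbitrary value chosen only for illustration):

```swift
import ARKit

// Sketch: blend shape values run from 0.0 (neutral) to 1.0 (maximum expression).
func isPuffingCheeks(_ anchor: ARFaceAnchor) -> Bool {
  let cheekPuff = anchor.blendShapes[.cheekPuff]?.floatValue ?? 0.0
  return cheekPuff > 0.5  // arbitrary threshold for this example
}
```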
Control Emoji With Your Face!
After reading the previous section on blend shape coefficients, did you wonder if you could use them to manipulate the emoji bling displayed on your face? The answer is yes. Yes, you can.
Left Eye Blink
In
updateFeatures(for:using:), just before the closing brace of the
for loop, add the following code:
// 1 switch feature { // 2 case "leftEye": // 3 let scaleX = child?.scale.x ?? 1.0 // 4 let eyeBlinkValue = anchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0.0 // 5 child?.scale = SCNVector3(scaleX, 1.0 - eyeBlinkValue, 1.0) // 6 default: break }
Here, you:
- Use a
switchstatement on the feature name.
- Implement the
casefor
leftEye.
- Save off the x-scale of the node defaulting to 1.0.
- Get the blend shape coefficient for
eyeBlinkLeftand default to 0.0 (unblinked) if it’s not found.
- Modify the y-scale of the node based on the blend shape coefficient.
- Implement the default
caseto make the
switchstatement exhaustive.
Simple enough, right? Build and run!
Right Eye Blink
This will be very similar to the code for the left eye. Add the following
case to the same
switch statement:
case "rightEye": let scaleX = child?.scale.x ?? 1.0 let eyeBlinkValue = anchor.blendShapes[.eyeBlinkRight]?.floatValue ?? 0.0 child?.scale = SCNVector3(scaleX, 1.0 - eyeBlinkValue, 1.0)
Build and run your app again, and you should be able to blink with both eyes!
Open Jaw
Currently, in the app, if you open your mouth, the mouth emoji stays between the lips, but no longer covers the mouth. It’s a bit odd, wouldn’t you say?
You are going to fix that problem now. Add the following
case to the same
switch statement:
case "mouth": let jawOpenValue = anchor.blendShapes[.jawOpen]?.floatValue ?? 0.2 child?.scale = SCNVector3(1.0, 0.8 + jawOpenValue, 1.0)
Here you are using the
jawOpen blend shape, which is
0.0 for a closed jaw and
1.0 for an open jaw. Wait a second… can’t you have your jaw open but still close you mouth? True; however, the other option,
mouthClose, doesn’t seem to work as reliably. That’s why you’re using
.jawOpen.
Go ahead and build and run your app one last time, and marvel at your creation.
Where to Go From Here?
Wow, that was a lot of work! Congratulations are in order!
You’ve essentially learned how to turn facial expressions into input controls for an app. Put aside playing around with emoji for a second. How wild would it be to create an app in which facial expressions became shortcuts to productivity? Or how about game where blinking left and right causes the character to move and puffing out your cheeks causes the character to jump? No more tapping the screen like an animal!
If you want, you can download the final project using the Download Materials link at the top or bottom of this tutorial.
We hope you enjoyed this face-tracking tutorial. Feel free to tweet out screenshots of your amazing emoji bling creations!
Want to go even deeper into ARKit? You’re in luck. There’s a book for that!™ Check out ARKit by Tutorials, brought to you by your friendly neighborhood raywenderlich.com team.
If you have any questions or comments, please join the forum discussion below! | https://www.raywenderlich.com/5491-ar-face-tracking-tutorial-for-ios-getting-started | CC-MAIN-2019-35 | refinedweb | 3,462 | 59.09 |
Getting Started with SASS (with Interactive Examples)
Have you always wanted to learn Sass, but never quite made your move? Are you a Sass user, but feel like you could use a brush up? Well then read on, because today we are going to review the features of Sass and some of the cool things you can do with it.
What is Sass?
Sass (Syntactically Awesome Style Sheets) is a CSS preprocessor. It is to CSS what CoffeeScript is to Javascript. Sass adds a feature set to your stylesheet markup that makes writing styles fun again.
So uh, how does it work?
Funny you should ask. There are several ways you).
Which one should you use? That depends on what you are doing.
I work with large scale e-commerce codebases, so Ruby Sass is a little slow when compiling large source sets. I use node-sass in my build system, but I have to remain wary of the fact that libsass is not in 100% feature parity with Ruby Sass.
If you aren't a command line person, the GUI apps are great. You can set them up to watch scss files, so when you edit them they will compile automatically.
If you want to just screw around, or share examples, I highly recommend Sassmeister. It is a web based Sass playground that I will be using throughout this article.
Whats the deal with .sass vs .scss?
When Sass first came out, the main syntax was noticably different from CSS. It used indentation instead of braces, didn't require semi-colons and had shorthand operators. In short, it looked a lot like Haml.
Some folks didn't take too kindly to the new syntax, and in version 3 Sass changed it's main syntax to .scss. SCSS is a superset of CSS, and is basically written the exact same, but with all the fun new Sass features.
That said, you can still use the original syntax if you want to. I personally use .scss, and I will be using the .scss syntax in this article.
Why would I use Sass?
Good question. Sass makes writing maintainable CSS easier. You can get more done, in less code, more readably, in less time.
Do you need more of a reason than that?
Set Up
Without any further ado, lets get this party started. If you want to try some of these concepts while following along, either:
- Install your compilation method of choice, and create a
style.scssfile.
Or
- Follow along on Sassmeister
Variables
Thats right, variables. Sass brings variables to CSS.
Acceptable values for variables include numbers, strings, colors, null, lists and maps.
Variables in Sass are scoped using the
$ symbol. Lets create our first variable:
$primaryColor: #eeffcc;
If you tried to compile this and didn't see anything in your CSS, you're doin' it right. Defining variables on their own doesn't actually output any css, it just sets it within the scope. You need to use it within a CSS declaration to see it:
$primaryColor: #eeffcc; body { background: $primaryColor; }
Speak of the devil (scope), did you know that Sass has variable scope? Thats right, if you declare a variable within a selector, it is then scoped within that selector. Check it out:
$primaryColor: #eeccff; body { $primaryColor: #ccc; background: $primaryColor; } p { color: $primaryColor; } // When compiled, our paragraph selector's color is #eeccff
But what if we want to set a variable globally from within a declaration? Sass provides a
!global flag that comes to our rescue:
$primaryColor: #eeccff; body { $primaryColor: #ccc !global; background: $primaryColor; } p { color: $primaryColor; } // When compiled, our paragraph selector's color is #ccc
Another helpful flag, particularly when writing mixins, is the
!default flag. This allows us to make sure there is a default value for a variable in the event that one is not provided. If a value is provided, it is overwritten:
$firstValue: 62.5%; $firstValue: 24px !default; body { font-size: $firstValue; } // body font size = 62.5%
Play with some variables below to see how the Sass you are writing is compiled to CSS:
Play with this gist on SassMeister.
Math
Unlike CSS, Sass allows us to use mathematical expressions! This is super helpful within mixins, and allows us to do some really cool things with our markup.
Supported operators include:
Before moving forward, I want to note two potential "gotchas" with Sass math.
First, because the
/ symbol is used in shorthand CSS font properties like
font: 14px/16px, if you want to use the division operator on non-variable values, you need to wrap them in parentheses like:
$fontDiff: (14px/16px);
Second, you can't mix value units:
$container-width: 100% - 20px;
The above example won't work. Instead, for this particular example you could use the css
calc function, as it needs to be interpereted at render time.
Back to math, lets create a dynamic column declaration, based upon a base container width:
$container-width: 100%; .container { width: $container-width; } .col-4 { width: $container-width / 4; } // Compiles to: // .container { // width: 100%; // } // // .col-4 { // width: 25%; // }
Awesome, right? Check out in the example below how we can further leverage Sass math to add margins. Play around with the values to see our example change:
Play with this gist on SassMeister.
Functions
The best part of Sass, in my opinion, are it's built in functions. You can see the full list here. It is EXTENSIVE.
Have you ever wanted to make a cool looking button, and then taken the time to mess around on a color wheel, trying to find the right shades for 'shadowed' parts?
Enter the
darken() function. You can pass it a color and a percentage and it, wait for it, darkens your color. Check this demo out to see why this is cool:
Play with this gist on SassMeister.
Nesting
One of the most helpful, and also misused features of Sass, is the ability to nest declarations. With great power comes great responsibility, so lets take a second to realize what this does, and in the wrong hands, what bad things it could do.
Basic nesting refers to the ability to have a declaration inside of a declaration. In normal CSS we might write:
.container { width: 100%; } .container h1 { color: red; }
But in Sass we can get the same result by writing:
.container { width: 100%; h1 { color: red; } }
Thats bananas! So what if we want to reference the parent? This is achieved by using the
& symbol. Check out how we can leverage this to add pseudo selectors to anchor elements:
a.myAnchor { color: blue; &:hover { text-decoration: underline; } &:visited { color: purple; } }
Now we know how to nest, but if we want to de-nest, we have to use the
@at-root directive. Say we have a nest set up like so:
.first-component { .text { font-size: 1.4rem; } .button { font-size: 1.7rem; } .second-component { .text { font-size: 1.2rem; } .button { font-size: 1.4rem; } } }
After realizing that the second component might be used elwhere, we have ourselves a pickle. Well, not really.
@at-root to the rescue:
Play with this gist on SassMeister.
Cool huh? Nests are a really great way to save some time and make your styles readable, but overnesting can cause problems with overselection and file size. Always look at what your sass compiles to and try to follow the "inception rule".
The Inception Rule: don’t go more than four levels deep.
via
If possible, don't nest more than four levels. If you, in a pinch, have to go five levels deep, Hampton Catlin isn't going to come to your house and fight you. Just try not to do it.
Imports
Easily my second favorite part of Sass, imports allow you to break your styles into separate files and import them into one another. This does wonders for organization and speed of editing.
We can import a .scss file using the
@import directive:
@import "grids.scss";
In fact, you don't even really need the extension:
@import "grids";
Sass compilers also include a concept called "partials". If you prefix a .sass or .scss file with an underscore, it will not get compiled to CSS. This is helpful if your file only exists to get imported into a master
style.scss and not explicitly compiled.
Extends & Placeholders
In Sass, the
@extend directive is an outstanding way to inherit already existing styles.
Lets use an
@extend directive to extend an input's style if it has an
input-error class:
.input { border-radius: 3px; border: 4px solid #ddd; color: #555; font-size: 17px; padding: 10px 20px; display: inline-block; outline: 0; } .error-input { @extend .input; border:4px solid #e74c3c; }
Please note, this does not copy the styles from
.input into
.error-input. Take a look at the compiled CSS in this example to see how it is intelligently handled:
Play with this gist on SassMeister.
But what about if we want to extend a declaration with a set of styles that doesn't already exist? Meet the placeholder selector.
%input-style { font-size: 14px; } input { @extend %input-style; color: black; }
The placeholder selector works by prefixing a class name of your choice with a
% symbol. It is never rendered outright, only the result of its extending elements are rendered in a single block.
Check out below how our previous example works with a placeholder:
Play with this gist on SassMeister.
Mixins
The mixin directive is an incredibly helpful feature of Sass, in that it allows you to include styles the same way
@extend would, but with the ability to supply and interperet arguments.
Sass uses the
@mixin directive to define mixins, and the
@include directive to use them. Lets build a simple mixin that we can use for media queries!
Our first step is to define our mixin:
@mixin media($queryString){ }
Notice we are calling our mixin
media and adding a
$queryString argument. When we include our mixin, we can supply a string argument that will be dynamically rendered. Lets put the guts in:
@mixin media($queryString){ @media #{$queryString} { @content; } }
Because we want our string argument to render where it belongs, we use the Sass interpolation syntax,
#{}. When you put a variable in between the braces, it is printed rather than evaluated.
Another piece of our puzzle is the
@content directive. When you wrap a mixin around content using braces, the wrapped content becomes available via the
@content directive.
Finally, lets use our mixin with the
@include directive:
.container { width: 900px; @include media("(max-width: 767px)"){ width: 100%; } }
Check out the demo below to see how our new mixin renders media queries:
Play with this gist on SassMeister.
Function Directives
Function directives in Sass are similar to mixins, but instead of returning markup, they return values via the
@return directive. They can be used to DRY (Don't repeat yourself) up your code, and make everything more readable.
Lets go ahead and create a function directive to clean up our grid calculations from our grid demo:
@function getColumnWidth($width, $columns,$margin){ @return ($width / $columns) - ($margin * 2); }
Now we can use this function in our code below:
$container-width: 100%; $column-count: 4; $margin: 1%; .container { width: $container-width; } .column { background: #1abc9c; height: 200px; display: block; float: left; width: getColumnWidth($container-width,$column-count,$margin); margin: 0 $margin; }
Pretty cool, eh?
Demo
Now that we have all these tools at our disposal, how about we build our own configurable grid framework? Lets roll:
Lets begin by creating a map of settings:
$settings: ( maxWidth: 800px, columns: 12, margin: 15px, breakpoints: ( xs: "(max-width : 480px)", sm: "(max-width : 768px) and (min-width: 481px)", md: "(max-width : 1024px) and (min-width: 769px)", lg: "(min-width : 1025px)" ) );
Next lets write a mixin that renders our framework:
@mixin renderGridStyles($settings){ }
We are going to need to render markup for each breakpoint, so lets iterate through our breakpoints and call our media mixin. Lets use the
map-get method to get our breakpoint values, and our
@each directive to iterate through our breakpoints:
@mixin renderGridStyles($settings){ $breakpoints: map-get($settings, "breakpoints"); @each $key, $breakpoint in $breakpoints { @include media($breakpoint) { } } }
We need to render the actual grid markup within our iteration, so lets create a
renderGrid mixin. Lets use the
map-get method to get our map values, and our
@while directive to iterate through columns with
$i as our index. We render our class name using interpolation.
@mixin renderGrid($key, $settings) { $i: 1; @while $i <= map-get($settings, "columns") { .col-#{$key}-#{$i} { float: left; width: 100% * $i / map-get($settings,"columns"); } $i: $i+1; } }
Next, lets add container and row styles:
.container { padding-right: map-get($settings, "margin"); padding-left: map-get($settings, "margin"); margin-right: auto; margin-left: auto; } .row { margin-right: map-get($settings, "margin") * -1; margin-left: map-get($settings, "margin") * -1; }
It's alive! Check out the demo of our framework below:
Play with this gist on SassMeister.
Wrap Up
You may reach this point and think that we have covered quite a bit of Sass, but really it is just the tip of the iceberg. Sass is an extremely powerful tool that you can do some really incredible things with.I look forward to following up with an article on advanced concepts, but until then Happy Sassing and check out some of the resources below:
Resources
- The Sass Way - A phenomenal source of Sass tutorials.
- Hugo Giraudel - An amazing tech writer & Sass wizard with a keen focus on Sass.
- SassNews - A twitter account managed by Stuart Robson that will keep you in the know. | https://scotch.io/tutorials/getting-started-with-sass | CC-MAIN-2018-43 | refinedweb | 2,258 | 64.91 |
In this tutorial we will be going over how you can program a button to distinguish a long press versus a short press.!
The following code distinguishes between short presses, long presses and hold presses of buttons. For our code we decided that a short press is less than 300 milliseconds while a long press is between 300 milliseconds and 2000 milliseconds and a hold is greater than 2000 milliseconds. You can adjust these values as you see fit by changing the variables at the top of the code "shortPress" and "longPress" follow along through the comments in the code to understand what is going on.
/*
This code distinguises between long and short presses of a button to give your buttons more use!
*/
#include <WiFi.h>
#include <Wia.h>
const int button = 17;
int pressed = 0;
int released = 0;
int timeElapsed;
int shortPress = 300; // this means that all short presses must be less that 300ms
int longPress = 2000; // this means that all long presses must be less that 2000ms but greater than 300ms
Wia wiaClient = Wia();
void setup() {
WiFi.begin();
delay(2500);
pinMode(button, INPUT_PULLDOWN);
}
void loop() {
if(digitalRead(button) == HIGH)
{
pressed = millis(); // this finds the start time of when the button is pressed
while(digitalRead(button) == HIGH)
{// for as long as the button is pressed this will loop and the released time is recorded
released = millis();
}
timeElapsed = released - pressed; // this calculates the time the button was pressed for
if(timeElapsed < shortPress)
{/* if the time the button was pressed is less than the time defined to be a short press then we create
an event "shortPress" */
wiaClient.createEvent("shortPress");
}
else if(timeElapsed > shortPress && timeElapsed <longPress)
{/* if it is greater than shorPress but less than longPress's upper bound, then we can conlude it was a
long press and create an event "longPress"*/
wiaClient.createEvent("longPress");
}
else
{// if it is greater than the upper bound of long press than we can consider it a hold
wiaClient.createEvent("holdPress");
}
delay(100);// this delay keeps you from registering two short presses by accident
}
}
Then deploy your code to your button and you're all set! Congratulations you have made a single button have three different purposes!
If you want to learn more about how to use your new button skills with Wia's IoT devices check out some of our other projects and tutorials here. | https://community.wia.io/d/98-how-to-make-a-button-multifunction | CC-MAIN-2020-10 | refinedweb | 392 | 59.47 |
Greetings,
I am creating an applet to draw a shape based on the selected item within the awt choice object.
It currently draws the selected shape based on selection and quickly dissappears.
I believe that I should be doing all drawing elements from the paint method (which I am not).
I have created seperate methods to draw the element when the item state has changed.
Can someone please help explain how I can draw the shapes and when I need to repaint (what am I doing wrong?) .
Thanks
Rob
package wk8exercise3; import java.applet.Applet; import java.awt.*; import java.awt.Graphics; import java.awt.event.*; /** * * @author Rob */ public class wk8Exercise3 extends Applet implements ItemListener { Choice myChoice; int rectX; int rectY; int rectWidth ; int rectHeight; String shape; public void init() { // Create the choice and add some choices myChoice = new Choice(); myChoice.addItem("Pick a shape to draw"); myChoice.addItem("Draw a rectangle"); myChoice.addItem("Draw a Line"); myChoice.addItem("Draw an Oval"); add(myChoice); myChoice.addItemListener(this); } public void itemStateChanged (ItemEvent e) { // Declare integer for use with index of choice int Selection; Selection = myChoice.getSelectedIndex(); // Declare variables to hold paramater to be used to draw selected item if (Selection == 1) { drawARectangle(50,50,100,100); } if (Selection == 2) { drawALine(50,50,200,50); } if (Selection == 3) { drawAnOval(50,50,200,50); } } public void paint(Graphics g) { // Not sure what to do here } public void drawARectangle(int RectX, int RectY, int RectWidth, int RectHeight) { repaint(); Graphics g = getGraphics(); g.drawRect(RectX, RectY, RectWidth, RectHeight); } public void drawALine(int lineX1, int lineY1, int lineX2, int lineY2) { repaint(); Graphics g = getGraphics(); g.drawLine(lineX1,lineY1,lineX2,lineY2); } public void drawAnOval(int ovalX, int ovalY, int ovalWidth, int ovalHeight) { repaint(); Graphics g = getGraphics(); g.drawOval(ovalX, ovalY, ovalWidth, ovalHeight); } } | https://www.daniweb.com/programming/software-development/threads/184423/java-applet-draw-graphics-based-on-java-awt-choice | CC-MAIN-2021-17 | refinedweb | 295 | 57.37 |
.NET Framework: Use Your Own Cache Wrapper to Help Performance
Introduction
Welcome to this installment of the .NET Nuts & Bolts column. In this article we're going to explore a technique that I've seen used a number of times to help improve performance of repetitive data retrieval. We'll explore it from a web application point of view and a traditional windows application point of view. It will involve building some custom objects to hold reference or other lookup data.
Common Data
First we'll start with an explanation of what I consider to be common data. This doesn't mean it is the only type of data you could use this for, but rather that it is data that I commonly do use it for. I tend to look for any data that I store as a key value pair in a database where I might want to retrieve it and store it in a dictionary or hashtable object. It is extremely common to store data in an abbreviated form, but yet present it in a different manner. For example, state codes such as IN are commonly stored in the database, but the full Indiana may be used in the display. I also consider application configuration settings to be common data.
Building Your Container
After having identified your common data, the next thing we need to consider is the container to hold it around. I tend to consider myself a pragmatic programmer, meaning, not over and not under engineering things. Below you will find a very basic object that I commonly use for holding application configuration settings. It works for all kinds of stuff. In this case it presents an array that can be indexed by a name, which in turn just goes and checks a hash table for the requested value.
public class AppSettings { // This indexer is used to retrieve AppSettings from Memory. public string this[string name] { get { if (string.IsNullOrEmpty(name)) return string.Empty; // Make sure we have an AppSettings cache item loaded if (HttpContext.Current.Cache["AppSettings"] == null) { App.AppSettings.LoadAppSettings(); } Hashtable ht = (Hashtable)HttpContext.Current.Cache[settingKey]; if (ht.ContainsKey(name)) { if (ht[name] != null) { return ht[name].ToString(); } return string.Empty; } else return string.Empty; } } // This Method is used to load the app settings from the database into memory. public static void LoadAppSettings() { Hashtable ht = new Hashtable(); // Code goes here to get your data from a database. // my omitted code used LINQ to Entities to populate a results object // based on my entity definition foreach (var appSetting in results) { if (ht.ContainsKey(appSetting.vc_Name)) { ht.Remove(appSetting.vc_Name); ht.Add(appSetting.vc_Name, appSetting.vc_Value } else { ht.Add(appSetting.vc_Name, appSetting.vc_Value); } } // Remove the items from the cache if they are already there if (HttpContext.Current.Cache[settingsKey] != null) HttpContext.Current.Cache.Remove(settingsKey); // Add it into Cache HttpContext.Current.Cache.Add(settingsKey, ht, null, System.Web.Caching.Cache.NoAbsoluteExpiration, new TimeSpan(1, 0, 0), System.Web.Caching.CacheItemPriority.NotRemovable, null); } } }
The following example code represents the simple call you would make to get an item from your application settings. You may want to consider making it a static object as well to avoid some slight goofy syntax for when you use it, which you'll see following the example.
string siteDisplayName = new AppSettings()["Site.Display.Name"]); string redirectURLWithoutWWW = new AppSettings()["Site.URL.Without"]);
Using it in a Windows Application
The example I gave above was implemented from an ASP.NET framework perspective as it used the ASP.NET cache mechanism to persist the data container. Within a Windows application I've traditionally just used a static object. In the ASP.NET application you have to worry about the item persisting across round trips to the server, which is why the cache is used since that is the storage mechanism for that data. In a windows application you don't have that issue and a good old static object should do the trick. An adjustment you could consider when using it for a windows application is to use the
WeakReference object to allow your objects to still be cleaned up by the garbage collector if needed, and have a mechanism to pull it again from the source to put it back. This may be another future article topic.
Multiple Tenant
Some additional food for thought around the application settings above are in regard to multiple tenant / instance applications. For example, a web site that presents different faces and data, but under the same code base and hardware would be a multiple instance application. By introducing a tenant / instance ID into the application settings above, we can store all the settings in one central location, but yet keep them separate. It is even possible to use this to build a hierarchy of permissions where there are default values that can easily be overridden in each instance. It's likely the topic of my next article.
Summary
We have looked at what I consider to be common data and some techniques for caching it to have it readily available in your applications. It can be particularly useful for configuration settings as seen in the highlighted examples.
Future Columns
The topic of the next column is yet to be determined. If you have something else in particular that you would like to see explained here you could reach me through.
Related Articles
- .NET Framework: Monitor a Directory for a New File and Use FTP Upload to Copy It
- Consuming an RSS Feed with the .NET Framework
- Introduction to Parallel Programming in the .NET Framework | https://www.codeguru.com/csharp/article.php/c18333/NET-Framework-Use-Your-Own-Cache-Wrapper-to-Help-Performance.htm | CC-MAIN-2019-35 | refinedweb | 928 | 56.76 |
Razorberry's Adobe Flash Blog since 2004! 2009-02-19T16:24:04Z hourly 1 2000-01-01T12:00+00:00 MAX thoughts 2008-11-28T22:43:28Z Ash <![CDATA[General]]> MAX North America finished over a week ago, and I have to say, it was an invaluable experience. If you ever have the chance to convince your boss to let you go, I would definitely recommend it. If not for the stuff you might learn or pick up in the sessions or labs, but for [...] World of Cars 2008-10-07T20:42:40Z Ash <![CDATA[General]]> [...] AS3: Removing duplicates from an array 2008-09-04T03:22:17Z Ash <![CDATA[Actionscript]]> Just for shiggles (yes, I said it.), removing duplicate items from an array in one line of ActionScript 3: var arr:Array = ["a","b","b","c","b","d","c"]; var z:Array = arr.filter(function (a:*,b:int,c:Array):Boolean { return ((z ? z : z = new Array()).indexOf(a) >= 0 ? false : (z.push(a) >= 0)); }, this); trace(z); AIR in Action - Now in stores! 2008-08-18T00:35:15Z Ash <![CDATA[General]]> [...] Tiny audio streamer 2008-06-06T15:03:42Z Ash <![CDATA[General]]> I found this javascript-controlled audio streamer I made a while ago. It weighs in at a massive 1.2k and is very basic. Maybe you could make it a personal challenge to make it smaller! Download the swf here. (right click & save). Source (AS2): Main.as MiniStreamer.as How to use: Basically you need to embed the swf at whatever size you would [...] Messing around with papervision again 2008-06-05T21:53:32Z Ash <![CDATA[Actionscript]]> Nothing too spectacular, but I threw together a new pointless animation for my front page. See it here. One day I will have enough varied content to warrant a menu or something. Here is the source, for learning purposes. It won't compile without a couple of files, but you can get the idea Razor Component Trac now up and running 2008-04-22T21:04:34Z Ash <![CDATA[Hot]]> After a week of inserting asdoc comments, I uploaded the source to the Razor Component Framework to my public svn. 
The components also have a new project page, at: The ticket system is open to submission as long as it doesn't get spammed, and I will also be adding more documentation and examples on the wiki [...] Announcing the Razor Component Framework 2008-04-08T20:12:42Z Ash <![CDATA[General]]> I've posted about the components I've been working on as a side project for a long while now. Finally, I'm just about ready to release them to the public in a beta state. The Razor Component Framework is intended to be a lightweight, yet feature-rich alternative to the mx framework for Flash, Flex and AIR. It [...] AS3 Dynamic Speech Bubble Snippet 2008-01-29T03:25:27Z Ash <![CDATA[Actionscript]]>: AS: SpeechBubble.drawSpeechBubble(target:Sprite, [...] I am not Singularity.. 2008-01-26T19:50:39Z Ash <![CDATA[Discussion]]> ..until I find out what it is. But kudos to Aral for his viral marketing strategy | http://feeds.feedburner.com/Razorberry | crawl-002 | refinedweb | 515 | 68.67 |
WARNING
Since version 1.5 the pyDoubles API is provided as a wrapper to doublex. However, there are small differences. pyDoubles matchers are not supported anymore, although you may get the same feature using standard hamcrest matchers. Anyway, old pyDoubles matchers are provided as hamcrest aliases, so your old pyDoubles tests should work fine with minimal changes.
In most cases the only required change in your code is the module name, that change from:
import pyDoubles.framework.*
to:
from doublex.pyDoubles import *
If you have problems migrating from pyDoubles to doublex, please ask for help in the discussion forum or in the issue tracker.
Documentation is available at. | https://bitbucket.org/carlosble/pydoubles/src | CC-MAIN-2017-43 | refinedweb | 108 | 59.6 |
During the last three years I have reviewed many pull requests of React applications. I consistently observed in different developers some practices that could be improved just by keeping in mind the following sentence:
I am writing code for other people.
Why write code for humans?
Either if you are writing enterprise applications or creating an open source project, your code is going to be read and maintained by humans. This is a mantra you must always keep in mind.
Some readers may argue that code runs on machines, so if the code is not efficient you cannot consider it good code. That's a fair point, but if the code is readable but not efficient, it will be easier to see where to change it to make it faster.
Good code that is developer-friendly has several advantages.
It is more pleasant to read and easier to understand.
Reduces onboarding time. Development teams sometimes need more capacity, so new staff or consultants may join the team. In those cases, human-centered code makes onboarding much smoother and less costly.
Takes less time to maintain. It is very common to spend a lot of time on an application/library, then release it and not modify it for a while. One day, after some months, you need to change something and... guess what, now you don't remember what you did, so you need to read your own code.
Dos and don'ts to make : Recipes / Tips
We'll start with some general JavaScript tips and then move to more specific tips for React.
Do use significant names in variables.
Whenever you create a variable ask yourself: Does the name of a variable convey what is the content of the variable?
In general, follow these rules:
- Use the shortest name,
- But also be as precise as possible.
// ❌ Not good const list = ['USA', 'India', 'Peru', 'Spain'] list.map(item => console.log(item)) // ✅ Better const countries = ['USA', 'India', 'Peru', 'Spain'] countries.map(country => console.log(country))
In general do not use generic names such as
list,
item, they are short but not very meaningful. A list can contain anything and it will not give any clue about its contents to the reader of your code. A more precise name, such as
countriesin the example above, is better.
Also, I personally prefer to avoid acronyms in variables as they may be harder to understand for junior/new developers.
// ❌ Not that good const handleClk = e => { console.log("User clicked the button" + e.current.value) } // ✅ Better const handleClick = event => { console.log("User clicked the button" + event.current.value) }
This "rule" makes the code more verbose but also easier to understand.
In other languages like Python it is common to use acronmys/abreviated versions - for example when importing modules - which is somewhat fine as these are widely spread conventions across existing documentation, examples and even novice learners.
# Typical way of renaming modules in python import numpy as np import tensorflow as tf import seaborn as sns
The rational of this convention is to type less, be more productive (Now with autocomplete of the editors is no longer true), make the code less verbose and "faster" to read for expert eyes.
Following this idea, there may be cases in JavaScript in which you use shorter versions, for example:
// doc instead of document const doc = createNewDocument()
As summary, do give some thought when naming variables in your code. I believe this is one of the hardest part of software development and it differentiates good developers from bad developers.
Do use consistent names across the app.
Give good names to variables is not enough, they have to be consistent across the whole react app.
To solve complex problems we create small independent logic units. We follow the strategy of divide and conquer to make it easier. We implement components in an isolated way, they have some inputs and throw some output. However, we should not forget these units belong to a higher order organism, your application.
Ask yourself upon creating a variable, function, component or a file, if its name is consistent with the names already used in the application. Example:
// ❌ Not that good //File1.jsx const sectorsData = useSelector(sectorsSelector) //File2.jsx const sectorsList = useSelector(sectorsSelector) // ✅ Better //File 1 const sectors = useSelector(sectorsSelector) //File 2 const sectors = useSelector(sectorsSelector)
For files:
/redux/constants/<entity>Constants.js
/redux/actions/<entity>Actions.js
/redux/selectors/<entity>Selector.js
- etc..
Do follow the Don't repeat yourself (DRY) principle.
That is, if you see that your are repeating similar code or logic in two places, refactor that code to use a function, component, etc.
// ❌ Not that good const getPdfName = (country) => { const now = new Date() const pdfName = `${country}-${now.getFullYear()}-${now.getMonth()}-${now.getDay()}.pdf` return pdfName } const getExcelName = (country) => { const now = new Date() const xlsName = `${country}-${now.getFullYear()}-${now.getMonth()}-${now.getDay()}.xls` return xlsName } // ✅ Better const buildFilename = (name, extension) => { const now = new Date() return `${name}-${now.getFullYear()}-${now.getMonth()}-${now.getDay()}.${extension}` } const gePdfName = (country) => { return buildFileName(country, '.pdf') } const getExcelName = (country) => { return builExcelName(country, '.xls') }
Do keep files short
I use 200 lines as a benchmark. Specially when we talk about React components, if you have a file that has more than 200 lines, ask yourself if you can split it in smaller components.
Also, if the large majority of your component code is for fetching and processing data, think about moving that code to support/helper files. For example, you can create a folder
/src/lib/ and keep there your utility functions.
Also, it is not advisable to have more than a certain amount of files (~10-20) in the same folder. Structuring the folder into sub-folders makes the project more readable.
Do create a compact code.
// ❌ Not that good const handleClick = newValue => { const valueAsString = newValue.toString() if (onClick !== undefined) { onClick(valueAsString) } }; // ✅ Better // Previous code in 1 single line. const handleClick = newValue => onClick && onClick(newValue.toString())
Although compact code as a general principle is good, it may sometimes obfuscate what is the code actually doing. So:
Do document your code.
Specially for helper functions the interface needs to be clear.
Do include comments for pieces of code that may not be very obvious. Example:
// ❌ Not that good editor.countWorths= nodes => { const content = editor.serialize(nodes); return content.length ? content.match(/\b[-?(\w+)?]+\b/gi).length : 0; } // ✅ Better /** * Counts the number of words within the passed nodes * * @param {Node} SlateJS nodes * @returns {integer} Number of words */ editor.countWords = nodes => { const content = editor.serialize(nodes); // one string with all node contents //Extracts number of words with the regex unless empty string (0) return content.length ? content.match(/\b[-?(\w+)?]+\b/gi).length : 0; };
Do use linters and code formatters
Linters are code analyzers that provide estilistic suggestions. The most widely spread in Javascript is esLint. Setting it up in a react application is pretty easy.
The other tool that will make your code more readable and save you time is a code formatter. It will indent and break the lines of your code. It will really make your code much easier to read and will save you time. In JavaScript we are lucky, we have prettier that formats your code on save.
Do use
on and
handle as prefix on event props and handlers
This is a de facto standard on React naming conventions. It is widely used on the official react documentation and gives the reader a cue on what is the prop for.
For event props use
on as prefix (for instance,
onClick,
onSubmit,
onBlur).
For the handlers of those events use the prefix
handle (for instance,
handleClick,
handleSubmit,
handleBlur).
// ❌ Not that good export default function SendEmailForm (sendEmail) { /// process / validate email form sendEmailWasClicked(event) { sendEmail && sendEmail(formFields) } return( <form> ... <input type="submit" onClick={sendEmailWasClicked}> Send email </input> ... </form> ) // ✅ Better export default function SendEmailForm (onSendEmail) { handleSubmit(email) { // process email info // ... // onSendEmail && onSendEmail(email) } return( <form> ... <input type="submit" onClick={handleSubmit()}> Send email </input> ... </form> )
Do not add handler code in the render
In my experience it makes the code harder to read when the logic of the handler is within the render.
// ❌ Not that good <button onClick={() => { if (name==='') { setError("Name is mandatory") return } if (surname==='') { setError("Name is mandatory") return } onSubmit && onSubmit({name, surname}) }}>Submit</button> // ✅ Better const handleOnSubmit = () => { if (name === '') { setError("Name is mandatory") return } if (surname === '') { setError("Surname is mandatory") return } onSubmit && onSubmit({name, surname}) } ... return( ... <button onClick={handleOnSubmit}>Submit</button> ... )
One liners may be ok to make code more compact.
Example:
// ✅ This is ok return ( <button onClick={() => onCancel && onCancel()}> Cancel </button> )
Do use
const by default
Whenever you create a variable use
const by default. Use
let
only when it is going to be assigned several times. Avoid
var.
It will save you some hard to find bugs.
// ❌ Not that good let today = new Date() // Today 99.9999999% won't be reasigned // ✅ Better const today = new Date()
Note that you assign a variable when the
name is in front of an
=. So you can modify Arrays and Objects as constants.
// ✅ This will run let day = new Date() day = new Date() // ❌ It will not run const day = new Date() day = new Date() // you cannot reasign a const // ✅ This will run const myObject = { a: 'prop created during assignment' } myObject.b = {b: 'object content can be modified after creation'} const animals = [ 'dog', 'cat'] animals.push('lion')
Only when you put a
const before
= more than once, the code won't run.
Do use the best maping function in arrays
- Use
map()for returning an array with the same number of elements.
const numbers = [1, 2, 3] const double = numbers.map( number => (2 * number)) // [2, 4, 6]
Use
filter()for returning the items that match a criterium.
const numbers = [1, 2, 3] const double = numbers.filter( number => (number > 1)) // [2, 3]
Use
find()for searching the first item that matches a cirterium.
const numbers = [1, 2, 3] const double = numbers.find( number => (number > 1)) // [2]
Use
forEach()for not returing an array.
const list = [1, 2, 3] let sum = 0 list.forEach( number => sum += number) // 6
Do handle situations in which there is no value
Example:
// ❌ Not that good export default function MyForm(value, onSubmit) { //... const handleOnSubmit => (newValue) => { // do whatever other transformations onClick(newValue) } //... return ( {/* this assumes input handles null or empty values correctly */} <Input value={value} /> <Button onSubmit={handleOnSubmit}>Submit</Button> } // ✅ Better const default function MyForm(value, onSubmit) { //... const handleOnSubmit = () => { // It won't do anything BUT won't crash. onClick && onClick(values) } //... }
Example 2:
// ❌ Not that good export default function IndicatorsList({sectors}){ return( <ul> {sector.indicators.map(indicator => <li key={indicator.id}>{indicator.text}</> )} </ul> } // ✅ Better //It receives the indicator list export default function IndicatorsList({indicators}) { indicators = indicators || [] (indicators.length == 0) ? ( <p>No indicators</p> ) : ( <ul> {indicators.map ( indicator => <li key={indicator.id}>{indicator.text}</> )} <ul> )
Be consistent on the order in which you write the code.
Always follow the same order of the imports, variables and functions within the code of the components.For example, I like the folowing order:
- imports
- state, variables and constants
useEffects
- effect handlers (
handleOnClick, etc.)
return()function
- Prop defaults and PropTypes
Indeed, for the imports, you may even define an actual order:
- React related stuff
- General such as react-router
- External UI related components
- Redux actions, selectors
- Hooks
- Custom Application UI components
Do add validations for fields and handle form errors.
Gererally, when you read a tutorial or watch a video that teaches react or any other library/programming language, they do not manage errors other than display a console message. Their code is simple, but in real applications user may fill unexpected data, there may be network errors, API may have bug, the user may not have permissions to access a resource, or your authentication token may have expired. Your code has to manage all these situations gracefully and display the appropriate feedback to the user so he can recover from them.
Types of errors and how to manage them from the user experience and from the code point of view is something that requires a deep dive, but we will leave that for another article.
Wrapping up
Always keep in mind:
I write code for other people.
So always try to think if a reader would understand it. Code being consistent, using meaninful variables, document the code, and follow some wide spread conventions. developer (human) friendly code will be much easier to maintain, less prone to errors and if a new team member joins, she/he will be on boarded and productive in less time.
Note that the above mentioned dos and don'ts are general guidelines, and some of the recommendationsmay have corner cases in which you can argue not to follow them, in those cases use your common sense.
Discussion (1)
99% of my job as a developer: naming things. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/merlos/write-code-for-people-dos-and-donts-to-improve-your-react-code-59j7 | CC-MAIN-2022-33 | refinedweb | 2,137 | 56.25 |
28837/how-to-mix-read-and-write-on-python-files-in-windows
It appears that a write() immediately following a read() on a file opened with r+ (or r+b) permissions in Windows doesn't update the file.
Assume there is a file testfile.txt in the current directory with the following contents:
This is a test file.
I execute the following code:
with open("testfile.txt", "r+b") as fd:
print fd.read(4)
fd.write("----")
I would expect the code to print This and update the file contents to this:
This----a test file.
This works fine on at least Linux. However, when I run it on Windows then the message is displayed correctly, but the file isn't altered - it's like the write() is being ignored. If I call tell() on the filehandle it shows that the position has been updated (it's 4 before the write() and 8afterwards), but no change to the file.
However, if I put an explicit fd.seek(4) just before the write() line then everything works as I'd expect.
Does anybody know the reason for this behaviour under Windows?
For reference I'm using Python 2.7.3 on Windows 7 with an NTFS partition.
EDIT
In response to comments, I tried both r+b and rb+ - the official Python docs seem to imply the former is canonical.
I put calls to fd.flush() in various places, and placing one between the read() and the write()like this:
with open("testfile.txt", "r+b") as fd:
print fd.read(4)
fd.flush()
fd.write("----")
... yields the following interesting error:
IOError: [Errno 0] Error
EDIT 2
Indirectly that addition of a flush() helped because it lead me to this post describing a similar problem. If one of the commenters on it is correct, it's a bug in the underlying Windows C library.
Firstly we will import pandas to read ...READ MORE
Use the traceback module:
import sys
import traceback
try:
...READ MORE
Is it easy to read a line ...READ MORE
Hi, there is a very simple solution ...READ MORE
You know what has worked for me ...READ MORE
Here's the short answer:
disable any antivirus
OR | https://www.edureka.co/community/28837/how-to-mix-read-and-write-on-python-files-in-windows | CC-MAIN-2019-22 | refinedweb | 369 | 75.61 |
The
lang attribute specifies the primary language used in contents and attributes containing text content of particular elements.
There is also an
xml:lang attribute (with namespace). If both of them are defined, the one with namespace is used and the one without is ignored.
In SVG 1.1 there was a
lang attribute defined with a different meaning and only applying to
<glyph> elements. That attribute specified a list of languages in BCP 47 format. The glyph was meant to be used if the
xml:lang attribute exactly matched one of the languages given in the value of this parameter, or if the
xml:lang attribute exactly equaled a prefix of one of the languages given in the value of this parameter such that the first tag character following the prefix was "-".
All elements are using this attribute.
<svg viewBox="0 0 200 100" xmlns=""> <text lang="en-US">This is some English text</text> </svg>
Usage notes
<language-tag>
This value specifies the language used for the element. The syntax of this value is defined in the BCP 47 specification.
The most common syntax is a value formed by a lowercase two-character part for the language and an uppercase two-character part for the region or country, separated by a minus sign, e.g.
en-USfor US English or
de-ATfor Austrian German.
Specifications
Browser compatibility
Legend
- Compatibility unknown
- Compatibility unknown | https://developer.mozilla.org/ru/docs/Web/SVG/Attribute/lang | CC-MAIN-2020-45 | refinedweb | 234 | 54.42 |
How to disable default translation values in Django?
Some tags give me word translation without setting up the * .po file.
{% trans "groups" %} {% trans "users" %}
Unfortunately, they will not be overridden when the * .po file is created and run:
django-admin.py compilemessages
So how do I get rid of the default translations? I would prefer a project level solution because I don't want to change the main Django files.
+3
source to share
2 answers
There are several ways to override it
- set your language path to LOCALE_PATHS in the preferences file, this will give your translations a higher priority.
- Modify the file which is different from the one Django is using. Then specify the translations for the languages to be used. Msgid can be anything plus the base string as long as it is unique and translatable, such as a namespace prefix:
{% trans "my:groups" %}
- Context markers for Django1.3 + , then it looks like
{% trans "groups" context "my" %}
+5
source to share
I made it easier. Instead of setting the language as en, fr, ru and else, I add the 't_' prefix, so I use po from dirs like t_en, t_ru, t_fr
-1
source to share | https://daily-blog.netlify.app/questions/1897312/index.html | CC-MAIN-2021-43 | refinedweb | 198 | 72.05 |
C# Programming > Windows Forms
The user interface of a C# application is an important part of a program. A user-friendly interface keeps things organized. Having buttons and textboxes all over the place makes the C# program look messy and unprofessional.
As some of you may know, Visual Studio has a menu named Format which gives several commands to align different controls. This isn't C# code by the way, this is still in the design-time of the application.
The tools under the Format menu are excellent and they really make it simple to create a beautiful, clean user interface in C#.Net. But what happens when you have dynamic elements in the user interface? Say for example, that a Label changes its text at run-time but you want to keep it centered in the Form.
In order to use the same tools from the Format menu, we are going to have to write the equivalent funcitonality in C# code, so it can be called programmatically at run-time. Programmatically aligning user controls allows programmers to create a dynamic user interface in C#.
The functions themselves will not be very difficult or complex. They just have to be well organized and in this case, be static functions to make it possible to call them in C# without having to intialize a class.
So for example, here's a code-snippet from the Align C# class:
public class UI { public static class CenterInForm { public static void Horizontally(Form parentForm, Control control) { Rectangle surfaceRect = parentForm.ClientRectangle; control.Left = (surfaceRect.Width / 2) - (control.Width / 2); } public static void Vertically(Form parentForm, Control control) { Rectangle surfaceRect = parentForm.ClientRectangle; control.Top = (surfaceRect.Height / 2) - (control.Height / 2); } } }
Notice that the funcitons have a lot of room for improvement. Centering does not have to be limited to a parent Form, it can be applied to any other container, such as a GroupBox or a Panel.
Download the full source code down below to get the rest of the aligning user-interface functions. They all work based on two user interface elements, but you could easily modify it to work with an array of .NET controls instead. Used correctly you can design a dynamic user interface in C# that stays neat and organized... | http://www.vcskicks.com/align-user-interface.php | CC-MAIN-2018-13 | refinedweb | 378 | 54.93 |
hello
i am making a program where the user enters the hours and minutes in military time(24hrs) and the program will output the time in regular time(12hrs). i need to use funtions, one for the input, one for the coverting, and one for the output. im new to funtions and cant seem to get it to work. i can input the time fine but the program will not convert or output the right time. it keeps saying the variable afternoon is being used without being initialized, and then it outputs the same time that i input. can i get some hints on what may be wrong or something.
thanks
Code:#include<iostream> using namespace std; void military(int& mhour,int& mmin); void convert(int mhour, int& hours, char afternoon); void print(int hours,int mmin,char afternoon); int main() { int mhour,mmin,hours; char afternoon; military(mhour,mmin); convert(mhour,hours,afternoon); print(hours,mmin,afternoon); system("pause"); return 0; } void military(int& mhour,int& mmin) { cout<<"enter the hours in 24 hour format<0-24>\n"; cin>>mhour; cout<<"enter the minutes in 24 hour forman<00-59>\n"; cin>>mmin; } void print(int hours, int mmin,char afternoon) { cout<<"The time in 12 hour format is "<<hours<<":"<<mmin<<" "<<afternoon<<"m \n"; } void convert(int mhour, int& hours, char afternoon) { if(mhour>12) { hours=mhour; afternoon='A'; } if(mhour<=13) { hours=mhour-12; afternoon='P'; } } | http://cboard.cprogramming.com/cplusplus-programming/131275-military-time-converter-using-functions.html | CC-MAIN-2014-52 | refinedweb | 238 | 52.43 |
If you've been around the internet lately, you've most likely seen a nice subtle loading animation that fills page content before gracefully loading in.
Some of the social giants like Facebook even use this approach to give page loading a better experience. How can we do that with just some simple CSS?
- What are we going to build?
- Just want the snippet?
- Part 1: Creating our loading animation
- Part 2: Using our loading animation in a dynamic app
What are we going to build?
We're going to create a loading animation using a CSS class that you can apply to pretty much any element you want (within reason).
This gives you great flexibility to use it and makes the solution nice and simple with only CSS.
While the snippet is pretty small and you could just copy and paste it, I'll walk you through what's happening and an example of using it dynamically when loading data.
Just want the snippet?
You can grab it here!
Do I need to know how to animate before this tutorial?
No! We'll walk through in detail exactly what you need to do. In fact, the animation in this tutorial is relatively simple, so let's dig in!
Part 1: Creating our loading animation
This first part is going to focus on getting the loading animation together and seeing it on a static HTML website. The goal is to walk through actually creating the snippet. We'll only use HTML and CSS for this part.
Step 1: Creating some sample content
To get started, we'll want a little sample content. There's really no restrictions here, you can create this with basic HTML and CSS or you can add this to your Create React App!
For the walk through, I'm going to use HTML and CSS with a few examples of content that will allow us to see this in effect.
To get started, create a new HTML file. Inside that HTML file, fill it with some content that will give us the ability to play with our animation. I'm going to use fillerama which uses lines from my favorite TV show Futurama!
If you're going to follow along with me, here's what my project looks like:
my-css-loading-animation-static - index.html - main.css
Follow along with the commit!
Step 2: Starting with a foundation loading class
For our foundation, let's create a new CSS class. Inside our CSS file, let's add:
With that class, let's add it to a few or all of our elements. I added it to a few paragraphs, headings, and lists.
<p class="loading">For example...
That gives us a basic background, but we'd probably want to hide that text. When it's loading, we won't have that text yet, so most likely we would want to use filler text or a fixed height. Either way, we can set the color to transparent:
If you notice with list elements, whether you apply the class to the top level list element (
<ol> or
<ul>) vs the list item itself (
<li>), it looks like one big block. If we add a little margin to the bottom of all list elements, we can see a different in how they display:
li { margin-bottom: .5em; }
And now it's starting to come together, but it kind of just looks like placeholders. So let's animate this to look like it's actually loading.
Follow along with the commit!
Step 3: Styling and animating our loading class
Before actually animating our class, we need something to animate, so let's add a gradient to our
.loading { color: transparent; background: linear-gradient(100deg, #eceff1 30%, #f6f7f8 50%, #eceff1 70%); }
This is saying that we want a linear gradient that's tilted at 100 degrees, where we start with
#eceff1 and fade to
#f6f7f8 at 30% and back to
#eceff1 at 70%;
It's hard to see initially when it's still, it might just look like a glare on your computer! If you'd like to see it before moving on, feel free to play with the colors above to see the gradient.
Now that we have something to animate, we'll first need to create a keyframes rule:
@keyframes loading { 0% { background-position: 100% 50%; } 100% { background-position: 0 50%; } }
This rule when applied will change the background position from starting at 100% of the x-axis to 0% of the x-axis.
With the rule, we can add our animation property to our
.loading { color: transparent; background: linear-gradient(100deg, #eceff1 30%, #f6f7f8 50%, #eceff1 70%); animation: loading 1.2s ease-in-out infinite; }
Our animation line is setting the keyframe to
loading, telling it to last for 1.2 seconds, setting the timing function to
ease-in-out to make it smooth, and tell it to loop forever with
infinite.
If you notice though after saving that, it's still not doing anything. The reason for this is we're setting our gradient from one end of the DOM element to the other, so there's nowhere to move!
So let's try also setting a
background-size on our
.loading { color: transparent; background: linear-gradient(100deg, #eceff1 30%, #f6f7f8 50%, #eceff1 70%); background-size: 400%; animation: loading 1.2s ease-in-out infinite; }
Now, since our background expands beyond our DOM element (you can't see that part), it has some space to animate with and we get our animation!
Follow along with the commit!
Part 2: Using our loading animation in a dynamic app
Now that we have our loading animation, let's put it into action with a basic example where we fake a loading state.
The trick with actually using this is typically we don't have the actual content available, so in most cases, we have to fake it.
To show you how we can do this, we're going to build a simple React app with Next.js.
Step 1: Creating an example React app with Next.js
Navigate to the directory you want to create your new project in and run:
yarn create next-app # or npm init next-app
It will prompt you with some options, particularly a name which will determine the directory the project is created in and the type of project. I'm using
my-css-loading-animation-dynamic and the "Default Starter App".
Once installed, navigate into your new directory and start up your development server:
cd [directory] yarn dev # or npm run dev
Next, let's replace the content in our
pages/index.js file. I'm going to derive the content from the previous example, but we'll create it similar to how we might expect it to come from an API. First, let's add our content as an object above our return statement:
const content = { header: `So, how 'bout them Knicks?`, intro: `What are their names? I'm Santa Claus! This opera's as lousy as it is brilliant! Your lyrics lack subtlety. You can't just have your characters announce how they feel. That makes me feel angry! Good news, everyone! I've taught the toaster to feel love!`, list: [ `Yes! In your face, Gandhi!`, `So I really am important? How I feel when I'm drunk is correct?`, `Who are those horrible orange men?` ]
To display that content, inside
<main>, let's replace the content with:
<main> <h1>{ content.header }</h1> <p>{ content.intro }</p> <ul> { content.list.map((item, i) => { return ( <li key={i}>{ item }</li> ) })} </ul> </main>
And for the styles, you can copy and paste everything from our Part 1
main.css file into the
<style> tags at the bottom of our index page. That will leave us with:
With that, we should be back to a similar point we finished at in Part 1 except we're not actively using any of the loading animations yet.
Follow along with the commit!
Step 2: Faking loading data from an API
The example we're working with is pretty simple. You'd probably see this coming pre-generated statically, but this helps us create a realistic demo that we can test our loading animation with.
To fake our loading state, we're going to use React's
useState,
useEffect, and an old fashioned
setTimeout to preload some "loading" content, and after the
setTimeout finishes, update that content with our actual data. In the meantime, we'll know that we're in a loading state with a separate instance of
useState.
First, we need to import our dependencies. At the top of our
pages/index.js file, add:
import { useState, useEffect } from 'react';
Above our
content object, let's add some state:
const [loadingState, updateLoadingState] = useState(true); const [contentState, updateContentState] = useState({})
And in our content, we can update the instances to use that state:
<h1>{ contentState.header }</h1> <p>{ contentState.intro }</p> <ul> { contentState.list.map((item, i) => { return ( <li key={i}>{ item }</li> ) })} </ul>
Once you save and load that, you'll first notice we get an error because our
list property doesn't exist on our
contentState, so we can first fix that:
{ Array.isArray(contentState.list) && contentState.list.map((item, i) => { return ( <li key={i}>{ item }</li> ) })}
And after that's ready, let's add our
setTimeout inside of a
useEffect hook to simulate our data loading. Add this under our
content object:
useEffect(() => { setTimeout(() => { updateContentState(content); updateLoadingState(false) }, 2000); }, [])
Once you save and open up your browser, you'll notice that for 2 seconds you don't have any content and then it loads in, basically simulating loading that data asynchronously.
Follow along with the commit!
Step 3: Adding our loading animation
Now we can finally add our loading animation. So to do this, we're going to use our loading state we set up using
useState and if the content is loading, add our
Before we do that, instead of individually adding this class to each item in the DOM, it might make more sense to do so using CSS and adding the class to the parent, so let's do that first.
First, update the
.loading h1, .loading p, .loading li { color: transparent; background: linear-gradient(100deg, #eceff1 30%, #f6f7f8 50%, #eceff1 70%); background-size: 400%; animation: loading 1.2s ease-in-out infinite; }
Then we can dynamically add our class to our
<main> tag:
<main className={loadingState ? 'loading' : ''}>
Note: if you use Sass, you can manage your loading styles by extending the
.loading class in the instances you want to use it or create a placeholder and extend that!
And if you refresh the page, you'll notice it's still just a blank page for 2 seconds!
The issue, is when we load our content, nothing exists inside of our tags that can that would allow the line-height of the elements to give it a height.
But we can fix that! Because our
.loading class sets our text to transparent, we can simply add the word
const [contentState, updateContentState] = useState({ header: 'Loading', intro: 'Loading', list: [ 'Loading', 'Loading', 'Loading' ] })
Note: We can't use an empty space here because that alone won't provide us with a height when rendered in the DOM.
And once you save and reload the page, our first 2 seconds will have a loading state that reflects our content!
Follow along with the commit!
Some additional thoughts
This technique can be used pretty broadly. Being a CSS class makes it nice and easy to add wherever you want.
If you're not a fan of setting the Loading text for the loading state, another option is to set a fixed height. The only issue with that is that it requires more maintenance, tweaking the CSS to match what the content will look like once it loads in.
Additionally, this won't be perfect. More often than not, you won't know exactly how much copy you have on a page. The goal is to simulate and hint that there will be content and that it's currently loading.
Subject: [boost] Update for the Cxx dual library
From: Edward Diener (eldiener_at_[hidden])
Date: 2016-07-22 11:13:53
The Cxx dual library, or CXXD for short, is a C++ macro header-only
library which chooses between using a Boost library or its C++ standard
equivalent library for a number of different C++ implementations, while
using the same code to program either choice. An 'implementation',
called a 'dual library', is a Boost library which has a C++ standard
library equivalent whose public interfaces are nearly the same in both
cases.
I have updated the Cxx dual library, for those who prefer to minimize
their use of macros in source code, by providing an alternate method to
the use of macros for including the appropriate header files and
specifying the appropriate namespace for a dual library. In this
alternate method including the appropriate header files is done by
including an implementation header file, and specifying the appropriate
namespace is done by using a C++ namespace alias. The alternate method,
called the alias mode, is fully documented.
You can get the latest version of the Cxx dual library at. Instructions for using the library
are in the README.md file. PDF documentation is available in
cxx_dual.pdf, HTML doc is available online at and you can generate local HTML
documentation using the doc jam file.
Questions, comments, bug reports are always welcome.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2016/07/230462.php | CC-MAIN-2020-34 | refinedweb | 253 | 54.12 |
I use the Pi 3 platform with Pi4J. For my project I need to take control of an SP5055 chip - a synthesiser for TV tuners.
In my case it is the synthesiser in a Lawmate RX 1.2GHz (analog video receiver).
So I try to communicate with the SP5055 over I2C. According to the datasheet, reading and writing are carried out at different addresses.
To read I need to set the last bit to "1", and to write, to "0".
My Pi 3 detects this device as two different devices: 0x61 and 0x63.
But those are the wrong addresses by my calculation...
On P3 I have 0.67 volts. By the datasheet that means my address for write is 11000100 (0xC4) and for read 11000101 (0xC5).
This is the code I tried:
I found an example of this in C with the Tiny library and after that one in Python. But they use Start and Stop commands and simply put the bytes one by one. Each example uses different addresses.
import java.util.Scanner;
import com.pi4j.io.i2c.I2CBus;
import com.pi4j.io.i2c.I2CDevice;
import com.pi4j.io.i2c.I2CFactory;

public class SP5055 {
    static I2CDevice device;

    public static void main(String args[]) throws Exception {
        I2CBus bus = I2CFactory.getInstance(I2CBus.BUS_1);
        device = bus.getDevice(0x63); // I tried 0x61, did not help; 0xC4 gives an error
        Thread.sleep(1000);
        Scanner scan = new Scanner(System.in);
        while (true) {
            System.out.println("Enter needed frequency in MHz");
            String inStr = scan.next();
            if (inStr.equals("exit"))
                break;
            long chanF = Long.parseLong(inStr);
            chanF = chanF * 1000000;
            // calculate divider according to the local oscillator (in my case 4MHz)
            long divider = chanF / 125000;
            byte divLSB = (byte) divider;
            byte divMSB = (byte) (divider >> 8);
            divMSB = (byte) (divMSB & 0x7F); // first bit must be "0"
            byte[] data = { divMSB, divLSB, (byte) 0x8E, (byte) 0x00 };
            // here I tried to put 0xC4 or 0x63 as a first byte before divMSB, did not help
            device.write(data);
        }
    }
}
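One likely source of the address confusion may be the 8-bit versus 7-bit convention: datasheets such as the SP5055's print full 8-bit bus bytes including the R/W bit, while Linux and Pi4J expect the 7-bit device address and set the R/W bit themselves. A small sketch of the conversion (class and method names invented here for illustration):

```java
// Converting between the datasheet's 8-bit bus bytes and the 7-bit
// addresses Linux/Pi4J expect. 0xC4 (write) and 0xC5 (read) are the
// same 7-bit device address 0x62 with the R/W bit appended.
class Sp5055Address {
    static int sevenBit(int eightBitAddress) {
        return eightBitAddress >> 1;           // drop the R/W bit
    }
    static int writeByte(int sevenBitAddress) {
        return sevenBitAddress << 1;           // R/W bit = 0 (write)
    }
    static int readByte(int sevenBitAddress) {
        return (sevenBitAddress << 1) | 1;     // R/W bit = 1 (read)
    }
}
```

So by this math the device should appear at 0x62, which makes the detected 0x61/0x63 pair worth double-checking against the programming voltage on the chip's address-select pin.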
Please, if somebody knows how to do it, help me.
Thank you! | https://www.raspberrypi.org/forums/viewtopic.php?t=212972&p=1312611 | CC-MAIN-2019-04 | refinedweb | 293 | 68.77 |
Red Hat Bugzilla – Bug 972292
lgetxattrs can't show the file attribute list with ntfs FS in rhel7
Last modified: 2014-04-25 05:21:14 EDT
Description of problem:
lgetxattrs can't show the file attribute list with ntfs FS though execute no error
Version-Release number of selected component (if applicable):
libguestfs-1.22.2-1.el7.x86_64
How reproducible:
100%
Steps to Reproduce:
# guestfish -N fs:ntfs
Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
><fs> trace 1
><fs> mount-options user_xattr /dev/sda1 /
libguestfs: trace: mount_options "user_xattr" "/dev/sda1" "/"
libguestfs: trace: mount_options = 0
><fs> touch /test.txt
libguestfs: trace: touch "/test.txt"
libguestfs: trace: touch = 0
><fs> lsetxattr security.name "hello" 5 /test.txt
libguestfs: trace: lsetxattr "security.name" "hello" 5 "/test.txt"
libguestfs: trace: lsetxattr = 0
><fs> lsetxattr security.type "ascii file" 10 /test.txt
libguestfs: trace: lsetxattr "security.type" "ascii file" 10 "/test.txt"
libguestfs: trace: lsetxattr = 0
><fs> lgetxattrs /test.txt
libguestfs: trace: lgetxattrs "/test.txt"
libguestfs: trace: lgetxattrs = <struct guestfs_xattr_list *>
><fs>
Actual results:
lgetxattrs don't have output
Expected results:
lgetxattrs can show the attribute list
><fs> lgetxattrs /test.txt
[0] = {
attrname: security.name
attrval: hello
}
[1] = {
attrname: security.type
attrval: ascii file
}
Additional info:
1. lgetxattrs work with ext FS
2. has same issue with libguestfs-1.20.8-4.el6.x86_64 in rhel6
I'm pretty sure I've seen the same bug in ntfs-3g itself.
The problem was that ntfs-3g wouldn't return all the xattrs
when you use listxattr(2).
Should we expect that security.* xattrs can be set arbitrarily?
The security.* namespace is reserved by kernel security modules.
See attr(5) for details on the reserved security.* namespace.
So the fact this worked for ext4 is just luck.
If you use the user.* namespace instead, then everything works
fine even on NTFS:
$ guestfish -N fs:ntfs -m /dev/sda1:/:user_xattr <<EOF
touch /test.txt
lsetxattr user.name "hello" 5 /test.txt
lsetxattr user.type "ascii file" 10 /test.txt
lgetxattr /test.txt user.name
echo
lgetxattrs /test.txt
EOF
hello
[0] = {
attrname: user.name
attrval: hello
}
[1] = {
attrname: user.type
attrval: ascii file
}
So I would say this is not a bug.
I looked at the description again, and it's not expected that
you should be able to set arbitrary security.* xattrs. That
namespace is reserved for the kernel. Try setting user.* xattrs
instead -- those should work. | https://bugzilla.redhat.com/show_bug.cgi?id=972292 | CC-MAIN-2018-26 | refinedweb | 420 | 54.29 |
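For anyone working with listxattr(2) output directly rather than through guestfish: the attribute names come back as a single buffer of concatenated NUL-terminated strings. A hedged C sketch of walking such a buffer — the helper name is invented here and is not libguestfs code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Count the attribute names in a listxattr(2)-style buffer:
 * a concatenation of NUL-terminated strings, len bytes total. */
static size_t count_xattr_names(const char *buf, size_t len)
{
    size_t n = 0, i = 0;
    while (i < len) {
        n++;
        i += strlen(buf + i) + 1;  /* skip the name and its NUL */
    }
    return n;
}
```

A buffer holding "user.name" and "user.type" is 20 bytes and yields a count of 2, matching the two-entry lgetxattrs output shown above.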
Re: wcwidth of soft hyphen
Hi, Martijn van Duren wrote on Thu, Apr 01, 2021 at 09:30:36AM +0200: > When it comes to these discussions I prefer to go back to the standards I would propose an even more rigorous stance: not only go back to the standards, but use whatever the Unicode data files (indirectly, via the Perl
Re: mandoc: -Tlint: search /usr/local/man as well
Hi, Klemens Nanni wrote on Sun, Apr 04, 2021 at 03:54:43PM +0200: > On Sun, Apr 04, 2021 at 03:42:03PM +0200, Tim van der Molen wrote: >> Doesn't mandoc -Tlint -Wstyle do what you want? > I don't think so. Oddly enough, `-Wstyle' does not do what I would > expect: it omits STYLE messages
Re: Patch for crypt(3) man page.
Hi, this page is a mess. It is full of unclear wordings, in some cases verging incorrect statements. At the same time, parts of it are wordy. Here is an attempt to start fixing it. I refrained from trying to explain $2a$ (as suggested by sthen@) or to document the missing bcrypt_gensalt(3) in
Re: man: help pagers recognise HTML files as such
Hi Klemens, Klemens Nanni wrote on Sat, Jan 16, 2021 at 10:31:49AM +0100: > On rare occasions I'd like to use the following idiom to read manuals in > browsers, mostly to better readability and navigation of long sections: > > MANPAGER=netsurf-gtk3 man -Thtml jq > > (jq(1) has lots of
Re: syspatch exit state
Hi Antoine, Antoine Jacoutot wrote on Mon, Dec 07, 2020 at 01:39:30PM +0100: > On Mon, Dec 07, 2020 at 01:30:55PM +0100, Ingo Schwarze wrote: >> Antoine Jacoutot wrote on Mon, Dec 07, 2020 at 01:01:27PM +0100: >>> I just tested this change and it seems to work: [...] >>
Re: syspatch exit state
Hello Antoine, Antoine Jacoutot wrote on Mon, Dec 07, 2020 at 01:01:27PM +0100: > I just tested this change and it seems to work: I did not repeat my testing, but here is some quick feedback purely from code inspection: The proposed code change makes sense to me. The proposed manual page text
Re: syspatch exit state
Hi Antoine, Antoine Jacoutot wrote on Mon, Dec 07, 2020 at 09:48:36AM +0100: > On Sun, Dec 06, 2020 at 10:52:37PM +0100, Alexander Hall wrote: >> On December 6, 2020 8:13:26 PM GMT+01:00, Antoine Jacoutot wrote: >>> On Sun, Dec 06, 2020 at 05:20:31PM +, Stuart Henderson wrote: On
Re: sio_open.3: clarify what sio_start() does
Hi Alexandre, Alexandre Ratchov wrote on Fri, Nov 27, 2020 at 07:05:27PM +0100: > this wording is shorter and more precise and complete. This looks good mdoc(7)-wise, so go ahead, but please consider the two nits below when committing. Yours, Ingo > Index: sio_open.3 >
Re: AUDIORECDEVICE environment variable in sndio lib
Hi Solene, sorry if i misunderstand because i did not fully follow the thread... Solene Rapenne wrote on Tue, Nov 17, 2020 at 07:36:38PM +0100: > I added the new variable after MIDIDEVICE in the ENVIRONMENT section > to keep order of appearance in the document. Usually, we order the
Re: Import seq(1) from FreeBSD
contain UTF-8 and it just works. No need to call setlocale(3) or inspect LC_*. Yours, Ingo Todd C. Miller wrote on Mon, Nov 16, 2020 at 10:08:08AM -0700: > On Mon, 16 Nov 2020 16:14:31 +0100, Ingo Schwarze wrote: >> are you really sure this is a good idea? The version you sent is
Re: Import seq(1) from FreeBSD
Hi Todd, are you really sure this is a good idea? The version you sent is wildly incompatible with GNU sed. So we add a non-standard utility that exhibits different behaviour on different systems even though a standard utility already exists for the purpose? Is this needed for porting work?
Re: accton(8) requires a reboot after being enabled
Hi Jason, Jason McIntyre wrote on Mon, Nov 02, 2020 at 05:29:37PM +: > - adding EXIT STATUS makes sense. i agree. So i added just the .Sh and .Ex lines. All the rest (both regarding "file" and "install") seems controversial and hardly worth have a long discussion, so i dropped all the
Re: accton(8) requires a reboot after being enabled
Hi Theo, Theo de Raadt wrote on Fri, Oct 30, 2020 at 12:10:41PM -0600: > Yes, that diff is a whole bunch of TOCTUO. > > If this was going to be changed, it should be in the kernel. > > But I don't know if it should be changed yet, which is why I asked > a bunch of questions. > > Stepping back
Re: accton(8) requires a reboot after being enabled
Hi Solene, Solene Rapenne wrote on Fri, Oct 30, 2020 at 06:34:09PM +0100: > Following diff changes accton(8) behavior. > > If the file given as parameter doesn't exists, accton will create it. > Then it will check the ownership and will make it root:wheel if > it's different. > > I added a
Re: accton(8) requires a reboot after being enabled
Hi Theo, Theo de Raadt wrote on Fri, Oct 30, 2020 at 09:59:09AM -0600: > With a careful reading of the current manual page, everything is there > and it is accurate. > > With an argument naming an existing file > > > Ok so let's create it with touch.
Re: [PATCH] Fix link in Porting Guide
Hi Martin, Martin Vahlensieck wrote on Wed, Oct 28, 2020 at 09:02:21PM +0100: > This refers to the libc function. Committed, thanks. > P.S.: I noticed that e.g. sysmerge(8) is mentioned like this but not a > link. Is this intentional? Probably not, i added a few more links while there.
Re: [PATCH] Mention unsupported stacking in softraid(4)
Hi Jeremie and Filippo, Jeremie Courreges-Anglas wrote on Sun, Oct 25, 2020 at 03:05:04PM +0100: > On Sun, Oct 25 2020, "Filippo Valsorda" wrote: >> Based on the text in faq14.html, but using the manpage language. > Makes sense. I'm not sure .Em is useful here, though. Indeed. We use .Em
Re: diff: fixing a few broken links on 68.html, and other minor things
Hi, Andras Farkas wrote on Wed, Oct 21, 2020 at 09:19:43PM -0400: > While reading 68.html I noticed some of the man page links pointed to > the man pages in the wrong section of the manual. (at least, given the > manual section numbers listed next to them in the 68.html text) > I decided to fix
Re: libexec/security: don't prune mount points
Hi Todd, Todd C. Miller wrote on Wed, Oct 07, 2020 at 09:36:33AM -0600: > The recent changes to the daily security script will result in it > not traversing file systems where the parent mount point is mounted > with options nodev,nosuid but the child is mounted with setuid > enabled. > > For
Re: Make df output more human friendly in daily(8)
Hi Daniel, my OK still stands, except that you went too far in one respect in the manual page, see below. Yours, Ingo Daniel Jakots wrote on Fri, Oct 02, 2020 at 03:41:31PM -0400: > Index: share/man/man8/daily.8 > === > RCS
Re: Make df output more human friendly in daily(8)
Hi Daniel, Daniel Jakots wrote on Fri, Oct 02, 2020 at 02:16:57PM -0400: > On Fri, 2 Oct 2020 19:55:53 +0200, Ingo Schwarze wrote: >> Daniel Jakots wrote on Thu, Oct 01, 2020 at 10:32:31PM -0400: >>> Currently daily(8) runs `df -ikl`. >> By default, it does not. It o
Re: Make df output more human friendly in daily(8)
Hi, Daniel Jakots wrote on Thu, Oct 01, 2020 at 10:32:31PM -0400: > Currently daily(8) runs `df -ikl`. By default, it does not. It only does that if you set VERBOSESTATUS. I would prefer deleting the VERBOSESTATUS parts completely, strictly enforcing the principle "daily(8) only produces
Re: drop support for afs, nnpfs, and procfs from security(8)
Hi Todd, Todd C. Miller wrote on Wed, Sep 16, 2020 at 01:36:09PM -0600: > On Wed, 16 Sep 2020 18:17:36 +0200, Ingo Schwarze wrote: >> Does anyone think that explicitely excluding these file system >> types might still be useful, or is the following simplification >> OK?
drop support for afs, nnpfs, and procfs from security(8)
Hi, by chance, i noticed that security(8) is careful to avoid scanning filesystems of the types "afs", "nnpfs", and "procfs". According to "ls /sbin/mount*", no such file systems are supported, and the only page "man -ak any=afs any=nnpfs any=procfs" brings up seems to be sshd_config(5) talking
Re: route.8, remove unprinted text
Hi Denis, Denis Fondras wrote on Thu, Sep 10, 2020 at 10:09:14PM +0200: > I can't see where these two lines are printed. Not OK, please do not commit that diff. You are correct that these two macros produce no output to the terminal, but they do produce output to the ctags(1) file and they
Re: exFAT support
Hi, in addition to what Bryan said... This message is wildly off-topic on tech@. If you reply, please reply to misc@. Quoting from (please read that!): Developer lists: [...] tech@openbsd.org Discussion of technical topics for OpenBSD developers and
Re: ksh [emacs.c] -- simplify isu8cont()
Hi, Martijn van Duren wrote on Sat, Jul 25, 2020 at 09:54:53PM +0200: > This function is used throughout the OpenBSD tree and I think it's > fine as it is. This way it's clearer to me that it's about byte > 7 and 8 and not have to do the math in my head to check if we > might have botched it. >
Re: join(1) remove redundant memory copy
Hi Martijn, this is a nice simplification, OK schwarze@. A few nits: * The MAXIMUM() macro is now unused, so i prefer that you delete the definition. * The second getline(3) argument should be size_t, not u_long, so change that in the struct declaration (it's not used anywhere else).
Re: switch default MANPAGER from more(1) to less(1)
Hi, ropers wrote on Mon, Jul 20, 2020 at 05:54:46AM +0100: > Ah, I see where you're coming from, Ingo. You've dropped the idea of > testing for less(1) in non-portable mandoc because we know less(1) is > in base.[1] Configuration testing is never needed in a base system. It may sometimes
Re: switch default MANPAGER from more(1) to less(1)
Hi Jason & Theo, thanks for the feedback! Jason McIntyre wrote on Sun, Jul 19, 2020 at 05:02:02PM +0100: > i guess the argument in favour of more(1) would be that it is part of > posix, even if optional, where less(1) is not. so it makes sense to > choose a command most likely to work on most
switch default MANPAGER from more(1) to less(1)
Hi, currently, if neither the MANPAGER nor the PAGER environment variable is set, man(1) uses "more -s" as the manual page pager. I am quite sure that the only reason i did this is that i thought this behaviour was required by POSIX. But it is not:
Re: LC_MESSAGES in xargs(1)
; > Minor nits inline, but either way OK martijn@ Thanks to all three of you for checking, i just committed the patch. > On Thu, 2020-07-16 at 21:49 +0200, Ingo Schwarze wrote: [...] >> * There is no need to put a marker "return value ignored on >>purpose" wher
Re: LC_MESSAGES in xargs(1)
Hi Jan, Jan Stary wrote on Thu, Jul 16, 2020 at 09:45:44AM +0200: > Does xargs need to set LC_MESSAGES? As it stands, your patch doesn't make much sense. It is true that it doesn't change behaviour, but it leaves more cruft behund than it tidies up. That said, i agree that we will never
Re: disable libc sys wrappers?
Hi Theo, Theo de Raadt wrote on Fri, Jul 10, 2020 at 10:02:46AM -0600: > I also don't see the point of the leading _ > > Where does that come from? > > This isn't a function namespace. What does it signify, considering > no other environment variable uses _ prefixing. $ man -kO Ev Ev~^_ |
Re: disable libc sys wrappers?
Hi Mark, Mark Kettenis wrote on Thu, Jul 09, 2020 at 07:59:25PM +0200: > Ingo Schwrze wrote: >> Now that i look at that, and at what you said previously, is it even >> plausible that some user ever wants "-t c" ktracing but does >> specifically *not* want to see clock_gettime(2) and friends?
Re: disable libc sys wrappers?
Hi Theo, Theo de Raadt wrote on Thu, Jul 09, 2020 at 11:08:38AM -0600: > Ingo Schwarze wrote: >> So, what about >> LD_KTRACE_GETTIME >> or a similar environment variable name? > That naming seems logical. > > If it is mostly hidden behind a ktrace flag (-T ?
Re: disable libc sys wrappers?
Hi Theo, Theo de Raadt wrote on Thu, Jul 09, 2020 at 09:47:26AM -0600: > This time, they become invisible. [...] > There are many circumstances where ltrace is not suitable. [...] > We fiddle with programs all the time, to inspect them. Fair enough, then. The following variables already exist
Re: disable libc sys wrappers?
Hi, i wonder whether there is a layering violation going on here. >From the user perspective, whether an API call is a syscall or a mere library function doesn't make any difference, it's an implementation detail. API calls have often moved from being library functions to syscall and vice
rewrite printf(3) manual page
Hi, the following is the execution of an idea brought up by deraadt@. There are four problems with the printf(3) manual page: 1. While some modifiers (like "argno$" and "width") do the same for all conversion specifiers, the effects of some other modifiers varies wildly. For example,
Re: snmp(1) initial UTF-8 support
Hi Martijn, sorry for the delay, now i finally looked at the function smi_displayhint_os() from the line "if (MB_CUR_MAX > 1) {" to the end of the corresponding else clause. IIUC, what that code is trying to do is iterate the input buffer "const char *buf" of length "size_t buflen". Before
Re: Add man parameter for HTML pager
Hi Abel, i committed a tweaked version of your patch with these changes: - pass pointers to structs around, don't pass structs by value - no need for an additional automatic variable for the URI - no need for comments similar to: i++; /* increment i */ - only use file:// when -O tag was
Re: Add man parameter for HTML pager
Hi Abel, Romero Perez, Abel wrote on Mon, Jun 15, 2020 at 03:06:26AM +0200: > Romero Perez, Abel wrote: >> I tried to view the manuals in HTML format with lynx, but I couldn't I assume what you mean is that you could, but the "-O tag" option had no effect. >> because lynx and links can't
Re: snmp(1) initial UTF-8 support
Hi Martijn, Martijn van Duren wrote on Tue, May 19, 2020 at 10:25:37PM +0200: > So according to RFC2579 an octetstring can contain UTF-8 characters if > so described in the DISPLAY-HINT. One of the main consumers of this is > SnmpAdminString, which is used quite a lot. > > Now that we trimmed
Re: locale thread support
to avoid such issues. To make support desirable, it might be sufficient that xlocale.h is one reasonable way and that some amounts of useful software actually use it. Yours, Ingo Marc Espie wrote on Thu, May 28, 2020 at 05:52:12PM +0200: > On Tue, May 26, 2020 at 12:13:02AM +0200, Ingo Schwa
Re: Enable building wsmoused on arm64 and armv7
Hi, Frederic Cambus wrote on Thu, May 28, 2020 at 10:44:59PM +0200: > On Thu, May 28, 2020 at 10:52:44AM -0600, Theo de Raadt wrote: >> -MANSUBDIR= i386 amd64 alpha >> +MANSUBDIR= i386 amd64 arm64 armv7 alpha >> >> Actually, I suggest making this a MI man page. Delete that line, >> and
Re: locale thread support
Hi, my impression is there are two totally independent topics in this thread. 1. Whether and how to make things safer, ideally terminating the program in a controlled manner, if it uses functions that are not thread-safe in an invalid manner. I'm not addressing that topic in this
Re: Fix manpage links in upgrade67.html
Hi Andre, Andre Stoebe wrote on Sat, May 23, 2020 at 06:10:36PM +0200: > following patch fixes two manpage links that point to the wrong section. Committed, thanks. Ingo > Index: faq/upgrade67.html > === > RCS file:
Re: vi.beginner diff and analysis (vi.advanced to follow)
Hi Andras, Andras Farkas wrote on Sun, May 24, 2020 at 01:27:18AM -0400: > I went through vi.beginner. Thanks. > It works both with vi's regular settings, > and with the settings applied via EXINIT in vi.tut.csh. Good. > I have a diff attached. I was mostly light and gentle with my >
Re: Add CAVEATS to ldom.conf(5) man page
Hi Jason, Jason McIntyre wrote on Wed, May 20, 2020 at 06:38:18AM +0100: > i'm fine with the text, but does it really warrant a CAVEATS section? it > sounds like it should just be part of "this is how it works" text. I may not be the best person to judge this, but as far as i understand, this
Re: Add CAVEATS to ldom.conf(5) man page
Hi Kurt, Kurt Mosiejczuk wrote on Tue, May 19, 2020 at 07:49:56PM -0400: > On Mon, May 18, 2020 at 08:12:00PM +0200, Klemens Nanni wrote: >> On Mon, May 18, 2020 at 01:20:07PM -0400, Kurt Mosiejczuk wrote: >>> Learning how LDOMs work on this T4-1 and we only create 8 devices >>> (each /dev/ldom*
Re: ddb(4): missing tags
Hi Anton, Anton Lindqvist wrote on Sun, May 17, 2020 at 10:56:18AM +0200: > The ddb(4) manual documents a couple of commands which can be > abbreviated. The diff below adds explicit tags for such commands which > in turn makes it possible to jump to for instance `examine' from within > your
Re: WireGuard patchset for OpenBSD
Hi Matt, Matt Dunwoodie wrote on Wed, May 13, 2020 at 01:56:51AM +1000: > On Tue, 12 May 2020 17:36:15 +0200 > Ingo Schwarze wrote: >> I feel somewhat concerned that you recommend the openssl(1) command >> for production use. As far as i understand, the LibreSSL developers &
Re: WireGuard patchset for OpenBSD
Hi Matt, again, documentation is not critical for the initial commit, but why not provide feedback right away. As we already have an ifconfig(8) manual page, i decided to simply send an updated version of the ifconfig.8 part of the diff because sending around diffs of diffs feels awkward, and
Re: WireGuard patchset for OpenBSD
Hi Matt, thanks for doing all this work. Note that i cannot provide feedback on the code or concepts, and also note that when the code is ready, a developer can commit it even if there are still issues to sort out with the documentation. We do value good documentation, but the exact point in
Re: Broken links to the usb.org document library
Hi, clematis wrote on Tue, May 12, 2020 at 03:06:40AM +0200: > - Should we update those links? For the manual pages, that seems clear: if some document is worth linking to (which i assume it is unless told otherwise, if a link is currently present in a manual page), then we should provide the
Re: Diff for www:mail
Salut Stephane, b...@stephane-huc.net wrote on Mon, May 04, 2020 at 03:51:33PM +0200: > Here a diff for www page: mail > Please, review this diff to add French ML Committed. Merci, Ingo > Index: mail.html > === > RCS file:
Re: Mention /etc/examples/ in those config files manpages + FILES short description
Hi, clematis wrote on Fri, May 01, 2020 at 11:37:51AM +0200: > On Fri, May 01, 2020 at 06:52:27AM +0100, Jason McIntyre wrote: >> i don;t understand your reference to 28 out of 41 files. i cannot see >> where we added any expanded FILES entries. can you provide a summary of >> these
Re: [PATCH] printf: Add %B conversion specifier for binary
Hi, Alejandro Colomar wrote on Mon, Apr 27, 2020 at 08:26:38PM +0200: > This patch adds a new feature to the ``printf`` family of functions: > ``%B`` conversion specifier for printing unsigned numbers in binary. No. We do not want non-standard extensions to standard functions unless they
Re: Update /etc/examples/doas.conf and doas.conf(5)
Hi, clematis wrote on Sat, Apr 25, 2020 at 11:34:14PM +0200: > But then based on your feedback I wonder if there's even any > value of an example file which gives not much more than an already > well documented manpage? We had that discussion among developers at least twice in the past. If i
Re: Update /etc/examples/doas.conf and doas.conf(5)
Hi, i strongly dislike what you are doing here. Not just because of one particular line, but in general. clematis wrote on Sat, Apr 25, 2020 at 07:41:40PM +0200: > Looking arround in /etc/examples/ I felt like some of those files > aren't as "verbose" as others. The shorter the better. Yes,
Re: www/advisories unreferenced files - should they go to the attic?
Hi Theo, Theo de Raadt wrote on Fri, Apr 24, 2020 at 10:57:23AM -0600: > Somewhere along the line, the web pages were changed to no longer > reference those pages, and that is a shame. > > It is a shame that the old errata pages don't point at those files. > > They aren't quite in the same
Re: [PATCH] [src] usr.bin/audioctl/audioctl.8, usr.bin/mixerctl/mixerctl.8 - manpages moved to section 8, mark them as such
Hi Raf, Raf Czlonka wrote on Thu, Apr 23, 2020 at 12:58:41AM +0100: > Recently moved manpages bear section 1 number - update accordingly. Committed, thanks. Ingo > Index: usr.bin/audioctl/audioctl.8 > === > RCS file:
Re: Update www/support.html
Hi, clematis wrote on Sat, Apr 18, 2020 at 11:26:51AM +0200: > I went through the sites listed on support.html to check if they were > still valid. Caught a few that needs removal or update. Thank you for doing that work! > Please find the details below and the changes in the diff attached. >
Re: implement locale(1) charmap argument
Hi Stefan and Todd, Stefan Sperling wrote on Fri, Apr 17, 2020 at 08:55:29AM +0200: > On Thu, Apr 16, 2020 at 09:35:18PM +0200, Ingo Schwarze wrote: >>$ locale -m >> UTF-8 >>$ locale charmap >> UTF-8 >>$ LC_ALL=C locale charmap >> US-AS
implement locale(1) charmap argument
dex: locale.1 === RCS file: /cvs/src/usr.bin/locale/locale.1,v retrieving revision 1.7 diff -u -p -r1.7 locale.1 --- locale.126 Oct 2016 01:00:27 - 1.7 +++ locale.116 Apr 2020 19:04:25 - @@ -1,6 +1,6 @@ .\
Re: Update www/groups.html
Hi, clematis wrote on Thu, Apr 16, 2020 at 12:53:19PM +0200: > Hello, > Here's a few suggestions to update www/groups.html > I was checking if all those links were still existing/active/valid. Thanks, both for doing the checks and providing details about what exactly you did. > Does anyone has
Re: [patch]: Change kern_unveil to [] array derefs
Hi Martin, Martin Vahlensieck wrote on Sat, Apr 04, 2020 at 07:16:46PM +0200: > This makes these array derefs consistent with the others in the file. > Also I believe this is the preferred way to do this. That depends. In mandoc, i certainly prefer "pointer + offset" over "[offest]", arguing
Re: [patch] mandoc: Remove argument names from function prototypes
Hi Martin, Martin Vahlensieck wrote on Thu, Apr 02, 2020 at 10:57:04AM +0200: > I think these are superfluous. Correct, and it is irritating to have a general style of not using argument names in prototypes in mandoc, but then a few scattered names here and there, so i committed your patch.
Re: bug? in getopt(3) + [PATCH] + testcase
Hi, 0xef967...@gmail.com wrote on Wed, Mar 25, 2020 at 05:58:22PM +0200: > To wrap this up and get if off my plate, here is the updated patch, [...] > --- getopt-long.c~2020-03-12 02:23:29.028903616 +0200 > +++ getopt-long.c 2020-03-15 23:46:07.988119523 +0200 > @@ -418,15 +418,7 @@ >
Re: [patch] Tweak libssl manpages
Hi Martin, Martin wrote on Sun, Mar 29, 2020 at 10:22:15PM +0200: > It seems these are just a coded form for no return value, > unless this is some libssl slang I am not aware of. I don't think "does not provide diagnostic information" has any special meaning in libssl. Also, the word
Re: [patch] Remove "do not return a value" from libcrypto/libssl manpages
Hi Martin, Martin Vahlensieck wrote on Sun, Mar 29, 2020 at 01:51:58AM +0100: > I found some more. Thanks, committed, also including in lh_stats(3). Ingo > Index: libcrypto/man/RC4.3 > === > RCS file:
Re: [patch] ERR_print_errors.3
Hi Martin, thanks for reporting the issue in the manual page. Martin Vahlensieck wrote on Sat, Mar 28, 2020 at 09:06:54PM +0100: > Unless I miss something ERR_print_errors_cb returns no value as well. Actually, i committed about the opposite, for the reasons explained in the commit message.
Re: find.1: Markup primaries with Fl not Cm for easier tags
Hi Klemens, Klemens Nanni wrote on Sat, Mar 21, 2020 at 12:49:20AM +0100: > On Fri, Mar 20, 2020 at 04:53:19PM +0100, Ingo Schwarze wrote: >> I don't feel very strongly about that, but i think .Cm does make >> slightly more sense for primaries than .Fl (and .Ic is probably >
Re: find.1: Markup primaries with Fl not Cm for easier tags
Hi Klemens, Klemens Nanni wrote on Fri, Mar 20, 2020 at 12:12:39AM +0100: > In both command line usage and manual output format, find's options > and primaries behave the same, Not really. In the POSIX sense, options are indeed options while primaries are arguments. That has implications, for
Re: bt.5: Fix time() description
Hi Martin, hi Klemens, Martin Pieuchot wrote on Wed, Mar 18, 2020 at 09:06:24PM +0100: > On 18/03/20(Wed) 20:45, Klemens Nanni wrote: >> It takes a format string, e.g. >> >> syscall:sysctl:entry { >> time("%+\n") >> } I can't comment on the content of bt(5). > This is
Re: openssl.1: Tag command names
Hi Klemens, Ingo Schwarze wrote on Tue, Feb 18, 2020 at 04:30:53PM +0100: > While i don't strongly object to the patch, it might be worth holding > off a bit on manually tagging of .Sh given that even automatic > tagging isn't done for that macro yet. Which would mean postponing >
Re: openssl.1: Tag command names
Hi Klemens, Ingo Schwarze wrote on Tue, Feb 18, 2020 at 04:30:53PM +0100: > Klemens Nanni wrote on Mon, Feb 17, 2020 at 05:19:27PM +0100: >> Patch was done with a VIM macro by adding a new line after each `.Sh' >> line with the respective name but lowercased, so no typos in the a
Re: Update en_US.UTF-8 to Unicode 12.1
Hi Andrew, ironically, the patch did not apply for me because you sent it with Content-Type: text/plain; charset=iso-8859-1 so when i saved it to disk, it remained in ISO-Latin-1. In the license comment you are changing, there is a U+00A9 COPYRIGHT SIGN encoded as UTF-8. I suggest you change
Re: openssl.1: Tag command names
Hi, Steffen Nurpmeso wrote on Tue, Feb 18, 2020 at 04:52:48PM +0100: > i just want to add that there is still the mdocmx mdoc macro > extension available, and is working fine for more than half > a decade. I have not ported that to groff 1.22.4, but it is > available for groff 1.22.3. It can
Re: openssl.1: Tag command names
Hi Klemens, Klemens Nanni wrote on Mon, Feb 17, 2020 at 05:19:27PM +0100: > I'd like to commit this soon, it allows me to jump to the command I'm > looking for, e.g. ":tx509" shows me the synopsis right away. > > FWIW, some Linux distributions ship with separate manuals, e.g. x509(1SSL). Yes,
Re: bgpd.conf.5: Tag groups
Hi Klemens, Klemens Nanni wrote on Sun, Feb 16, 2020 at 09:32:52PM +0100: > On Sun, Feb 16, 2020 at 09:25:51PM +0100, Klemens Nanni wrote: >> Going through the example with bgpd.conf(5) side by side, jumping to the >> "group" tag shows >> >> group descr Neighbors in this group will be
Re: dd(1) wording and style
Hi Jan, Jan Stary wrote on Sat, Feb 15, 2020 at 10:51:28AM +0100: > On Feb 14 17:37:27, schwa...@usta.de wrote: >> Jason McIntyre wrote on Fri, Feb 14, 2020 at 07:28:59AM +: >>> On Thu, Feb 13, 2020 at 11:25:07PM +0100, Jan Stary wrote: -.It Cm seek= Ns Ar n +.It Cm seek Ns = Ns Ar
Re: dd bs= supercede ibs= and obs=
Hi, Jan Stary wrote on Sat, Feb 15, 2020 at 11:07:04AM +0100: > On Feb 14 17:04:51, schwa...@usta.de wrote: >> Jason McIntyre wrote on Fri, Feb 14, 2020 at 07:28:59AM +: >>> On Thu, Feb 13, 2020 at 11:25:07PM +0100, Jan Stary wrote: * Fix a factual error in the description of bs: it
Re: extern already declared
Hi, Todd C. Miller wrote on Sun, Feb 09, 2020 at 09:49:35AM -0700: > On Sun, 09 Feb 2020 17:46:51 +0100, Jan Stary wrote: >> Whenever unistd.h declares getopt(3), it also declares >> the extern optind and optarg, so files including unistd.h >> don't need to declare those themselves, right? >
Re: dd(1) wording and style
Hi, Jason McIntyre wrote on Fri, Feb 14, 2020 at 07:28:59AM +: > On Thu, Feb 13, 2020 at 11:25:07PM +0100, Jan Stary wrote: >> -.It Cm seek= Ns Ar n >> +.It Cm seek Ns = Ns Ar n >> Seek >> .Ar n >> blocks from the beginning of the output before copying. >> -On non-tape devices, an >> -.Xr
Re: dd(1) wording and style
Hi, Jason McIntyre wrote on Fri, Feb 14, 2020 at 07:28:59AM +: > On Thu, Feb 13, 2020 at 11:25:07PM +0100, Jan Stary wrote: >> * Fix a factual error in the description of bs: it does not >> supersede ibs/obs, dd will error out when both are specified. > i don;t want to comment on this
Re: [PATCH] [www] books.html - remove superfluous angle bracket
Hi Raf, Raf Czlonka wrote on Fri, Feb 14, 2020 at 02:55:20PM +: > On Mon, Nov 25, 2019 at 11:16:09AM GMT, Raf Czlonka wrote: >> Index: books.html >> === >> RCS file: /cvs/www/books.html,v >> retrieving revision 1.117 >> diff -u
Re: [PATCH] [www] faq/current.html - be consistent with naming of the 'id' attribute
Hi Raf, Raf Czlonka wrote on Fri, Feb 14, 2020 at 01:53:03PM +: > Small inconsistency. > Personally, I prefer id *without* the 'r' but the below is "the odd > one out" so... Oops, sorry. Fixed, thanks for noticing. Ingo > Index: faq/current.html >
Re: setlocale() in cron
Hi, Jan Stary wrote on Mon, Feb 10, 2020 at 01:13:58PM +0100: > Why does cron(8) and crontab(1) need to setlocale()? Committed; thanks to millert@ for cross-checking. Ingo > Index: cron.c > === > RCS file:
Re: don't try to signal with newsyslog -r
Hi, Jan Stary wrote on Mon, Feb 10, 2020 at 05:12:53PM +0100: > The -r option of newsyslog(8) removes the requirement > that newsyslog runs as root. Would it also make sense > to not try to send the SIGHUP to syslogd in that case? While i'm not sure that i want to take care of this patch, given
Re: setlocale() in cron
Hi, Jan Stary wrote on Mon, Feb 10, 2020 at 01:13:58PM +0100: > Why does cron(8) and crontab(1) need to setlocale()? I looked through the *.c files in /usr/src/usr.sbin/cron/ and found the following locale-dependent functions: atrun.c: isalpha(3), isupper(3) cron.c: strtod(3)
Re: Add note about example dhclient.conf
Hi Kyle, Kyle Isom wrote on Mon, Feb 10, 2020 at 07:34:25AM -0800: > On Sat, Feb 8, 2020, at 14:15, Jason McIntyre wrote: >> - i'm ok with adding the path to these files to a FILES section > Done. I already committed a comprehensive diff doing that in a simpler way earlier today:
Re: locate.updatedb TMPDIR
Hi Joerg, this is absolutely not OK. How did you test this? $ doas cat /var/log/weekly.part /etc/weekly[79]: no closing quote With that fixed, i agree with the direction of the change. Yours, Ingo Joerg Jung wrote on Sun, Feb 09, 2020 at 12:33:42AM +0100: > I have a machine with a
Re: locate.updatedb TMPDIR
Hi Todd, Todd C. Miller wrote on Sun, Feb 09, 2020 at 07:52:10AM -0700: > I'm fine with this. I don't really object, but i'm not sure it is needed either. It's certainly obvious that command line arguments override defaults. That's what they always do. It's their whole point, in general.
Re: mention /etc/examples/ in bgpf.conf(5)/bgpd(8)
Hi Jason, Jason McIntyre wrote on Sun, Feb 09, 2020 at 07:49:10AM +: > - bgpd.8 refers to /etc/bgpd.conf. that file doesn;t exist by default. I do not consider that a problem, not even a minor one. ENVIRONMENT says which variables are inspected if they exist. FILES says which files are
Re: mention /etc/examples/ in bgpf.conf(5)/bgpd(8)
Hi Marc, Marc Espie wrote on Sun, Feb 09, 2020 at 02:27:23PM +0100: > I still think it's a good idea to put it in afterboot(8). No more objections, with or without jmc@'s tweaks. It seems clear that enough people want it in that page. Yours, Ingo > Index: afterboot.8 >
Re: mention /etc/examples/ in bgpf.conf(5)/bgpd(8)
Hi, Theo de Raadt wrote on Sat, Feb 08, 2020 at 04:39:42PM -0700: > For complicated configurations, the text could explain the reason the > example is valuable -- for instance > > .It Pa /etc/examples/bgpd.conf > Example configuration file demonstrating IBGP mesh, multiple transits, > RPKI
mention /etc/examples/ in bgpf.conf(5)/bgpd(8)
Hi, Jason McIntyre wrote on Sat, Feb 08, 2020 at 10:15:08PM +: > - i'm ok with adding the path to these files to a FILES section So, here is a specific patch for bgpf.conf(5) and bgpd(8) such that we can agree on a general direction for one case where the example file is particularly
Re: get rid of almost empty /etc/examples/mixerctl.conf
Hi Theo, Theo de Raadt wrote on Sat, Feb 08, 2020 at 03:33:37PM -0700: > Jason McIntyre wrote: >> without getting into a discussion about /etc/examples, in this case i >> personally see neither the point of the example config file (so trivial >> as to be questionable) nor the addition to the
get rid of almost empty /etc/examples/mixerctl.conf
Hi Theo, you have a point, that was a lot of cheap talk and no patch. I don't aim at changing yacc(1) grammars. I think most parts of OpenBSD configuration systems already have sane defaults and most configuration syntaxes are already good with respect to simplicity and usability. At least | https://www.mail-archive.com/search?l=tech%40openbsd.org&q=from:%22Ingo+Schwarze%22&o=newest | CC-MAIN-2021-17 | refinedweb | 5,957 | 69.41 |
Can QSqlDatabase use MySQL?
Hi,
I am very much impressed with the SQL classes and support in Qt. However, I usually work with MySQL Community Edition. Is there any support for that database? I don't see a driver for it...
Thanks,
Juan Dent
- blaisesegbeaya
Qt supports the MySQL Community version.
Add to your .pro file: QT += sql # that will allow the SQL library to be loaded.
In your application do the following
#include <QSqlDatabase>
QSqlDatabase mdb; // instantiate a QSqlDatabase class variable
int someFunction()
{
mdb = QSqlDatabase::addDatabase("QMYSQL", "MyConnection"); // load the MySQL driver
mdb.setHostName(serverAddress);
mdb.setUserName(userName);
mdb.setPassword(passWord);
mdb.setDatabaseName(yourDefaultDatabaseName);
return mdb.open();
}
There is no attempt from me to check for errors. Kindly read the documentation.
Once the database is open, you can use the high-level classes of Qt. You will see that the query classes take QSqlDatabase as a parameter.
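As a follow-up sketch (not part of the original answer — the table and column names are invented for illustration), here is how a query against that named connection might look:

```cpp
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QSqlError>
#include <QDebug>

void runSampleQuery()
{
    QSqlDatabase db = QSqlDatabase::database("MyConnection"); // look up the connection by name
    QSqlQuery query(db);
    if (query.exec("SELECT id, name FROM users"))             // hypothetical table
    {
        while (query.next())
            qDebug() << query.value(0).toInt() << query.value(1).toString();
    }
    else
    {
        qDebug() << query.lastError().text();                 // report why it failed
    }
}
```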
Hope I helped.
- SGaist Lifetime Qt Champion
Hi,
In addition to what @blaisesegbeaya wrote. Here you can find the complete list of SQL drivers supported by Qt.
Note that you need to have the MySQL client libraries installed. Depending on the version of them you have, you may have to rebuild the driver. | https://forum.qt.io/topic/62733/can-qsqldatabase-use-mysql | CC-MAIN-2017-51 | refinedweb | 200 | 61.43 |
I'll rephrase this as an RFC, since I want help and comments.

Scenario:

I have a driver which accesses a "disk" at the block level, to which
another driver on another machine is also writing. I want to have
an arbitrary FS on this device which can be read from and written to
from both kernels, and I want support at the block level for this idea.

Question:

What do people think of adding a "direct" option to mount, with the
semantics that the VFS then makes all opens on files on the FS mounted
"direct" use O_DIRECT, which means that file r/w is not cached in VMS,
but instead goes straight to and from the real device? Is this enough
or nearly enough for what I have in mind?

Rationale:

No caching means that each kernel doesn't go off with its own idea of
what is on the disk in a file, at least. Dunno about directories and
metadata.

Wish:

If that mount option looks promising, can somebody make provision for
it in the kernel? Details to be ironed out later?

What I have explored or will explore:

1) I have put shared zoned read/write locks on the remote resource, so each
kernel request locks precisely the "disk" area that it should, in
precisely the mode it should, for precisely the duration of each block
layer request.

2) I have maintained request write order from individual kernels.

3) IMO I should also intercept and share the FS superblock lock, but that's
for later, and please tell me about it. What about dentries? Does
O_DIRECT get rid of them? What happens with mkdir?

4) I would LIKE the kernel to emit a "tag request" on the underlying
device before and after every atomic FS operation, so that I can maintain
FS atomicity at the block level. Please comment. Can somebody make this
happen, please? Or do I add the functionality to VFS myself? Where?

I have patched the kernel to support mount -o direct, creating MS_DIRECT
and MNT_DIRECT flags for the purpose. And it works. But I haven't
dared do too much to the remote FS by way of testing yet.
I have confirmed that individual file contents can be changed without
problem when the file size does not change.

Comments?

Here is the tiny proof of concept patch for VFS that implements the
"direct" mount option.

Peter

The idea embodied in this patch is that if we get the MS_DIRECT flag when
the vfs do_mount() is called, we pass it across into the mnt flags used
by do_add_mount() as MNT_DIRECT and thus make it a permanent part of the
vfsmnt object that is the mounted fs. Then, in the generic
dentry_open() call for any file, we examine the flags on the mnt
parameter and set the O_DIRECT flag on the file pointer if MNT_DIRECT
is set on the vfsmnt object.

That makes all file opens O_DIRECT on the file system in question,
and makes all file accesses uncached by VMS.

The patch in itself works fine.

--- linux-2.5.31/fs/open.c.pre-o_direct Mon Sep 2 20:36:11 2002
+++ linux-2.5.31/fs/open.c Mon Sep 2 17:12:08 2002
@@ -643,6 +643,9 @@
 		if (error)
 			goto cleanup_file;
 	}
+	if (mnt->mnt_flags & MNT_DIRECT)
+		f->f_flags |= O_DIRECT;
+
 	f->f_ra.ra_pages = inode->i_mapping->backing_dev_info->ra_pages;
 	f->f_dentry = dentry;
 	f->f_vfsmnt = mnt;
--- linux-2.5.31/fs/namespace.c.pre-o_direct Mon Sep 2 20:37:39 2002
+++ linux-2.5.31/fs/namespace.c Mon Sep 2 17:12:04 2002
@@ -201,6 +201,7 @@
 		{ MS_MANDLOCK, ",mand" },
 		{ MS_NOATIME, ",noatime" },
 		{ MS_NODIRATIME, ",nodiratime" },
+		{ MS_DIRECT, ",direct" },
 		{ 0, NULL }
 	};
 	static struct proc_fs_info mnt_info[] = {
@@ -734,7 +741,9 @@
 		mnt_flags |= MNT_NODEV;
 	if (flags & MS_NOEXEC)
 		mnt_flags |= MNT_NOEXEC;
-	flags &= ~(MS_NOSUID|MS_NOEXEC|MS_NODEV);
+	if (flags & MS_DIRECT)
+		mnt_flags |= MNT_DIRECT;
+	flags &= ~(MS_NOSUID|MS_NOEXEC|MS_NODEV|MS_DIRECT);

 	/* ... and get the mountpoint */
 	retval = path_lookup(dir_name, LOOKUP_FOLLOW, &nd);
--- linux-2.5.31/include/linux/mount.h.pre-o_direct Mon Sep 2 20:31:16 2002
+++ linux-2.5.31/include/linux/mount.h Mon Sep 2 18:06:14 2002
@@ -17,6 +17,7 @@
 #define MNT_NOSUID	1
 #define MNT_NODEV	2
 #define MNT_NOEXEC	4
+#define MNT_DIRECT	256

 struct vfsmount {
--- linux-2.5.31/include/linux/fs.h.pre-o_direct Mon Sep 2 20:32:05 2002
+++ linux-2.5.31/include/linux/fs.h Mon Sep 2 18:05:57 2002
@@ -104,6 +104,9 @@
 #define MS_REMOUNT	32	/* Alter flags of a mounted FS */
 #define MS_MANDLOCK	64	/* Allow mandatory locks on an FS */
 #define MS_DIRSYNC	128	/* Directory modifications are synchronous */
+
+#define MS_DIRECT	256	/* Make all opens be O_DIRECT */
+
 #define MS_NOATIME	1024	/* Do not update access times. */
 #define MS_NODIRATIME	2048	/* Do not update directory access times */
 #define MS_BIND		4096
This little project was inspired by the need to have a low cost form of communication for a device embedded with the Arduino Pro Mini board. Although I have been particularly impressed with reliable Xbee RF modules in the past, I had planned for the components contained within the device to be permanently sealed for waterproofing. I therefore felt that I needed a more affordable solution, something that I wouldn't be too disappointed if it got damaged. The proposed device is extremely compact and only requires 'line of sight' operation so infra-red seemed like the perfect solution.
Why use Infra-red?
- Much more affordable than other forms of wireless communication e.g. RF
- Only requires single diode component to receive data
- Ideal for projects with limited space available
- Low power consumption
- Control the brightness of an LED using '+' and '-' buttons
- Use the 'play/pause' to switch on/off
- Cycle through modes with 'previous/next' buttons
- Use 'menu' button to reset device
Step 1: Components & Preparation
- Arduino ATMEGA Microcontroller
- Apple IR Remote
- IR Receiver Diode
- 5v 5mm LED
- 220 ohm Resistor
- You'll need to download the 'IRremote.h' library for the Arduino to make sense of those IR signals.
- Click on the link and check out Ken Shirriff's blog whilst downloading the IRremote.zip.
- Extract the file into arduino/hardware/libraries
Other Preparation
Chances are that if you've got an Apple Remote, you probably have a Mac too. To prevent accidentally flicking through Front Row or your iTunes library, you might want to disable your Mac's IR receiver for a while.This is really easy to do. Just open up System Preferences / Security / 'Disable remote control infrared receiver'.
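The Instructable's own sketch is not reproduced here, but a minimal receive-only sketch in the style of the IRremote library examples might look like the following — the pin numbers are my assumptions, and the hex codes your Apple remote sends should be learned by watching the serial monitor before wiring up the brightness logic described above:

```cpp
#include <IRremote.h>

const int RECV_PIN = 11;   // assumed: pin wired to the IR receiver diode
const int LED_PIN  = 9;    // assumed: PWM pin driving the LED

IRrecv irrecv(RECV_PIN);
decode_results results;
int brightness = 128;

void setup()
{
  Serial.begin(9600);
  irrecv.enableIRIn();     // start the IR receiver
  pinMode(LED_PIN, OUTPUT);
}

void loop()
{
  if (irrecv.decode(&results)) {
    Serial.println(results.value, HEX);  // learn your remote's codes here
    // Compare results.value against the codes printed for '+' and '-'
    // and adjust brightness accordingly (codes differ between remotes).
    analogWrite(LED_PIN, brightness);
    irrecv.resume();       // ready to receive the next code
  }
}
```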
the code compiles fine with me when i press the + - buttons i get the serial monitor to print brightness changes but the led never lights , i tried different pins and the same , i also tested the led and it works fine , what might be the problem ???
C:\Users\Owner\Documents\Arduino\libraries\IRremote/IRremoteInt.h:92: error: 'uint8_t' does not name a type
Open the library's source and change #include <WProgram.h> so that it correctly reads #include <Arduino.h> (newer Arduino IDEs renamed that header, which is what produces the 'uint8_t' error above).
Hope this works out for you :)
I've recently used this diode without changing the arduino sketch and it works just the same...
You'll need this library mentioned in Step 1 :
Ken Sherriff's Blog & IR Remote Library
Sorry, I don't really have any experience with Attiny2313 devices.
Hope this helps a little anyway ;) | http://www.instructables.com/id/Control-Arduino-with-IR-Apple-Remote/?comments=all | CC-MAIN-2014-42 | refinedweb | 413 | 59.53 |
Making Jenkins widget plugin

13 Dec 2013
This article was written for the 14th day of the Jenkins Advent Calendar.
Making Jenkins plugin
Have you ever made a Jenkins plugin? This post is about my experience developing one. I had never written Java before, and of course didn't know how to set up Maven projects. So while writing the plugin code I went this way and that, googling and trying code snippets, and I felt that there were not enough documents for first-time Jenkins plugin developers. Although I only made a widget plugin, I would be glad if this article is useful to developers in the same situation as me.
Getting started
First, you need to generate a skeleton Jenkins plugin using the Maven HPI plugin. I used Maven 3.0.5 on an OS X machine.
Write the below XML in ~/.m2/settings.xml (the Maven settings file):
Create the skeleton.
$ mvn hpi:create
The groupId and artifactId of your plugin will be asked for, as below.
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-hpi-plugin:1.106:create (default-cli) @ standalone-pom ---
Enter the groupId of your plugin [org.jenkins-ci.plugins]: com.lewuathe.plugins
Enter the artifactId of your plugin (normally without '-plugin' suffix): mytest
Then the mytest project is created under the current directory. The directory tree looks like this:
% tree mytest
mytest
├── pom.xml
└── src
    └── main
        ├── java
        │   └── com
        │       └── lewuathe
        │           └── plugins
        │               └── mytest
        │                   └── HelloWorldBuilder.java
        └── resources
            ├── com
            │   └── lewuathe
            │       └── plugins
            │           └── mytest
            │               └── HelloWorldBuilder
            │                   ├── config.jelly
            │                   ├── global.jelly
            │                   ├── help-name.html
            │                   └── help-useFrench.html
            └── index.jelly

13 directories, 7 files
This sample project uses the Builder extension point of Jenkins plugins. A Builder extension point is executed inside the build process: if you have any step that should run during a build, you write a class which extends Builder. In this article, however, I introduce a Widget plugin. If you want to know what kinds of extension points exist, please search on this page
Before explaining the Widget plugin, let me show how to run a plugin.
$ mvn hpi:run
With the above command, Maven downloads the packages this plugin project depends on and runs a Jenkins test server on port 8080.
# Access http://localhost:8080 in your browser
You can now look at the Jenkins server! That is all you have to do before developing your own Jenkins plugin. Let's get started.
Write Widget plugin
My Jenkins plugin is published in this repository. With this plugin you can see the Hacker News top timeline on the Jenkins dashboard, and it can be downloaded from there. A screenshot is shown below.
By examining this plugin's source code, I want to list the points that gave me trouble while completing it.
Correspondence of Jelly file and Java class
In Jenkins, Stapler is used in order to map URL and Java objects.
Stapler is a library that “staples” your application objects to URLs, making it easier to write web applications. The core idea of Stapler is to automatically assign URLs for your objects, creating an intuitive URL hierarchy.
At first, I could not grasp what that means, nor which objects are practically mapped to which jelly files. In conclusion, the correspondence is made between matching directory trees.
Please look at the src/main/java and src/main/resources directories of your project. They have the same structure, don't they? In these directories, the correspondence between Java classes and jelly files is made.
For example, in the hckrnews-plugin source tree, java/com/lewuathe/plugins/hckrnews/HckrnewsWidget.java corresponds to resources/com/lewuathe/plugins/hckrnews/HckrnewsWidget/. The former is the model or controller, and the latter is the view, in MVC terms.
├── java
│   └── com
│       └── lewuathe
│           └── plugins
│               └── hckrnews
│                   └── HckrnewsWidget.java   // <-- This is the controller
└── resources
    ├── com
    │   └── lewuathe
    │       └── plugins
    │           └── hckrnews
    │               └── HckrnewsWidget
    │                   └── index.jelly       // <-- This is the view, corresponding to the controller HckrnewsWidget.java
    └── index.jelly
Calling view method
In HckrnewsWidget.java, a News static class is defined inside the subclass of the Widget extension point.
@Extension
public class HckrnewsWidget extends Widget {
    // ....
    public static class News {
        private String title;
        private String url;
        private String points;
        private String postedBy;

        public String getTitle() {
            return this.title;
        }

        public String getUrl() {
            return this.url;
        }

        public String getPoints() {
            return this.points;
        }

        public String getPostedBy() {
            return this.postedBy;
        }
    }
    // ....
}
These are ordinary getters on the News object. Now on to the index.jelly file.
<j:jelly xmlns:j="jelly:core" xmlns:l="/lib/layout">
  <l:pane>
    <tr>
      <th class="pane">Title</th>
      <th class="pane">Points</th>
      <th class="pane">Posted by</th>
    </tr>
    <j:forEach var="news" items="${it.newslist}">
      <tr>
        <td align="left" class="pane" style="width:10px;"><a href="${news.url}">${news.title}</a></td>
        <td class="pane" align="right">${news.points}</td>
        <td class="pane" align="right">${news.postedBy}</td>
      </tr>
    </j:forEach>
  </l:pane>
  Last Updated: ${it.lastupdatedstr}
</j:jelly>
The details of Jelly syntax are not explained this time. In this template file, it refers to the corresponding object in Stapler's logic — in other words, HckrnewsWidget.java. So when you write it.newslist, this calls:
@Extension
public class HckrnewsWidget extends Widget {
    //...
    public List<News> getNewslist() {
        return this.newsList;
    }
    //...
}
Then index.jelly can get the news list. But you should take care of the casing of the method name. If you write getNewsList, it can still be called. The cases are below.
- getNewslist : it.newslist is called
- getNewsList : it.newsList is called
- getNewslist : it.newsList is not called
So it looks like automatic conversion to a capital letter is only applied to the first letter; the casing of the remaining letters must match the getter name exactly.
Making Wiki
This might be a rare case that only I experienced: a Jenkins wiki page cannot be created automatically. Of course, it is taken for granted that you write down your wiki URL in pom.xml.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- ... -->
  <url></url>
  <!-- ... -->
</project>
But this does not trigger the creation of the Jenkins wiki page :( You have to make the page yourself — I misunderstood this point. After you make your own wiki page, add a tag.
After you add the tag, your wiki page will be listed on this page, which makes it easier for users to find.
After all
This experience taught me many things about plugin development, Jenkins extension points and, of course, Java!! Conversely, even if you have no Java experience, you can make your own Jenkins plugin in about two weeks. And I want to keep making Jenkins plugins as ideas come to me.
Last but not least, write test code for your plugin. With Maven, you can run JUnit very easily. If a plugin for Jenkins — a continuous integration tool — had no tests, it would sound like mistaking the means for the end, wouldn't it?
Viva great continuous integration! Thank you. | https://www.lewuathe.com/jenkins/widget/plugin/making-widget-plugin.html | CC-MAIN-2021-49 | refinedweb | 1,090 | 60.51 |
What is SyncRoot?
SyncRoot is an internal object a class uses to allow multiple threads to access and share its data. Some classes expose the SyncRoot object so that client code can also gain exclusive access for operations that need to be atomic. For example, to lock an ArrayList, you would use the lock statement on the ArrayList’s SyncRoot property:
ArrayList list = new ArrayList();
lock (list.SyncRoot)
{
    list.Add( "test" );
}
Why Not lock(this)
Some programmers might be tempted to lock the entire array, such as:
ArrayList l = new ArrayList();
lock (l)
{
    l.Add( "test" );
}
However, this makes it very difficult to debug synchronization issues. Whereas if all clients have to lock an object through its SyncRoot property, you can set a breakpoint in the property and see who is trying to lock the object and whether another thread already has a lock.
Create Your Own SyncRoot
You can use any object as your SyncRoot, as long as you use the same one for all clients. Following is a simple example of a class that provides its own SyncRoot:
public class MyIntegerList
{
    private readonly object m_SyncRoot = new object();
    private readonly List<int> m_List = new List<int>();

    public object SyncRoot
    {
        get { return this.m_SyncRoot; }
    }

    public void Add(int item)
    {
        lock (this.m_SyncRoot)
        {
            this.m_List.Add(item);
        }
    }
}
So then clients would lock its SyncRoot:
MyIntegerList list = new MyIntegerList();
lock( list.SyncRoot )
{
    list.Add( 42 );
}
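For multi-step operations that must be atomic — the scenario SyncRoot exists for — a hedged sketch reusing the MyIntegerList class above could be:

```csharp
MyIntegerList list = new MyIntegerList();

// Holding SyncRoot across both calls makes the pair atomic:
// no other thread that honors SyncRoot can interleave between them.
lock (list.SyncRoot)
{
    list.Add( 1 );
    list.Add( 2 );
}
```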
Your first point is completely meaningless! From a concurrency access standpoint, there’s no difference between locking the object and locking the SyncRoot.
Yes, this is true with ArrayList, which specifically locks the SyncRoot for every method. However, for more generalized classes that may have methods which can operate in “read-only” mode, it’s better to not lock the entire object, which would eliminate the read-only capability. But I understand the confusion from my article, so I updated it. Thanks for your comment.
[…] Grow Your Own SyncRoot – An interesting article on locking – explaining why the SyncRoot property exists […] | https://www.csharp411.com/grow-your-own-syncroot/ | CC-MAIN-2021-43 | refinedweb | 321 | 53.1 |
Thomas Lebrun
Microsoft C# MVP
Since extension methods might be complex to understand, let's see a traditional example first. Take a look at this simple program:
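The program itself appeared as a screenshot in the original article and is not reproduced here; a minimal reconstruction consistent with the surrounding text could look like this (the class name StringHelper is my invention; the method name StringToUpper comes from the compiler error quoted later in the article):

```csharp
using System;

static class StringHelper
{
    // A plain static helper: the caller has to name the static class explicitly.
    public static string StringToUpper(string s)
    {
        return s.ToUpper();
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(StringHelper.StringToUpper("hello"));
    }
}
```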
Although this works fine, the code is hard to read because it calls a static method which stands in a static class.
To simplify this code, we can use the Extension Methods available with C# 3. Take a look at the same application, rewritten with C# 3 and Extension Methods:
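Again the code was shown as a screenshot in the original; the same program rewritten with an Extension Method might be reconstructed as follows (class and method names are my reconstruction):

```csharp
using System;

static class StringHelper
{
    // The 'this' modifier on the first parameter turns the static
    // method into an Extension Method on string.
    public static string StringToUpper(this string s)
    {
        return s.ToUpper();
    }
}

class Program
{
    static void Main()
    {
        // Reads like an instance method, compiles to the static call.
        Console.WriteLine("hello".StringToUpper());
    }
}
```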
If you execute this application, you will see that the result is the same (the string is returned in uppercase), but the code is more intuitive and comprehensible than the previous version.
Before trying to understand how to implement an Extension Method, let’s use Reflector to take a look at the MSIL that the second example produces:
As you can see, the call to the Extension Method is translated, in IL (Intermediate Language), into a call to a simple static method. What does this mean? Simply that Extension Methods are nothing more than an easier way to call static methods, allowing you to write code that is more intuitive.
We can see that the code for our Extension Method has been translated during compilation into a static method with a specific attribute (ExtensionAttribute) enabling the compiler to understand that this method is, in fact, an Extension Method:
As with static methods, the validity of an Extension Method is tested during compilation. If, when compiling, the Extension Method is not found, you will receive an error message like this one:
“’string’ does not contain a definition for ‘StringToUpper’ and no extension method‘StringToUpper’ accepting a first argument of type ‘string’ could be found”:
As we have seen, Extension Methods are used in the same way as other instance methods. So how can you differentiate an Extension Methods from a “normal” method? Well, Visual Studio 2008 will help you in this task.
Indeed, Intellisense in Visual Studio 2008 has been improved to indicate to developers which kind of methods they are using. Thus, if you use Intellisense to display the list of all the methods and properties available for an object, you should be able to see something like this:
An Extension Method is distinguished by:
· A little blue arrow
· The text of the tooltip, which contains the string “(extension)”
Now that we have seen how Extension Methods work, let’s take a better look at the correct way of using this new feature in your projects.
To understand how to implement an Extension Method, let’s revisit our example:
An Extension Method is defined by several rules:
· The method is defined in a non-generic static class that is not nested.
· The method itself is static.
· The first parameter of an Extension Method is preceded by the modifier this. This parameter is called an “instance parameter” and can only appear as the first parameter of the method.
· No other parameter modifiers (ref, out, etc…) are allowed with the modifier this. As a result, a value type can’t be passed by reference to an Extension Method.
· The instance parameter cannot be a pointer type.
· The method must be public, internal or private: it's a stylistic choice to declare them public, but not a requirement!
· Extension Methods are in a namespace which is in scope.
If your method successfully matches all these points, you can safely say that it’s an Extension Method!
If your Extension Method is in another namespace (or another DLL), you will need a using statement to import the content of this namespace and make the call of your method possible:
LINQ (Language Integrated Query) is a new technology for querying objects, XML and SQL. It uses Extension Methods a lot. If you have already written LINQ code, you may have used these methods without knowing what kind of methods they were:
All the methods shown in this IntelliSense window reside in the namespace “System.Linq”, which is found in the assembly “System.Core.dll”. Take a look at this listing of the System.Linq.Enumerable class created inside Visual Studio 2008 from metadata:
About the author:
Thomas Lebrun currently works as a consultant and trainer at Winwise. Since July 2007, he has been a Microsoft C# MVP for his work on C# and WPF. You can find his blog online.
java -jar <name>
I think this is used when that particular JAR file is needed as part of the execution of a command.
JAR (Java Archive) file is a package file format that contains many classes, metadata, and resources like image, audio etc., that is necessary for a Java application. A JAR file is a Java application compressed in a single file which contains all the resources required to run a Java application.
The command mentioned in the question is used to run a JAR file. When you have written a Java program, the command to execute it is:
java <name of the java program file>
Similarly, when you want to execute a Java application that is present as a JAR file, you use the below command:
java -jar <some jar file>.jar
Here, you have to replace <some jar file> with the name of the JAR file. Suppose you want to execute Example.jar, then the command would be:
java -jar Example.jar
I take one look at this project, define my structure.....and my brain goes numb. I don't know where to star.....
///////project///////
Write a program that dynamically allocates an array large enough to hold a user defined number of structures (string name and three double tests). Using a function, the user will enter a name and three test scores for each element in the array. Using another function, the user will sort the array into ascending order by name. Using a third function a table will be displayed on the screen (under a header labeling each column) showing:
Name Test 1 Test 2 Test 3 Average
Then average for each element (three test grades / 3) will be calculated by a fourth function as it is being displayed.
help of any kind is greatly appreciated
#include <iostream>
#include <string>
#include <cstdlib>

struct grades
{
    std::string name;   // the assignment asks for a string name
    double test1;
    double test2;
    double test3;
};

using namespace std;

int main()
{
    system("pause");
    return 0;
}
This is a fork of the o3-xml package, with the only difference being that it allows newer Node versions
This is a W3C-DOM XML library for NodeJS with XPath and namespaces. It is implemented using the C-based LibXML2 and the Ajax.org O3 component system. This is the only W3C-DOM standards-based XML API for NodeJS we are aware of, so your code should work in both the browser and NodeJS. This project is used in production in many NodeJS-based backend projects for Ajax.org and updated frequently.
To use this library simply clone the repo, and require('node-o3-xml') to return the parser object. This repository is a generated build for node 0.2.2 stable, from the o3 repository ()
Binaries included for:
win32 (through cygwin)
lin32
osx64
Other platforms and bit-ness (32/64) will be added to the automated build VM incrementally. If you need to build this node-o3-xml module yourself or want to contribute to the source, please look at the main o3 repository.
If you are looking for the accompanying binary builds of NodeJS check out the () repository
Out of the Angle Brackets
This post continues the “Typed XML programmer” series of blog posts. This time, let’s ponder about ‘the 1st generation of typed XML programming’. What are the imperfections of the 1st generation? (Thanks for your feedback so far. In particular, I loved Bill's thoughts on ‘XML as a true first-class citizen’ and the behavior that is to be associated with such ‘XML in objects’. Bill, do not spoil my story any further; I am getting there!)
Let me recall my simple definition of typed XML programming that I offered previously: “Typed XML programming is XML programming in mainstream OO languages like C#, Java and VB while leveraging XML schemas as part of the XML programming model”. With an eye on what people are doing with XML and objects today, this definition may be talking about both of these two different problems:
In the case of object serialization, XML data and XML types serve no key role in the programming model. Object serialization is a difficult and important topic, but it is not really about “XML programming”. In the case of XML data binding, the XML schema describes the primary data model, and a tool derives an object model to be used by OO programmers to operate on XML data. (I readily admit that there exist scenarios that are neither clear-cut object serialization nor XML data binding, but this post is getting too long anyway.)
So I propose to work with the following definition:
(1st generation of) Typed XML programming
= XML data binding + OO programming
= OO programming on schema-derived object models
Most readers of this post will know the notion of ‘XML data binding’ pretty well, but let me give my own summary. The typical XML data binding technology performs a ‘canonical’ X-to-O mapping, i.e., XML types are systematically mapped to object types without involving domain knowledge of the data model or the programming problem at hand. In simple terms, schema declarations are mapped to OO classes; the structure of each content model is more or less preserved by the associated class with its members corresponding to element and attribute declarations -- modulo imperfections. Here are two XML data binding resources that I want to point out: (i) Bourret’s excellent web site on XML data binding including an impressive list of technologies; (ii) McLaughlin’s (somewhat dated but still valuable) book on (Java-biased) XML data binding.
For illustration, here is an XML schema for purchase orders:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
 <xs:element name="order">
  <xs:complexType>
   <xs:sequence>
    <xs:element ref="item" maxOccurs="unbounded"/>
   </xs:sequence>
   <xs:attribute name="id" type="xs:string"/>
   <xs:attribute name="zip" type="xs:int"/>
  </xs:complexType>
 </xs:element>
 <xs:element name="item">
  <xs:complexType>
   <xs:sequence>
    <xs:element name="price" type="xs:double"/>
    <xs:element name="quantity" type="xs:int"/>
   </xs:sequence>
   <xs:attribute name="id" type="xs:string"/>
  </xs:complexType>
 </xs:element>
</xs:schema>
My favorite XML data binding technology derives the following object model:
public class order {
public item[] item;
public string id;
public int zip;
}
public class item {
public string id;
public double price;
public int quantity;
}
In the canonical mapping at hand, repeating particles (cf. maxOccurs="unbounded") are mapped to arrays; XML Schema’s built-in simple types (such as type="xs:double") are mapped to reasonable counterparts of the C#/.NET type system. Element particles are generally mapped to public fields (or to public properties that trivially access private fields).
If such mappings would scale pretty well for all of XML and all of XSD, then typed XML programming (as of today) would be just fine. Typed XML programming would be OO programming where the object models were ‘accidentally’ described in the XSD notation.
The typical XML data binding technology comes with the (potentially implicit) goal of ‘hiding XML from the developers’ and ‘allowing them to work with familiar objects instead’. As we will discuss now, such a goal may be hard to hit. So let me collect some of the pain points that I encountered in typed XML programming of the 1st generation.
It is clear that plain DOM-style XML programming makes it (too) easy to construct invalid content or to disobey the schema constraints in queries and DML code. Unfortunately, (contemporary) schema-derived object models do not make all these problems go away. Let’s consider a trivial example for the schema of purchase orders and its associated object model (both shown above). Your application constructs a purchase order, to be sent, as an XML message, to a supplier. Using the API of the object model for orders, the XML message is constructed using OO idioms of object construction. For instance, we may new up an order item using the idiom of expression-oriented object initialization (as provided by C# 3.0):
new item { id = "23", price = 42 }
The code type-checks fine with regard to the object model shown above, even though the quantity is missing, which however is required by the underlying schema. After sending the message over the wire, suddenly, validation fails or functionality throws on the receiving end. You did not perform separate (sender-site) XSD validation because of performance considerations as well as trust in statically typed OO programming.
The example serves as a placeholder for a class of problems: OO static typing and XML/XSD do not blend very well. Most notably, you need to cope with wildcards, various forms of constraints, simple-type facets, and others. Add to this that ‘insistence on valid content’ may be impractical for certain software architectures that leverage XML.
Data modeling for XML isn’t object modeling … so how could anyone expect that XML data binding gives you object models that look like those that a reasonable OO designer would model in the first place? Consequently, (contemporary) typed XML programmers tend to acknowledge that schema-derived object models are unwieldy to work with.
For instance, how would you model an ‘expression form for addition’? It depends on whether you are doing this for XML or OO (or BNF or EBNF or ASDL or ASN or Haskell or what have you). With one of my XSD hats on (the one that is not afraid of substitution groups), I model the expression form for addition as follows:
<xs:element name="Add" substitutionGroup="Exp">
 <xs:complexType>
  <xs:complexContent>
   <xs:extension base="ExpType">
    <xs:sequence>
     <xs:element ref="Exp"/>
     <xs:element ref="Exp"/>
    </xs:sequence>
   </xs:extension>
  </xs:complexContent>
 </xs:complexType>
</xs:element>
With my plain OO hat on, I instead model addition as follows:
public class Add : Exp {
public Exp left; // Let’s give a name to this guy.
public Exp right; // ... this one, too ...
}
By contrast, the typical XML data binding technology maps the above schema as follows:
// Unwieldy object model ahead …
public class Add : Exp {
public Exp exp; // Sigh! How would the tool know a better name?
public Exp exp2; // Uuh! The tool applies name mangling.
}
Here is a list of XSD patterns that regularly imply unwieldy object models:
Essentially, we are talking about the ‘type dimension’ of the infamous ‘X/O impedance mismatch’. If you want to see a more substantial discussion of such mapping problems, here is a shameless plug for my paper on “Revealing the X/O impedance mismatch”. If you wonder whether the various XSD complications are actually encountered in the real world, yes they are, and here is another plug for my paper on the “Analysis of XML schema usage”. As Andrew Farrell points out in his longer comment on my first post in this series, many technologies “can't cope with the more complex aspects of the XSD standard”. I very much agree. I find it important to distinguish between “can’t yet cope” (subject to engineering efforts) versus “really can’t conservatively cope” (i.e., without ‘out-of-box’ thinking). How do you possibly cope with the problem in the addition example above? One desperate answer is ‘by customization’. However, customization is arguably a pain point, just by itself, and I hope to get back on this later in this series.
Talk to a computer scientist and you may hear: “What’s the problem? Trees are degenerated graphs … so C# objects are clever enough to hold XML data.” Thanks for making my argument! So if my plain, schema-derived object model copes with general object graphs, how do I know that, at serialization time, I can make a tree from it? Also, at de-serialization time, how do I stuff XML comments, PIs and interim text into the object graph? Furthermore, I loved the parent axis in my DOM code, why am I supposed to live without parent and friends in the wonderful world of typed XML?
Here goes an example. Your application transforms a given in-memory representation of an XML tree, say an abstract-syntax tree for your favorite programming language. Using the schema-derived object model for the XML-based abstract syntax, the transformation is encoded in terms of basic OO idioms for imperative object manipulation. Suddenly, serialization throws because the DML operations have created a true object graph with cycles and sharing. For instance, let’s wire up an object graph for an Add expression such that serialization is going to throw:
Add a = new Add();
a.left = a; // Sigh!
a.right = a; // Uuh!
XmlSerializer serializer = new XmlSerializer(typeof(Exp));
serializer.Serialize(myWriter,a); // Throw!
Another example follows. You are facing a ‘configuration file scenario’ in your application for which you used DOM-style XML programming so far. You decide to switch to typed XML programming -- so that you are better prepared for future evolution of the configuration files on the grounds of static type checking. Once you deploy your object model, you are hitting sample data that comprises XML comments and processing instructions. Some components actually rely on these bits. It turns out that the chosen XML data binding technology neglects all such XML-isms. So you are hosed. For instance, suppose you are working on Visual Studio automation such that you are rewriting your project files from plain VS 2005 to LINQ preview projects. A snippet of the given “.csproj” file looks as follows:
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<!-- To modify your build process, commence as follows.
Add your task inside one of the targets below.
Then uncomment it.
Other similar extension points exist.
See Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
Your typed XML rewriting functionality only aims to replace the “.targets” line.
Only later you find out about a bonus feature: the XML comments were eradicated.
<Import Project="$(ProgramFiles)\LINQ Preview\Misc\Linq.targets" />
In my experience, typed XML programmers tend to have larger displays (21inch+) than their untyped fellows. Why is that? I can only guess. A simplistic guess would be that the mere size of XML schemas (when compared to EBNF or Haskell) calls for grown-up displays. I don’t really buy this argument because even untyped XML programmers may need to inspect the XSD contract when coding. Hold on, why would typed XML programmers bother about the XML schema given their schema-derived object model? There is the problem! Without going into detail here, it seems that the typical schema-derived object model and its integration into the normal OO programming workflow is insufficient for understanding the XML domain at hand and designing the solution to XML programming problems. Instead, typed XML programmers of the 1st generation tend to consult different resources pseudo-simultaneously:
1) A sample XML file.
2) An XML schema.
3) A schema-derived object model.
4) Potentially also a generated documentation.
5) … Anything else? …
Even though I am a ‘static typing extremist’, I can see that this scattered approach may lead to cognitive overload. By contrast, have a look at Eric White’s post on “Parsing WordML using XLinq”. He is dealing with XML data of a non-trivial kind (in terms of the underlying XML Schema); yet untyped XML programming, merely based on bullet (1) above, scales pretty well. Now throw the XML schema for WordML at your favorite XML data binding technology, and then attempt recoding Eric’s functionality in a typed manner. Results depend on the chosen technology and your personal algesthesia (i.e., your ability to sense pain).
I am sure there are further problems that one could identify.
In fact, I hope to get some feedback from the readers of this post.
In my next post of this series, I plan to talk about the “the scale of typing” in XML programming. On this scale, I will identify two candidate pain killers for the pains of the 1st generation.
Regards,
Ralf Lämmel
As I am working on the 3rd post, I thought I should offer an index as follows: Typed XML programmer
remount, not honored on bind mounts
Bug Description
I was trying to run docker in a nested container. docker wants to remount a bind-mounted dir as ro. The audit log showed this failed. I first tried to add more specific rules, but when those did not work I tried just
remount,
in the policy. Still the mount was denied. Finally when I added 'mount,', it worked.
Ideally I would be able to say
remount options=(ro,bind) -> /var/lib/
Is this still an issue for you on 14.10?
[Expired for apparmor (Ubuntu Utopic) because there has been no activity for 60 days.]
[Expired for apparmor (Ubuntu Trusty) because there has been no activity for 60 days.]
[Expired for apparmor (Ubuntu Precise) because there has been no activity for 60 days.]
[Expired for apparmor (Ubuntu) because there has been no activity for 60 days.]
I just hit this myself with AppArmor 2.9.1 in Debian wheezy. Has this been fixed upstream? I've attached a minimal reproduction.
It's possible that this is a part of the patchset still making its way upstream.
Ash,
can you provide the output of
ls /sys/kernel/
and
apparmor_parser -S <your minimal profile>
the profile binary dump is to just double check that it is the same as what I get locally
John,
Sure thing. Here's my /sys/kernel/
capability caps domain file mount namespaces network policy rlimit
The profile dump is attached. Thanks for having a look! I was just starting to trawl through the source to see if it was something I could patch myself, based on your comment.
I've attached a patch against the 2.9 branch that's working for me. I'm allowing rbind as well as bind because that's the part of the actual call that caused me to discover this. It looks like an equivalent change could be made against master as well.
Should I submit it to the mailing list, too?
Ash,
can you attach the /etc/apparmor.
Hmm, I was scp'ing binaries around and I seem to have broken apparmor_parser on that box at the moment (glibc conflicts - I copied a build from the wrong box by mistake).
I'm travelling over the weekend and early next week - I'll upload it as soon as I have a chance to get that working again.
Ash,
your patch was accepted and forwarded to the list.
I've tracked this down to a compiler bug where the bind flag is getting cleared from the flags set for remounts.
atlassian_apis 0.2.0
atlassian_apis: ^0.2.0
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add atlassian_apis
With Flutter:
$ flutter pub add atlassian_apis
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  atlassian_apis: ^0.2.0
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:atlassian_apis/confluence.dart';
import 'package:atlassian_apis/jira_platform.dart';
import 'package:atlassian_apis/jira_software.dart';
import 'package:atlassian_apis/service_management.dart';
nickdu + 4 comments
I'm also confused why this is under the queue section. I solved it without a queue, stack, list, etc. Just keep track of how much fuel is in the tank. Once it goes below zero then reset the fuel in the tank to zero and set the pump to the next pump.
#include <iostream>
using namespace std;

int main()
{
    unsigned int n;
    cin >> n;
    long long tank = 0;
    unsigned int pump = 0;
    for (unsigned int i = 0; i < n; ++i)
    {
        unsigned int liters;
        unsigned int kilometers;
        cin >> liters >> kilometers;
        tank += (long long)(unsigned long long)liters - (long long)(unsigned long long)kilometers;
        if (tank < 0)
        {
            pump = i + 1;
            tank = 0;
        }
    }
    cout << pump << endl;
}
cyanide + 2 comments
According to your algorithm, if we supply the following input:
4
2 1
1 4
2 2
1 1
it gives the output as 2, whereas technically there is no solution to this scenario, as you are only considering from that point until the end of the list, whereas it should be considered as a queue which wraps around. Please correct me if I am wrong.
nickdu + 1 comment
It was a while ago so I can't say for sure what my assumptions were. If I had to guess I would say I was under the assumption a solution must exist for the data supplied and thus my algorithm works correctly under that assumption. At least I think it does. Do you have an example where it doesn't?
Kartik1607 + 0 comments
Since no where it is asked to return -1 if solution does not exists, your assumption is correct. A solution must exists. Hence, we do not need to use a circular queue. :)
kapploneon + 1 comment
The problem does not require using a queue structure, but the logic/understanding of a queue helps in breaking down the solution.
setusrijan + 0 comments
According to you, for the input 3; 1 25; 2 1; 3 1 the answer would be 1, but technically the answer should be "not possible".
Gladdyu + 9 comments
This was the absolute easiest 'Difficult' problem on this site so far. Possible in linear time and you don't need any queues whatsoever.
Spoiler alert: Integration and global minima.
Gaurav1207 + 1 comment
agreed
kiner_shah + 1 comment
How is integration involved here? Can you please explain? I'm not getting it!
FlintLockwood + 1 comment
I don't get it either, please explain.
Gladdyu + 5 comments
Consider the amount of fuel you have in your tank as a function of the amount of stations you have passed (the integral of the change in fuel {gained - used} per station results in the amount of fuel for every station, the fundamental theorem of calculus). Now: this function has to be positive for every station because you cannot ever have negative fuel in your tank. This only happens whenever you start at a global minimum of this function. You are essentially shifting the function vertically, setting it to zero at your starting point and if this is the lowest point then all other points will be positive.
This is assuming that a full circle will not result in a net fuel loss (then you cannot complete the circle) and furthermore that there are no more than 1 global minima (then you get stuck with exactly 0 fuel at the position of the second global minimum).
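A minimal sketch of the prefix-minimum idea described above (the function name and the use of `long long` are my own; it assumes at least one pump and that a full lap does not lose fuel overall):

```cpp
#include <cstddef>
#include <vector>

// Start one pump past the global minimum of the running fuel balance.
// If the balance never dips below zero, the answer is pump 0.
int startPump(const std::vector<long long>& fuel,
              const std::vector<long long>& dist) {
    long long running = 0;  // balance after leaving pump i, had we started at 0
    long long best = 0;     // lowest balance seen so far
    std::size_t bestIdx = fuel.size() - 1;  // minimum at the end -> start at 0
    for (std::size_t i = 0; i < fuel.size(); ++i) {
        running += fuel[i] - dist[i];
        if (running < best) {
            best = running;
            bestIdx = i;
        }
    }
    return static_cast<int>((bestIdx + 1) % fuel.size());
}
```

Shifting the start to just after the lowest point makes every later running balance non-negative, which is exactly the global-minimum argument above.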
kaushal087 + 1 comment
Your solution's complexity is O(N^2) as you will start from
0th to Nth
1st to 0th
2nd to 1st
3rd to 2nd
...
Nth to N-1th
each step will take N time and there are N step. So the worst case time complexity will be O(N^2). Correct me if I am wrong. I have done this way and I have used loop inside loop and it's time complexity is O(N^2).
morcanie + 2 comments
I don't think that's the method Gladdyu is proposing. His method would have worst case complexity of O(N), because all you do is go through the loop once, and at each stop record the total amount of gas up to that point, which could be negative if there is a greater amount of distance traveled to that point than gas gained. Then the point with the minimum amount of gas is the answer.
Gladdyu, I think there's one error in what you said though; if there are two global minima, it doesn't matter, because that would just mean that you'd get to the second minima with 0 gas, but gain however much gas is there, so you can keep going.
drew_verlee + 1 comment
Is there a comment from Gladdyu I'm missing? He isn't proposing an algorithm anywhere I can see (a required element for there to be an upper bound).
drew_verlee + 0 comments
Without any investigation my guess is a mod, presumably because you linked a solution of some sort. They should do a [comment deleted because xyz], not just ninja-remove them. Again, this is speculation.
Thanks for the update.
chetangautam + 0 comments
I didn't understand your solution. Can you please explain again? I solved it with O(N^2). Thanks.
mehtabhavesh9 + 0 comments
Clarification - You start one station after the station you see the global minimum so you never hit the global minimum.
piyushmishra + 1 comment
I have read that it is impossible to find a global minimum for an arbitrary function. Could that be the case here?
btamada + 1 comment
I think the point of the challenge is to solve the problem using a Queue, rather than just trying to solve it using any possible solution.
PRASHANTB1984 + 0 comments
Thanks for the feedback. However, the level of difficulty sometimes varies as perceived by different users, to some extent!
driftwood + 0 comments
Can't be that easy, as you failed to solve it using a queue, as the problem and category intended!
codeharrier + 0 comments
It's about queues insofar as the input stream is a queue. So, it's a degenerate case.
thebick + 0 comments
Don't need integration, minima, or any such. Don't even need a queue or container, really.
Either the problem is solvable or it's not -- but the problem statement doesn't leave room for unsolvable.
Thus assuming it's solvable, and you're looking for the lowest index from which it works, vinay_star_lord's solution (which I came up with on my own) works without a container, i.e., constant space.
mk9440 + 0 comments
Simple Array operation :
public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int litr[] = new int[n];
        int km[] = new int[n];
        long cap_value[] = new long[n];
        for (int i = 0; i < n; i++) {
            litr[i] = in.nextInt();
            km[i] = in.nextInt();
            cap_value[i] = (litr[i] - km[i]);
        }
        int pos = 0;
        for (int i = 0; i < n; i++) {
            long ptl = 0;
            int check = 0;
            for (int j = 0; j < n; j++) {
                ptl += cap_value[(j + pos) % n];
                if (ptl < 0) { ++pos; check = -9; break; }
            }
            if (check == 0) { System.out.print(pos); break; }
        }
    }
}
a_anofriichuk + 8 comments
Queue-based solutuon, just in case...
struct gasStation {
    int gas;
    int next;
};

int main()
{
    int N;
    cin >> N;
    queue<struct gasStation> route;
    for (int i = 0; i < N; i++) {
        struct gasStation st;
        cin >> st.gas >> st.next;
        route.push(st);
    }
    int start = 0, passed = 0, gas = 0;
    while (passed < N) {
        struct gasStation st = route.front();
        gas += st.gas;
        route.pop();
        if (gas >= st.next) {
            passed += 1;
            gas -= st.next;
        } else {
            start += passed + 1;
            passed = 0;
            gas = 0;
        }
        route.push(st);
    }
    cout << start << endl;
    return 0;
}
Snigdha_Sanat + 3 comments
Thanks man. It passed all the given test cases. But this one (not given on HackerRank) failed:
7
5 2
1 2
3 3
2 5
6 2
1 5
1 6
Or am I missing something?
a_anofriichuk + 2 comments
Sorry, I have no access to my laptop to check it. What is the result of my code on this input, and what is the answer/sequence you expect for this input?
Snigdha_Sanat + 2 comments
This goes into an infinite loop. What happens is: for a series of 0 to 6 here, it exhausts first at node 4, starts off again at 4, and then after exhausting at node 6, arrives at 0 again, and the cycle repeats. We could use a hashmap maybe, to mark off the ones already used as a start node. Just thinking out loud.
The problem did not specify what to output for an invalid input, but we can output -1 or something.
aishik_swarez + 1 comment
Yeah, it is because you are never going to reach the starting point no matter where you start from. If you check properly, your total fuel is more than the total distance to be covered, where mileage is 1 km per litre.
noelkali + 0 comments
That should be "total fuel is LESS than the distance to be covered" so there's no solution. The problem guarantees a solution for given data so this data (total fuel = 19, total distance = 25) does not qualify. There is always a solution if (total fuel >= total distance) but no solution if (total fuel < total distance).
Mr_DarkSider + 0 comments
Total sum at all petrol pumps: Petrol = 19, Distance = 25 > 19

So it will go into an infinite loop, because the total distance itself is not within the total capacity of all the petrol pumps.

It was a simple check, ma'am...!!
atique + 0 comments
Thanks for this. This is an elegant idea. Because of this I can agree that this problem can be used to practice the queue data structure. Following this idea I have implemented the following:
public class TruckTour {
    public Queue<int> PPQueue;

    public void TakeInput() {
        PPQueue = new Queue<int>();
        int n = int.Parse(Console.ReadLine());
        while (n-- > 0) {
            string[] tokens = Console.ReadLine().Split();
            PPQueue.Enqueue(int.Parse(tokens[0]) - int.Parse(tokens[1]));
        }
    }

    // get 0-based index of starting petrol pump
    public int GetStartingPumpIndex() {
        // start with an initial queue; start index is 0
        // keep popping items; whenever it fails, forward it to the next stage
        int pass_count = 0;
        int s_fa = 0;
        int pp_start_index = 0;
        while (pass_count < PPQueue.Count) {
            int r_fa = PPQueue.Dequeue(); // residue of fuel amount
            s_fa += r_fa;
            if (s_fa < 0) {
                s_fa = 0;
                pp_start_index += pass_count + 1;
                pass_count = 0;
            }
            else pass_count++;
            PPQueue.Enqueue(r_fa);
        }
        return pp_start_index;
    }
}
For better understanding it's good to look at 2 goals specified by the problem description,
- Calculate the first point from where the truck will be able to complete the circle
- An integer which will be the smallest index of the petrol pump from which we can start the tour.
driftwood + 1 comment
C++ USERS
This problem can be completed in O(N) (linear) time complexity using std::queue<type>; if you've not utilized queue when solving the problem, you've not been successful (mind the category that the problem is in...)

HINTS:
- You require only the standard constructor class of std::queue<type>
- O(N) is possible; you must only iterate through the queue<type> once
- Mind the size of input variables; set type accordingly
- queue<type>'s type must make use of std::pair<type,type>
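Following the hints above, one possible sketch (the function name is mine; it assumes the input is known to be feasible, as the problem guarantees):

```cpp
#include <queue>
#include <utility>

// One pass over a queue of (fuel, distance) pairs.
// Each pump is dequeued exactly once, so this is O(N).
long long startPumpQueue(std::queue<std::pair<long long, long long>> pumps) {
    long long start = 0;    // candidate starting pump
    long long visited = 0;  // pumps dequeued so far
    long long tank = 0;     // running fuel balance since `start`
    while (!pumps.empty()) {
        auto [fuel, dist] = pumps.front();
        pumps.pop();
        tank += fuel - dist;
        ++visited;
        if (tank < 0) {       // cannot reach the next pump from `start`
            start = visited;  // restart just past the failing pump
            tank = 0;
        }
    }
    return start;
}
```

The queue is taken by value so the caller's copy is left intact; each element is inspected once, which satisfies the single-iteration hint.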
GrahamS + 1 comment
I used a <vector> rather than a <queue>, and then just wrapped the index around to make it a circular queue. Still a "queue-based solution" but a bit more efficient than repeatedly pushing and popping elements.
struct Pump {
    int fuel;
    int distanceToNext;
    Pump(int f, int d) : fuel(f), distanceToNext(d) {}
};

bool getTruckin(vector<Pump>& pumps, int startPump) {
    int fuelInTank = 0;
    int currentPump = startPump;
    while (true) {
        fuelInTank += pumps[currentPump].fuel; // Fill her up
        if (fuelInTank < pumps[currentPump].distanceToNext) {
            // Not enough fuel to reach next pump
            return false;
        } else {
            int nextPump = (currentPump + 1) % pumps.size();
            if (nextPump == startPump) {
                return true; // Completed loop
            } else {
                fuelInTank -= pumps[currentPump].distanceToNext;
                currentPump = nextPump; // Keep on truckin
            }
        }
    }
}
jyotsanamall12 + 0 comments
Hey, I used queues too... but I am getting a wrong answer for test cases in which n = 100000. Where is this code going wrong? Please help!
void queue :: circle()
{
    node *now = front;
    int Pleft = 0, c = 0, index = 0;
    node *temp = now, *ptr = now;
    while (temp->next != now)
    {
        Pleft += temp->petrol;
        //cout<<"pleft: "<<Pleft<<" distance: "<<temp->dist<<endl;
        if (Pleft < temp->dist)
        {
            temp = ptr->next;
            now = temp;
            ptr = temp;
            Pleft = 0;
            c = 0;
        }
        else
        {
            c++;
            if (c == 1)
            {
                ptr = temp;
                index = temp->petrol;
            }
            Pleft -= temp->dist;
            // cout<<"in else pleft: "<<Pleft<<endl;
            temp = temp->next;
        }
    }
    int flag = 0;
    while (front->next != front)
    {
        if (index == front->petrol)
        {
            cout << flag << endl;
            break;
        }
        flag++;
        front = front->next;
    }
}
vinay_star_lord + 4 comments
Easiest approach:
N=input()
sumi=0;
maxi=0;
j=0;
for i in range(N):
a,b=raw_input().split() sumi+=int(a)-int(b) if(sumi<0): sumi=0 j=i+1
print(j)
t_egginton + 2 comments
As you move through the list, you sum up Diff = (fuel - distance). If this value ever drops below 0, it means you can't start from any point up to this point and still make it past this point. You therefore set the next possible start as the pump after your current one and try again.
parmarabhishek_1 + 0 comments
"you can't start from any point up to this point and still make it past this point" — how did you figure that one out?
MPrashanthi + 1 comment
This was what I wanted to know!!!
maximshen + 3 comments
I understand what you are doing here. But suppose you find j as the smallest starting point, your program makes sure there is enough gas to finish the remaining routes from j to N-1, since the route is a circle, but how can you guarantee your gas can cover the route from 0 to j-1 ?
vinay_star_lord + 1 comment
Hey, Thanks for pointing it out!! I think this serves it,
public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int N = in.nextInt();
        int sum = 0;
        int position = 0;
        int residue = 0;
        int a, b;
        int preA = 0;
        int preB = 0;
        for (int i = 0; i < N; i++) {
            a = in.nextInt();
            b = in.nextInt();
            sum = sum + (a - b);
            if (sum < 0) {
                residue += sum;
                sum = 0;
                position = i + 1;
                preA = a;
                preB = b;
            }
            if (i == N - 1) {
                if (sum + residue >= 0) {
                    System.out.println(position);
                } else {
                    if (position < N - 1) {
                        i = position + 1;
                        position = i;
                        sum = 0;
                        residue += preA + preB;
                    } else {
                        System.out.println(-1);
                    }
                }
            }
        }
    }
}
ujjwalsaxena + 0 comments
Its not mentioned. We can assume there is always a solution for this problem
Imvishalsr + 0 comments
Hi maximshen. There is no point going back to 0 through j-1, because you already know they can't be the starting point, as they have already failed. However, it would be useful to know if no starting point exists at all, since there may not be enough fuel to complete the circle no matter where you start. But I believe all the test cases have solutions, so there is really no need to bother about 0 to j-1.
cpu_meltdown + 1 comment
I don't really get this yet. Is the goal to make the truck start at the pump which will make it travel the least distance? And what does the statement "The truck will move one kilometer for each litre of the petrol" mean?
rohit_bokam97 + 0 comments
q = []
n = int(input())
temp = 0
for _ in range(n):
    f, d = map(int, input().split(' '))
    temp = temp + (f - d)
    q.append(temp)
pos = q.index(min(q))
if pos == n - 1:
    print(0)
else:
    print(pos + 1)
The library does not use types from the http-client package because those types are not safe enough (for example Request is an instance of IsString and, if it's malformed, it will blow up at run-time).
“Expandable” refers to the ability of the library to be expanded.
Using with other libraries
- You won't need the low-level interface of http-client most of the time, but when you do, it's better to do a qualified import, because http-client has naming conflicts with req.
- For streaming of large request bodies see the companion package
req-conduit:
Lightweight, no risk solution
The library uses the following mature packages under the hood to guarantee you the best experience:
- level HTTP client used everywhere in Haskell.
- (HTTPS) support for
It's important to note that since we leverage well-known libraries that
the whole Haskell ecosystem uses, there is no risk in using
req. The
machinery for performing requests is the same as with
and
wreq. The only difference is the API.
- {
- :: Maybe Proxy
- :: Int
- :: Maybe Manager
- :: forall b. Request -> Response b -> ByteString -> Maybe HttpExceptionContent
- :: RetryPolicy
- ::
- http :: Text -> Url Http
- :: Text -> Url Https
- (/~) :: ToHttpApiData a => Url scheme -> a -> Url scheme
- (/:) :: Url scheme -> Text -> Url scheme
- parseUrlHttp :: ByteString -> Maybe (Url Http, Option scheme)
- parseUrlHttps :: ByteString -> Maybe (Url Https, Option scheme)
-
- (=:) :: (QueryParam param, ToHttpApiData a) => Text -> a -> param
- queryFlag :: QueryParam param => Text -> param
- class QueryParam param where
- header :: ByteString -> ByteString -> Option scheme
- cookieJar :: CookieJar -> Option scheme
- basicAuth :: ByteString -> ByteString -> Option Https
- basicAuthUnsafe :: ByteString -> ByteString -> Option scheme
- oAuth1 :: ByteString -> ByteString -> ByteString -> ByteString -> Option scheme
- oAuth2Bearer :: ByteString -> Option Https
- oAuth2Token :: ByteString -> Option Https
- port :: Int -> Option scheme
- decompress :: (ByteString -> Bool) -> Option scheme
- responseTimeout :: Int -> Option scheme
- ::.Exception (throwIO) import Control.Monad import Data.Aeson import Data.Maybe (fromJust) import Data.Monoid ((<>)) import Data.Text (Text) import GHC.Generics import Network.HTTP.Req import qualified Data.ByteString.Char8 as B instance MonadHttp IO where handleHttpException = throwIO
We will be making requests against the service.
Make a GET request, grab 5 random bytes:
main :: IO () main = do let n :: Int n = 5 bs <- req GET ( " /: "bytes" /~ n) NoReqBody bsResponse mempty B.putStrLn (responseBody bs)
The same, but now we use a query parameter named
"seed" to control
seed of the generator:
main :: IO () main = do let n, seed :: Int n = 5 seed = 100 bs <- req GET ( " /: "bytes" /~ n) NoReqBody bsResponse $ "seed" =: seed B.putStrLn (responseBody bs)
POST JSON data and get some info about the POST request:
data MyData = MyData { size :: Int , color :: Text } deriving (Show, Generic) instance ToJSON MyData instance FromJSON MyData main :: IO () main = do let myData = MyData { size = 6 , color = "Green" } v <- req POST ( " /: "post") (ReqBodyJson myData) jsonResponse mempty print (responseBody v :: Value)
Sending URL-encoded body:
main :: IO () main = do let params = "foo" =: ("bar" :: Text) <> queryFlag "baz" response <- req POST ( " /: "post") (ReqBodyUrlEnc params) jsonResponse mempty print (responseBody response :: Value)
Using various optional parameters and URL that is not known in advance:
main :: IO () main = print (responseBody response :: Value)
A version of
req that does not use one of the predefined instances of
HttpResponse but instead allows the user to consume
manually, in a custom way.
Response
BodyReader
Since: 1.0.0
Mostly like
req with respect to its arguments, but accepts a callback
that allows to perform a request in arbitrary fashion.
This function does not perform handling/wrapping exceptions, checking
response (with
and retrying. It only prepares
Request and allows you to use it.
Since:.
data HttpConfig Source #
HttpConfig contains general and default settings to be used when
making HTTP requests.
A monad that allows to run
req in any
IO-enabled monad without
having to define new instances.
Since: 0.4.0
Run a computation in the
Req monad with the given
HttpConfig. In
case of exceptional situation an
HttpException will be thrown.
Since:.
class HttpMethod a where Source #
A type class for types that can be used as an HTTP method. To define a
non-standard method, follow this example that defines
COPY:
data COPY = COPY instance HttpMethod COPY where type AllowsBody COPY = 'CanHaveBody Proxy = "COPY".
:: Proxy a -> ByteString Source #
Return name of the method as a
ByteString.
URL
We use
Urls which are correct by construction, see
Url. To build a
Url from a
ByteString, use
parseUrlHttp or
parseUrlHttps.
Request's
Url. Start constructing your
Url with
http or
specifying the scheme and host at the same time. Then use the
Urls (make sure the
OverloadedStrings language extension is enabled).
/:)
Examples
http " --
" --
" /: "encoding" /: "utf8" --
" /: "foo" /: "bar/baz" --
" /: "bytes" /~ (10 :: Int) --
"юникод.рф" --
http :: Text -> Url Http Source #
:: “ scheme..
newtype ReqBodyFile Source #
This body option streams request body from a file. It is expected that the file size does not change during the streaming.
Using of this body option does not set the
Content-Type header.
newtype ReqBodyBs Source #
HTTP request body represented by a strict
ByteString.
Using of this body option does not set the
Content-Type header.
newtype ReqBodyLbs Source #
HTTP request body represented by a lazy
ByteString.
Using of this body option does not set the
Content-Type header.:
queryFlag.
(and
=:)
This body option sets the
Content-Type header to
"application/x-www-form-urlencoded" value.
data FormUrlEncodedParam Source #
An opaque monoidal value that allows to collect URL-encoded parameters
to be wrapped in
ReqBodyUrlEnc..
Example
import Control.Exception (throwIO) import qualified Network.HTTP.Client.MultipartFormData as LM import Network.HTTP.Req instance MonadHttp IO where handleHttpException = throwIO main :: IO () main = do body <- reqBodyMultipart [ LM.partBS "title" "My Image" , LM.partFileSource "file1" "/tmp/image.jpg" ] response <- req POST (http "example.com" /: "post") body bsResponse mempty print $ responseBody response
Since: 0.2.0
reqBodyMultipart :: MonadIO m => [Part] -> m ReqBodyMultipart Source #
Create
ReqBodyMultipart request body from a collection of
Parts.
Since: 0.2.0
class HttpBody body where Source #
A type class for things that can be interpreted as an HTTP
RequestBody.
getRequestBody :: body -> RequestBody Source #
How to get actual
RequestBody.
getRequestContentType :: body -> Maybe ByteString Source # Source #
The opaque
Option type is a
Monoid you can use to pack collection
of optional parameters like query parameters and headers. See sections
below to learn which
Option primitives are available..
queryParam :: ToHttpApiData a => Text -> Maybe a -> param Source #
Headers.
The
Option adds basic authentication.
See also:
The
Option adds an OAuth2 bearer token. This is treated by many
services as the equivalent of a username and password.
The
Option is defined as:
oAuth2Bearer token = header "Authorization" ("Bearer " <> token)
See also: :
Other
port :: Int -> Option scheme Source #
Specify the number of microseconds to wait for response. The default
value is 30 seconds (defined in
ManagerSettings of connection
Manager).
HTTP version to send to the server, the default is HTTP 1.1.
Response
Response interpretations
data IgnoreResponse Source #
Make a request and ignore the body of the response.).
jsonResponse :: Proxy (JsonResponse a) Source #
Use this as the fourth argument of
req to specify that you want it to
return the
JsonResponse interpretation.
data BsResponse Source #
Make a request and interpret the body of the response as a strict
ByteString.
type HttpResponseBody response :: * Source #
The associated type is the type of body that can be extracted from an
instance of
HttpResponse.
toVanillaResponse :: response -> Response (HttpResponseBody response) Source #
getHttpResponse :: Response BodyReader -> IO response Source #.
Other
data HttpException Source #
Exceptions that this library throws.
data CanHaveBody Source #
A simple type isomorphic to
Bool that we only have for better error
messages. We use it as a kind and its data constructors as type-level
See also:
HttpMethod and
HttpBody. | https://hackage.haskell.org/package/req-1.0.0/docs/Network-HTTP-Req.html | CC-MAIN-2022-21 | refinedweb | 1,230 | 55.64 |
Summary
Last week I started a Python 3000 FAQ, and solicited additional questions. Here are some additional answers.
This is a sequal to last week's post. It's tempting to replace most Q/A pairs below with
Q. Will Python 3000 have feature X (which has not been proposed yet)?
A. No. The deadline for feature proposals (PEPs) was April 30, 2007.
but I figured it would be better to try to explain why various ideas (most of which aren't new) weren't proposed, or why they were rejected.
Q. Will implicit string concatenation be removed in Python 3000? (I.e., instead of ("a" "b") you'd have to write ("a" + "b").)
A. No. This was proposed in PEP 3126, but rejected.
Q. Will the binary API for strings be standardized in Python 3000? (Depending on a compile-time switch, Unicode strings use either a 2-byte wide or 4-byte wide representation.)
A. No, there are still compelling reasons to support 2 bytes in some cases and 4 bytes in others. Usually this is dealt with by compiling from source with the headers corresponding to the installed Python binary. If that doesn't work for you, and you really care about this, I recommend that you bring it up on the python-3000 mailing list, explaining your use case.
Q. Why isn't the GIL (Global Interpreter Lock) recursive?
A. Several reasons. Recursive locks are more expensive, and the GIL is acquired and released a lot. Python's thread package doesn't implement recursive locks in C (they are an add-on written in Python, see RLock in threading.py). Given the different thread APIs on different platforms it's important that the C code involved in threads is minimal. But perhaps the most important reason is that the GIL often gets released around I/O operations. Releasing only a single level of a recursive lock would not be correct here; one would have to release the underlying non-recursive lock and restore the recursion level after re-acquiring. This is all rather involved. A non-recursive lock is much easier to deal with.
Q. Will we be able to use statements in lambda in Python 3000?
A. No. The syntax (turning indentation back on inside an expression) would both awkward to implement and hard to read for humans. My recommendation is just to define a local (i.e., nested) function -- this has the same semantics as lambda without the syntactic restrictions. After all, this:
foo = lambda: whatever
is completely equivalent to this:
def foo(): return whatever
(except that the lambda doesn't remember its name).
Q. Will Python 3000 require tail call optimization?
A. No. The argument that this would be a "transparent" optimization is incorrect -- it leads to a coding style that essentially depends on tail call optimization, at which point the transparency is lost. (Otherwise, why bother asking to require it? :-) Also, tracebacks would become harder to read. Face reality -- Python is not a functional language. It works largely by side effects on mutable objects, and there is no opportunity for program transformation based on equivalent semantics.
Q. Will Python 3000 provide "real" private, protected and public?
A. No. Python will remain an "open kimono" language.
Q. Will Python 3000 support static typing?
A. Not as such. The language would turn into Java-without-braces. However, you can use "argument annotations" (PEP 3107) and write a decorator or metaclass to enforce argument types at run-time. I suppose it would also be possible to write an extension to pychecker or pylint that used annotations to check call signatures.
Q. Why doesn't str(c for c in X) return the string of concatenated c values?
A. Then, to be consistent, str(['a', 'b', 'c']) would have to return 'abc'. I don't think you want that. Also, what would str([1, 2, 3]) do? It's a grave problem if str() ever raises an exception because that means the argument cannot be printed: print calls str() on each of its arguments.
Have an opinion? Readers have already posted 22 comments about this weblog entry. Why not add yours?
If you'd like to be notified whenever Guido van van Rossum adds a new entry to his weblog, subscribe to his RSS feed. | http://www.artima.com/weblogs/viewpost.jsp?thread=211430 | CC-MAIN-2016-07 | refinedweb | 718 | 66.03 |
Introduction
Bash is a command language that typically comes as a command-line interpreter program where users can execute commands from their terminal software. For example, we can use Ubuntu’s terminal to run Bash commands. We can also create and run Bash script files through what is known as shell scripting.
Programmers use shell scripts in many automation scenarios, such as for build processes, and CI/CD- or computer maintenance-related activities. As a full-featured command language, Bash supports pipelines, variables, functions, control statements, and basic arithmetic operations.
However, Bash is not a general purpose developer-friendly programming language. It doesn’t support OOP, structures like JSON, common data structures other than arrays, and built-in string or array manipulation methods. This means programmers often have to call separate Python or Node scripts from Bash for such requirements.
This is where the zx project comes in. zx introduced a way to write Bash-like scripts using JavaScript.
JavaScript, by comparison, has almost all the inbuilt features that developers need. zx lets programmers write shell scripts with JavaScript by providing wrapper APIs for several crucial CLI-related Node.js packages. Therefore, you can use zx to write developer-friendly, Bash-like shell scripts.
In this article, I will explain zx and teach you how to use it in your projects.
Comparing Bash and zx
Bash is a single-pass interpreted command language initially developed by Brian Fox. Programmers often use it with the help of Unix or Unix-like commands.
Most of the time, Bash starts separate processes to perform different sub-tasks. For example, if you use the
expr command for arithmetic operations, the Bash interpreter will always spawn another process.
The reason is that
expr is a command-line program that needs a separate process to run. Your shell scripts may look complex when you add more logic to their script files. Your shell scripts may also end up performing slowly due to the spawning of additional processes and interpretation.
The zx project implements a shell script executor similar to Bash but using JavaScript modules. It provides an inbuilt asynchronous JavaScript API to call other commands similar to Bash. Besides that, it provides wrapper functions for several Node.js-based command-line helpers such as chalk, minimist,
fs-extra, OS, and Readline.
How does zx work?
Every zx shell script file has
.mjs as the extension. All built-in functions and wrappers for third-party APIs are pre-imported. Therefore, you don’t have to use additional import statements in your JavaScript-based shell scripts.
zx accepts scripts from standard input, files, and as a URL. It imports your zx commands set as an ECMAScript module (MJS) to execute, and the command execution process uses Node.js’s child process API.
Now, let’s write some shell scripts using zx to understand the project better.
zx scripting tutorial
First, you need to install the zx npm package globally before you start writing zx scripts. Make sure that you have already installed Node.js v14.8.0 or higher.
Run the following command on your terminal to install the zx command line program.
npm install -g zx
Enter
zx in your terminal to check whether the program was installed successfully. You will get an output like below.
The basics of zx
Let’s create a simple script to get the current branch of a Git project.
Create
get_current_branch.mjs inside one of your projects, and add the following code.
#!/usr/bin/env zx const branch = await $`git branch --show-current` console.log(`Current branch: ${branch}`)
The first line is the shebang line that tells the operating system’s script executor to pick up the correct interpreter. The
$ is a function that executes a given command and returns its output when it’s used with the
await keyword. Finally, we use
console.log to display the current branch.
Run your script with the following command to get the current Git branch of your project.
zx ./get_current_branch.mjs
It will also show every command you’ve executed because zx turns its verbose mode on by default. Update your script as below to get rid of the additional command details.
#!/usr/bin/env zx $.verbose = false const branch = await $`git branch --show-current` console.log(`Current branch: ${branch}`)
You can run the script without the zx command as well, thanks to the topmost shebang line.
chmod +x ./get_current_branch.mjs ./get_current_branch.mjs
Coloring and formatting
zx exposes the chalk library API, too. Therefore, we can use it for coloring and formatting, as shown below.
#!/usr/bin/env zx $.verbose = false let branch = await $`git branch --show-current` console.log(`Current branch: ${chalk .bgYellow .red .bold(branch)}`)
More coloring and formatting methods are available in chalk’s official documentation.
User inputs and command-line arguments
zx provides the
question function to capture user inputs from the command line interface. You can enable traditional Unix tab completion as well with the
choices option.
The following script captures a filename and template from the user. After that, it scaffolds a file using the user-entered configuration. You can use the tab completion with the second question.
#!/usr/bin/env zx $.verbose = false let filename = await question('What is the filename? ') let template = await question('What is your preferred template? ', { choices: ["function", "class"] // Enables tab completion. }) let content = "" if(template == "function") { content = `function main() { console.log("Test"); }`; } else if(template == "class") { content = `class Main { constructor() { console.log("Test"); } }`; } else { console.error(`Invalid template: ${template}`) process.exit(); } fs.outputFileSync(filename, content)
A parsed command-line arguments object is available as the global
argv constant. The parsing is done using the minimist Node.js module.
Take a look at the following example that captures two command-line argument values.
#!/usr/bin/env zx $.verbose = false const size = argv.size; const isFullScreen = argv.fullscreen; console.log(`size=${size}`); console.log(`fullscreen=${isFullScreen}`);
Run the above script file as shown below to check the command line argument’s support.
./yourscript.mjs --size=100x50 --fullscreen
Network requests
Programmers often use the
curl command to make HTTP requests with Bash scripts. zx offers a wrapper for the node-fetch module, and it exposes the specific module’s API as
fetch. The advantage is that zx doesn’t spawn multiple processes for each network request like Bash does with
curl — because the node-fetch package uses Node’s standard HTTP APIs for sending network requests.
Let’s make a simple HTTP request to get familiar with zx’s network requests API.
#!/usr/bin/env zx $.verbose = false let response = await fetch(''); if(response.ok) { console.log(await response.text()); }
The above zx script will download and show the content of the specific URL with the help of the node-fetch module. It doesn’t spawn a separate process like Bash’s network calls.
Constructing command pipelines
In shell scripting, pipelines refer to multiple sequentially-executed commands. We often use the well-known pipe character (
|) inside our shell scripts to pass output from one process to another. zx offers two different approaches to build pipelines.
We can use the
| character with the commands set similar to Bash scripting — or we can use the
.pipe() chain method from zx’s built-in API. Check how pipelines are implemented in both ways in the following example script.
#!/usr/bin/env zx $.verbose = false // A pipeline using | let greeting = await $`echo "Hello World" | tr '[l]' [L]` console.log(`${greeting}`) // The same pipeline but with the .pipe() method greeting = await $`echo "Hello World"` .pipe($`tr '[l]' [L]`) console.log(`${greeting}`)
Advanced use cases
Apart from JavaScript-based shell scripting support, zx supports several other useful features.
By default, zx uses a Bash interpreter to run commands. We can change the default shell by modifying the
$.shell configuration variable. The following script uses the
sh shell instead of
bash.
$.shell = '/usr/bin/sh' $.prefix = 'set -e;' $`echo "Your shell is $0"` // Your shell is /usr/bin/sh
You can use the zx command-line program to execute a particular Markdown file’s code snippets written in JavaScript. If you provide a Markdown file, the zx command-line program will parse and execute code blocks.
Let’s look at an example. Download this example Markdown file from the zx GitHub, and save it as
markdown.md. After that, run the following command to execute code blocks.
zx markdown.md
The zx command-line program can also run scripts from a URL. Provide a link to your zx script the same way you’d provide a filename. The following remote script will display a greeting message.
zx
You can import the
$ function from your Node-based web applications as well. Then, it is possible to run commands from your web application’s backend.
Import zx’s
$ function as shown below to call the operating system’s commands from other JavaScript source files.
import { $ } from 'zx' await $`whoami`
Using zx with TypeScript
zx has TypeScript definitions as well, though full support has yet to come. Therefore, programmers can use all of zx’s inbuilt APIs with TypeScript. We can directly provide TypeScript files as zx files to the zx command-line program. Then, zx will transpile and execute the provided TypeScript source files.
Moreover, it is possible to use zx in your TypeScript-based web applications to execute the operating system’s commands.
Conclusion
Bash scripting is a great way to automate your development processes. But, when your Bash scripting becomes complex, you may have to write separate scripts with other programming languages sometimes.
The zx project provides an easy way to write Bash-like scripts with JavaScript and TypeScript. It offers Bash-like minimal APIs to give a shell-scripting feel to what we’re doing — even if we are writing a JavaScript source file.
Besides, zx motivates developers to write JavaScript-based shell scripts without semicolons to make zx scripts and Bash scripts syntactically similar.
However, zx is not a replacement for Bash — it uses a command-line interpreter (Bash by default) internally to execute commands anyway.. | https://blog.logrocket.com/writing-js-based-bash-scripts-zx/ | CC-MAIN-2022-05 | refinedweb | 1,677 | 58.38 |
Not exactly sure what I am doing wrong...it is probably something simple and just being an idiot today but I have these two errors in my program and not sure what I am doing wrong
C:\Documents and Settings\Don & Diane Kruep\My Documents\Judges.java:54: averageScores(double[]) in Judges cannot be applied to (double)
average = averageScores(score);
^
C:\Documents and Settings\Don & Diane Kruep\My Documents\Judges.java:67: operator + cannot be applied to double,double[]
total = total + g;
my code is:
//import classes import java.io.*; public class Judges { public static void main(String[] args) throws IOException { //declaring variables BufferedReader dataIn = new BufferedReader(new InputStreamReader(System.in)); float average; double score = 0.0; boolean done = false; //Get input from user for grades and assign them to an array for (double i = 0; i <= 8; i++) { System.out.print("Enter score # 1:"); System.out.print("Enter score # 2:"); System.out.print("Enter score # 3:"); System.out.print("Enter score # 4:"); System.out.print("Enter score # 5:"); System.out.print("Enter score # 6:"); System.out.print("Enter score # 7:"); System.out.print("Enter score # 8:"); done = false; //use a while loop to keep for counter from increasing when invalid entry is made while(!done) { try { score = Double.parseDouble(dataIn.readLine()); if ((score < 0.0) && (score > 10.0)) throw new NumberFormatException(); //reasonable check else done = true; //exit while is grade is valid } catch(NumberFormatException e) { System.out.print("** Invalid Entry ** \nPlease re-enter score"); } } //end while } //end for //call a method to calculate the average average = averageScores(score); //Display grades and average System.out.println(""); System.out.println("The average grade is " + average); } //end main //method used to calculate the average public static double averageScores(double[] g) { double total = 0, a; for (double i = 0; i<g.length; i++); total = total + g; a = total / g.length; return a; } //end averageGrades() } //end class
Instructions say:
Arrays and For loops
In a diving competition, each contestant’s score is calculated by dropping the lowest and highest scores and then adding the remaining scores. Write a program (Judges.java) that allows the user to enter 8 judges’ scores into an array of doubles and then outputs the points earned by the contestant.
1. you can create either a command prompt console application or a console application using dialog boxes for this program
2. accept the user input as an array of doubles… accept 8 scores
3. only accept scores between 1.0 and 10.0 (include the endpoints)
4. make sure your user is entering valid scores only… if an invalid entry is made, prompt them with an error message and allow them to re-enter that score
5. in your prompt to the user (whether it is the original prompt or the prompt after an invalid entry was made), identify the number of the score they are entering (e.g., Enter score # 5:)
6. calculate the minimum and maximum scores using the functions from the Math class described in Chapter 3. (HINT: you can use these functions in a for loop to compare two numbers at a time to determine which is the max (or min))
7. calculate the total (the sum of the remaining 6 scores)
8. display the min, max, and total formatting each to two decimal places
Example:
Enter score # 1: 9.2
Enter score # 2: 9.3
Enter score # 3: 9.0
Enter score # 4: 9.9
Enter score # 5: 9.5
Enter score # 6: 9.5
Enter score # 7: 9.6
Enter score # 8: 9.8
The maximum score is 9.90
The minimum score is 9.00
The total score is 56.90 | https://www.daniweb.com/programming/software-development/threads/117643/first-time-using-arrays | CC-MAIN-2017-26 | refinedweb | 608 | 60.11 |
Tell us what you think of the site.
Hi,
I got a beginner/intermediate question on having duplicate nCloth systems. First of all: i think the nDynamics are great!
My nCloth system is setup, and works perfectly as i planned it to work. Now it is working, i would like two have *two* of them in the scene both working independently but using the same laws, and i don’t seem to be able to get that right.
First try was to duplicate special the group the system was in with all different options, second try was to reference the system and use a namespace. Both ways just turned out messy and none of them brought me the second nCloth system.
Would anyone be kind enough to point me in the right direction what i might be doing wrong or what to watch out for?
Thanks a lot,
Aksel
How bout saving your nCloth settings as a preset - and applying it to your ‘new’ nCloth object?
n8skow [FA]
Unfortunately that is no option, as the system consists of round 20 nCloth objects with many constraints in between them. I just need a replica of that, and i still couldn’t figure out how. But thanks for your reply!
Import your scene back into the same scene. This will generate duplicates of everything with all of their connections intact. You should have two nucleus solvers at this point, causing each to solve separately.
Todd Palamar
Todd, yep! That reimport did the trick, thanks a bunch!!!
May i ask you, is there any way the same would work with a reference as well? Because referencing, other than import, will mess up my nconstraints for some reasons.
Thanks, Aksel
I’ve noticed this is really the only way to get a real duplicate, without recreating the entire setup by hand. I’ve not found a way other than this. | http://area.autodesk.com/forum/autodesk-maya/dynamics/ncloth-multiple-systems/page-last/ | crawl-003 | refinedweb | 316 | 81.83 |
Kaspersky Source Code In the Wild 154
mvar writes "The source code of an older version of 'K."
And, in other news... (Score:2)
And, in other news, Microsoft has released Windows 95 to rapturous applause.
Is there a difference?
How many people (perhaps apart from malware writers) will really be affected by this disclosure of the source for some 4-year-old software?
Re: (Score:2)
Not as much as you imply, seeing that the DOS-based platform and Windows 9x were both abandoned in favor of the NT-based platform (which traces back to OS/2).
Re: (Score:1)
>>>The designer of NT came from a VMS background but NT was not based on VMS [or OS/2] code.
FTFY. And Netscape's designers came from their previous creation Mosaic for Amiga, Mac, and PC, but Netscape was not based on Mosaic code. Many moons later the Mozilla Suite spun-off from the never-released Netscape 5, and eventually became Seamonkey, but lo the users were not happy with Seamonkey's bloat, so they split-off the browser half and called it Firefox. And it was good.
Thus spake the book of moz
Re: (Score:2)
Not really, the old Navigator was just called the Mozilla suite until Firefox shipped. The SeaMonkey project is run by a group that still wanted to continue development of the suite, which by the way is now no bigger than today's bloaty Firefox, uses the same engine so displays pages exactly as well, but offers more features and is an all-around SUPERIOR browser. Firefox was good when it was actually smaller but these days is pretty pointless. What they should do is keep the FF name because it's well marketed
Re: (Score:2)
That may be so, but it's not the bottom-level kernel stuff anyone is interested in the Windows code base leaking for (well, some crackers and other criminals might be); there are plenty of FOSS kernels that are every bit as good as NT to choose from. What's good about Windows is the stack of libraries. Lots of those are present in Windows 9x, and the complete source to Windows 95 even today would be of great use to someone who wanted to support win32 subsystems on top of some other platform.
Re: (Score:2)
It would probably be a boon to the WINE project, if nothing else.
Re:And, in other news... (Score:5, Informative)
Actually MSFT releasing the Win9X source would be WONDERFUL news, because, if you haven't tried it, Win9X can make a great embedded OS [embeddingwindows.com] with better driver support and lower specs than pretty much any embedded OS out there.

And as for why anyone would care about TFA, that's simple: often you don't "throw the baby out with the bathwater", and significant portions of the code will be reused.
Re: (Score:2)
I'm saying that arbitrary hardware requirements do not have any relation to how well something actually does its job, and the examples you gave are ridiculously off-base for an embedded system in the first place.
As an example, ATMs get new anti-counterfeiting devices all the time (certainly often enough to refer to any particular device as "next-gen"), yet they run old operating systems without significant problems. Sure, there's the occasional virus, but the overall rate of infection is far lower than desktops.
Re: (Score:2)
Also [embeddingwindows.com] industrial control and monitoring, remote instrumentation and telemetry, smart appliances, and research.
An FTP server probably needs a TCP stack, but it likely doesn't need support for laptop power management. On the other hand, a remote monitor might need to run with a backup battery, but communicate over a serial line. Again, embedded systems involve a lot of choices. The field of embedded machines is enormous, and there is certainly no single OS (and especially no single configuration) that will fit them all.
Re: (Score:2)
No but it should be running a better OS. No issue at all getting linux into something like that, pretty common in the embedded world already.
Re: (Score:2)
It currently runs Debian, stripped down to about 100 megs, and that's with only removing packages. A friend of mine (who is more familiar with the Linux internals) says that figure can easily be cut in half. The spare hard drive I stuck in the box is 2 GB, so I'm not particularly worried. Text recipes don't take that much space.
The first version I set up actually ran Windows 98, because I had originally written my recipe program in Visual Basic. It has since been translated to a language that causes less pa
Re: (Score:2)
I had originally written my recipe program in Visual Basic.
It takes a brave man to write a sentence like that on slashdot.
Re: (Score:2)
Why is that? We are talking about an embedded OS not some desktop where you could surf with the thing. Most likely you would simply have a VPN connection to the main server to say process CC info for a purchase. And don't forget we are not talking vanilla Win9X but a stripped down version with only enough files/features to run the single app you are using it for.
So I think you and the rest of the guys here are looking at it the wrong way. You can't judge this by running vanilla win9X on the net because th
Re: (Score:2)
Still no reason to go adding the risks that come with win9x. Lots of better options available.
Re: (Score:2)
It just goes to show you how rabid the blind hate and fanboyism is here on Slashdot. I mean when you point out the entire OS is fitted onto a 16Mb flash and hardwired to ONLY run a single app and connect to a single address and they STILL think it "is a security risk"?
I'd sure as hell hate to have these guys work on anything at any of my SMBs as they are probably the type that thinks you can just slap any Linux or Mac on the net with no firewall or anything because "its not Windows and is safe from viruses"
Re: (Score:3)
The moment you give someone your binary you've given them your code, just in a harder to read format. Any black-hat that cares will merely read the disassembly. Original source code not required.
-Malloc
Re: (Score:2)
There's a very limited number of people who can actually read large swathes of disassembled code, though, and I believe the majority of that already small number has more interesting things to do than see what makes another antivirus suite tick.
Well, until Kaspersky manages to tick one of them off, that is.
Re: (Score:2)
I don't disagree but I think, by the same token, people that can't (or are too lazy to) read the assembly are less likely to have the m4d sk1lls (or attention span) to do something very serious with/to the anti-virus program. But, as you say, once you get into "ticked the general populace off" territory (instead of just "highly-skilled dude working for evil overlord for big$" territory), having the easier-to-read source laying around won't help.
Re:And, in other news... (Score:4, Insightful)
Here's the thing.
The people who write malware already have this code. They might not have the C source, but they've got a good handle on the IO flow and undoubtedly have it in assembly. Is this a game-changer for the malware writers? Not even remotely. Even if this was the source code for the latest version from 2011, it wouldn't change anything.
"They" have access to the exact same software that we have. They can download Avast! or AVG or Kaspersky or MSE and write the malware to be untraceable under those security suites. Hell, if they really wanted it they could find disgruntled employees or cleaning crews and get access to the repositories for cash monies.
No antivirus software... (Score:1)
Works nowadays anyhow, so... I really don't care.
Besides, I'm on Linux.
Re: (Score:2)
The answer is lots of people. Customers of Kaspersky may suddenly discover themselves infected with malware that sidesteps, disables or otherwise interferes with their AV or firewall software. Other people might receive emails offering "free" and apparently legit Kaspersky software which subsequently holds their machine to ransom, or installs a bot. And everyone else w
Pretty useless now (Score:5, Interesting)
Code to a 4-year-old antivirus app, what's that going to be worth? Kaspersky was great until a few years ago. Then one release made my parents' older P4 system near unusable. It went from Firefox loading in a few seconds to close to 30 seconds. Forums were filled with the same complaints and no real fixes. I changed to Avast and it's been great.
Re: (Score:2)
Buy a faster computer just to run anti-virus?
You windows kids make me laugh.
Re: (Score:2)
I used to be a big fan of Kaspersky, but their 2010 update is a real piece of junk. A failed update should not cause a corrupted database that it can't rollback from. It also should not give up and force you to manually download updates from their support website.
And yet this exact thing kept happening every few months like clockwork until I gave up and dumped it. When it worked, it worked very well, but dang.
Re: (Score:2)
I got hit with something nasty a few years ago, and the first thing it did was disable my CA Antivirus (provided by my ISP) from updating. Lo and behold, there was no way that I could find to manually update CA AV at all. I finally was able to clean the machine using Kaspersky's online virus scanner, and I was sufficently happy with it that I bought the product; I'd be perfectly happy with the occasional manual database download if the alternative was having no way to update the signatures, ever.
Re: (Score:2)
Avira is also good. But Kaspersky is even better. You should use it with more modern hardware. Otherwise stick with Avast and all is good.
(emphasis mine)
not according to av-comparatives.org. kaspersky has slipped behind quite a bit while avast and avira are still front-runners.
Re: (Score:3)
Companies using open source can be prosecuted if the wrong thing slips in.
Closed-source companies can't be.
See Oracle Vs Google.
G
Re: (Score:2)
Sure they can. Quite common to run strings against binaries to see what you get. The busybox folks have sued more than one closed source vendor.
I just stopped using anti-virus (Score:2)
Re:I just stopped using anti-virus (Score:4, Informative)
Re: (Score:2)
I also use Flash Block
:)
You do make a very good point about Flash, as is your point that nothing is ever foolproof. I felt, after having done the "right thing" and getting malware, coupled with McAfee not even allowing me to uninstall it completely, I was sick of the game and decided to try Brain 1.0.
Re: (Score:2)
Consider this: the legitimate source's website is hacked, and all its downloads are infected with new malware not yet seen in the wild. This remains unnoticed for several days, during which time the malware has been downloaded by hundreds or even thousands of users. By the time the AV companies get a sample, it's too late for all those downloaders...
Sure these things can happen. But they are very rare. A risk I am willing to take over the slowdown AV software packages add to my nice clean system.
Re:I just stopped using anti-virus (Score:5, Insightful)
But that's not what an AV is for, despite the industry trying to market it as such. Antivirus software is reactionary. The company has to receive an unknown virus and analyze it before they can put the virus in the next definition file update. And any heuristics module included is typically useless against all but the most basic attacks.
AV is at best a catch-all for uncontrolled or uncontrollable situations. Office computers, shared family home machines, etc. that are subject to illogical users' whims would benefit from AV. But AV cannot stop zero-day exploits, cannot prevent malicious JS, and is completely useless against a determined attacker with physical access to a machine.
Proper computer security addresses each attack vector separately. A properly-configured software firewall will take care of most of the threats through the network. In fact, hiding behind a NAT will take care of 99% of the zero-day threats; whitelisting outbound traffic is just good security practice. Noscript and safe surfing habits will guard against anything coming in through the browser. Obviously, preventing unauthorized physical access to the system requires physical security.
All AV will do is maybe stop that infected autorun from your kid's buddy's flash drive, or delete that exe file you accidentally downloaded from a questionable site you were surfing. But that's what it's really there for: all the cases you don't really know or expect to have to guard against.
Re: (Score:2)
Not recommended.
A bunch of malware nowadays appears on:
1. Hacked Websites
2. Advertising
Yeah, if you disable JavaScript and Flash you might have a 'safe experience'. But then if your favourite news website gets hacked, you'll catch a virus.
It's not worth it, truly. Or your flash drive might get infected from someone (there was a printing bureau which actually had this sort of worm on their PCs and infected tons of people).
Re: (Score:2)
So... how much do you trust that flash plugin you got? How about silverlight?
And McAfee is really quite mediocre as AVs go. Avast | AVG | MSSE are all far better.
Re: (Score:2)
Well, if your assertion is correct, then wouldn't the 4-year-old code be worth quite a lot? Seeing as it is a better version, before it went downhill?
Re: (Score:2)
I know it's never likely to be popular on these message boards, but I've actually been having a good experience with Microsoft Security Essentials on the one machine I've tried it on. I've got other machines with AVG Free and avast! on, and MSE has come across relatively simple and light-weight. I'm told it has reviewed pretty well in AV testing too.
Not that I have any complaints from any of the main free AV programmes I've used, but it's nice to see another decent option in the line up.
Re: (Score:2)
Or maybe use a better OS. Upgrading a PC just for antivirus is a hilarious concept.
Pay developers more! (Score:5, Funny)
Re:Pay developers more! (Score:4, Funny)
Perhaps he has only misplaced his gruntle, and is not fully disgruntled.
Re: (Score:2)
it clearly states he was disgruntled. I therefore assume he had his gruntle stolen and that's why he went and stole the code off them. you know, in a "you take my gruntle I'll take your code" kind of way...
Re: (Score:2)
In Soviet My House, wife beats me!
Stolen?? (Score:5, Funny)
I wish them luck recovering it so they don't have to rewrite it from scratch.
Re: (Score:3)
Why... if their source code was stolen, then they don't have it anymore. If their source code is gone, they will have to rewrite it. Unless they recover it somehow.
Get it yet?
Re: (Score:2)
WTS: Sense of humour, stolen from nicholas22. Barely used.
Re: (Score:2)
whoosh
Re: (Score:2)
Reactionary - extreme conservatism or rightism in politics; opposing political or social change.
Re: (Score:2)
I bet now they wish that software could be multiplied easily. If that was only possible, I'd have this great idea where you could create a copy of your software, then store it somewhere safe in case some thief gets in, empties out your servers and makes it away with that big bag with that huge $$ sign on it.
I'll be rich when this finally becomes possible!
Dammit, I should have patented it before posting here...
Re: (Score:1)
Here's another one: Identity theft. Language evolves. Deal with it.
Calling copyright infringement theft is a deliberate attempt to equate infringers with criminals (or the result of having been influenced by same) -- not an accidental evolution of language -- whereas identity theft is, in fact, a crime.
Furthermore, if someone copies your code then at worst you've "lost sales" but at least your program still works. If someone steals your identity, then your identity itself is compromised (in its function as a unique identifier) and your ability to use your identity is redu
Re: (Score:2, Informative)
Here's another one: Identity theft. Language evolves. Deal with it.
Heck no... framing bank fraud as "identity theft" puts the onus on the victim instead of where it properly belongs.
The bad news is (Score:1)
Re:The bad news is (Score:5, Funny)
That won't work. The source for Ubuntu has already leaked.
Re:The bad news is (Score:4, Funny)
Dammit, now Linux is hellish insecure!
Why didn't anyone inform the community? That's so irresponsible!
Re: (Score:2, Insightful)
You know what?
Ubuntu can get viruses just as easily as other OSes. The Apache servers that control botnets aren't running IIS. Wine is a weak point, and Flash is a cross-platform single-point-of-failure. How many times have you blindly added a repository based on what some random untrusted person on the Internet tells you to do? I know I have.
The only reason that it's not as 0wn3d as Windows is that Windows was easy pickings and has huge market share. Now the bad guys are going to focus on smartphones
Re: (Score:1)
Certain people keep saying the only reason there's no such thing as Linux malware is market share.
The fact that applications running on Linux can't alter system files has absolutely nothing to do with it.
Prove it. Release your exploit already.
BTW, Wine is notoriously bad at running malware.
Re: (Score:3)
Drop an executable in ~, change ~/.profile and ~/.bashrc to put those directories first, pwned.
Easy to clean, true, but if you're not looking for it, it's not there. Also defeatable by mounting home noexec but how many user installs do that?
Re: (Score:2)
That's still not an example of modifying system files. So you're dropping an executable in root then running some code to edit some files so you can run the executable. Isn't there some kind of circular reference problem there?
Re: (Score:2)
You can't modify the system files. Notice I said run from ~, not from /.
Arbitrary file write in a browser or plugin or mail client, and you have a compromise. Granted, just for that user, but that's all you need for most personal systems. It's more than good enough for a botnet: you can make connections out and harvest any e-mail addresses / private data from ~.
There's actually an additional hole in *nix that's not present in Windows (or more accurately, Firefox on those systems). You can write a browser plugin
Re: (Score:2)
This is mainly because Wine is notoriously bad at running anything.
Re: (Score:1)
You seem to be confused about how botnets are currently being controlled.
Hint: It's not through Apache.
Re: (Score:2)
Oh wait, you wanted to be fed.. My bad.
Re: (Score:2)
You don't spend much time on Ubuntu boards, do you?
I've seen questions that make me cringe (after years and years of support, you usually can stomach even questions that eventually lead up to "Are you really, really sure it is plugged in?"), but the people there answer even the tenth identical question with the same stoic patience as the first time.
I can't remember seeing a RTFM or LMGTFY on a Ubuntu board.
Re: (Score:2)
Are you kidding?? I tried to install ubuntu 10.10 today. It crashed twice during install and once after install...
Probably a bad burn... Burn it at an insanely low speed, and verify (I use ImgBurn, generally). I went through this with a Windows 7 a week ago, I burned over 5 DVDs with varying speeds and never got one to actually work. They were from an official source, using an official downloader, even (Digital River, we got the student discount shortly after release, and they lied about sending the actu
Copied, not stolen... (Score:1)
"The source code of an older version of 'Kaspersky Internet Security' has been circulated on the internet. The code was created in late 2007 and was probably copied in early 2008. Names contained in the source indicate that the copied code was probably a beta version of the 2008 software package - the current release is Kaspersky Internet Security 2011. According to a Russian language report by CNews (Google translation), the code was copied by a disgruntled ex-employee. The copier has reportedly
Re: (Score:1)
Everybody here understands exactly what happened. Nobody cares about the semantics. You have contributed nothing.
Re: (Score:2)
Tomayto, tomahto. If it were your credit card number being passed around and being used to buy goat porn, you'd probably tell your credit card company it was stolen. Even if some self-rationalizing freeloader came along and pointed out that it can't be stolen since it's still in your wallet. Semantics, at least in this case, really are unimportant.
Disgruntled employee steals? (Score:1)
I have a lovely stapler at home.
Like Netscape.... (Score:1)
Someone... (Score:2)
Someone check this out to see the quality of this closed code!
Code quality is often an excuse used to sell commercial software versus OSS, and I am interested in how "higher" the quality of this stuff is.
here is the source code: (Score:5, Funny)
#include <stdio.h>
#include <kaspersky.h>

char make_prog_look_big[1600000];

main()
{
    if (detect_cache())
        disable_cache();
    if (fast_cpu())
        set_wait_states(lots);

    set_mouse(speed, very_slow);
    set_mouse(action, jumpy);
    set_mouse(reaction, sometimes);

    printf("Please wait, Kaspersky is scanning your computah\n");

    if (system_ok())
        crash(to_dos_prompt);
    else
        system_memory = open("a:\\swp0001.swp", O_CREATE);

    while (1) {
        sleep(5);
        scan_a_single_file();
        sleep(5);
        update_progress_bar();
        sleep(5);
        if (rand() < 0.9)
            crash(complete_system);
    }

    return unrecoverable_system;
}
Kaspersky security?? (Score:1)
Re: (Score:2)
Why? You have to balance security with usability - in this case the ability to actually do your job - which fundamentally means you have to trust your developers with your source code.
If you're a larger company you can break your code down and only allow people access to the module they're working on; for smaller to mid-sized companies that's not such a viable option; people generally work on whatever bit of code needs working on. I doubt Kaspersky actually employs that many developers.
That's assuming i
Re: (Score:3)
Linux is not inherently more secure. Why would it be?
You might notice now and then that an exploit gets discovered in a Linux program. BIND and sendmail have for some time been the poster child for "yet another Linux security hole". Even BIND 9 has its issues. Now, why BIND and sendmail? Are they so horribly insecure compared to the rest of the system?
No. But compromising them is profitable. Simple as that.
Likewise, finding security holes in Windows is profitable. The average Windows user is less clued than | http://developers.slashdot.org/story/11/01/31/2130253/kaspersky-source-code-in-the-wild | CC-MAIN-2014-42 | refinedweb | 4,080 | 72.87 |
java.lang.Object
org.netlib.lapack.Slarz
public class Slarz
Following is the description from the original Fortran source. For each array argument, the Java version will include an integer offset parameter, so the arguments may not match the description exactly. Contact seymour@cs.utk.edu with any questions.
Purpose
=======

SLARZ applies a real elementary reflector H to a real M-by-N matrix C, from either the left or the right.

H is a product of k elementary reflectors as returned by STZRZF.

Arguments
=========

SIDE    (input) CHARACTER*1
        = 'L': form H * C
        = 'R': form C * H

M       (input) INTEGER
        The number of rows of the matrix C.

N       (input) INTEGER
        The number of columns of the matrix C.

L       (input) INTEGER
        The number of entries of the vector V containing the meaningful
        part of the Householder vectors.
        If SIDE = 'L', M >= L >= 0; if SIDE = 'R', N >= L >= 0.

V       (input) REAL array, dimension (1+(L-1)*abs(INCV))
        The vector v in the representation of H as returned by STZRZF.
        V is not used if TAU = 0.

INCV    (input) INTEGER
        The increment between elements of v. INCV <> 0.

TAU     (input) REAL
        The value tau in the representation of H.

C       (input/output) REAL array, dimension (LDC,N)
        On entry, the M-by-N matrix C.
        On exit, C is overwritten by the matrix H * C if SIDE = 'L',
        or C * H if SIDE = 'R'.

LDC     (input) INTEGER
        The leading dimension of the array C. LDC >= max(1,M).

WORK    (workspace) REAL array, dimension
        (N) if SIDE = 'L'
        or (M) if SIDE = 'R'

Further Details
===============

Based on contributions by
A. Petitet, Computer Science Dept., Univ. of Tenn., Knoxville, USA
public Slarz()
public static void slarz(java.lang.String side, int m, int n, int l, float[] v, int _v_offset, int incv, float tau, float[] c, int _c_offset, int Ldc, float[] work, int _work_offset) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/Slarz.html | CC-MAIN-2017-51 | refinedweb | 298 | 66.03 |
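Given the description above (C is overwritten by H * C, where H is an elementary Householder reflector of the form H = I - tau * v * v^T), here is a small dependency-free Python sketch of that operation for SIDE = 'L' with a full-length v. It only illustrates the math, not the netlib implementation: the function name is mine, and the L/INCV packing of v is ignored.

```python
# Sketch: apply the elementary reflector H = I - tau * v * v^T to C
# from the left (SIDE = 'L'), i.e. C := H * C.  Pure Python, no netlib.
def apply_reflector_left(tau, v, c):
    m = len(c)          # rows of C
    n = len(c[0])       # columns of C
    # w = v^T * C  (a length-n row vector)
    w = [sum(v[i] * c[i][j] for i in range(m)) for j in range(n)]
    # C := C - tau * v * w
    return [[c[i][j] - tau * v[i] * w[j] for j in range(n)] for i in range(m)]

c = [[1.0, 2.0], [3.0, 4.0]]
# With tau = 0, H is the identity and C is unchanged (matching "V is not
# used if TAU = 0" above).
print(apply_reflector_left(0.0, [1.0, 0.0], c))   # [[1.0, 2.0], [3.0, 4.0]]
# With v = e1 and tau = 2, H = diag(-1, 1): the first row flips sign.
print(apply_reflector_left(2.0, [1.0, 0.0], c))   # [[-1.0, -2.0], [3.0, 4.0]]
```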
I am doing a problem that asks me to write a program that uses a structure named CorpData to store the following information on a company division and it asks me to store various sales numbers and division names such as east, west, south, or north.
It then asks me to include a constructor that allows the division name and four quarterly sales amounts to be specified at the time a CorpData variable is created.
The program should create four variables of the structure, each representing one of the divisions. Each variable should be passed in turn to a function that calculates and stores the total sales and average quarterly sales for that division. Once this has been done for each division, each variable should be passed in turn to a function that displays the division name, total sales, and quarterly average.
I can't figure out how to get the sales numbers into the right division. This is what I have so far. Any help would be very welcome.
Code:
#include <iostream>
#include <string>
using namespace std;

struct CorpData
{
    string name;
    double firstsales, secondsales, thirdsales, fourthsales, annualsales, averagesales;

    CorpData()
    {
        name = "";
        firstsales = secondsales = thirdsales = fourthsales = 0;
        annualsales = averagesales = 0;
    }
};

int main()
{
    CorpData east, west, north, south;
    cout << "Enter the division name: ";
    cin >> east.name;
    return 0;
}
The Next Ruby
In this article, the author suggests several ways to improve the Ruby language.
Ruby Convergence
Nowadays, every language is trying to be more and more like Ruby. What I find most remarkable is that features of Perl/Lisp/Smalltalk which Ruby accepted are now spreading like wildfire, and features of Perl/Lisp/Smalltalk which Ruby rejected got nowhere.
Here
You could make a far longer list like that, and correlation is very strong.
By using Ruby you're essentially using future technology.
That Was 20 Years Ago!
A downside of having a popular language like Ruby is that you can't really introduce major backwards-incompatible changes. The Python 3 release was very unsuccessful (released December 2008, and adoption is still far from complete), so any future Ruby has to take backward compatibility seriously.
Use Indentation Not End
Here's how it looks with indentation vs }, dropping end:

ary.each do |item|
  puts item
versus
ary.select{|item| item.price > 100 }.map{|item| item.name.capitalize }.each{|name| puts name }
This distinction is fairly close to contemporary Ruby style anyway.
If you're still not sure, HAML is a Ruby dialect which does just that. And Coffeescript is a Ruby wannabe, which does the same (while going a bit too far in its syntactic hacks perhaps).
Autoload Code
Another pointless thing about Ruby is the require and module-nesting boilerplate repeated in every file.
Here's what a version.rb file could contain, in its entirety:

module Version
  STRING = '3.5.0'
With the whole fully qualified name being simply inferred by autoloader from file paths.
The first line is technically inferrable too, but since it's usually something more complex like Class Foo < Bar, it's fine to keep this even when we know we're in foo.rb.
Module Nesting-based Constant Resolution Needs to Die

Operators
It's a slightly incompatible change for code that relied on a previous hacky approach, and it makes method_missing a bit more verbose, but it's worth it, and keyword arguments can help clean up a lot of complex APIs.
Kill #to_sym / #to_s Spam
Every codebase I've seen over last few years is polluted by endless #to_sym / #to_s, and hacks like HashWithIndifferentAccess. Just don't.
This means {foo: :bar} syntax needs to be interpreted as {"foo" => "bar"}, and seriously it just should. The only reason to get anywhere close to Symbols should be metaprogramming.
The whole nonsense got even worse than Python's list vs tuples mess.
Method Names Should Not Be Globally Namespaced
This:
(2.time:hours + 30.time:minutes).time:ago
This would let you teach objects how to respond to as many messages as you want without any risk of global namespace pollution.
Here $1 and friends are some serious hackery:
- $1 and friends are accessing parts of $~ - $1 is $~[1] and so on.
- $~ is just a regular local variable - it is not a global, contrary to $ sigil.
- =~ method sets $~ in caller's context. It can do it because it's hacky C code.
Which unfortunately means it's not possible to write similar methods or extend their functionality without some serious C hacking. Imagine instead a scanner protocol usable in case expressions:

case s = Scanner(item)
when Vector2D
  @x = s.x
  @y = s.y
when Numerical
  @x = s.value
  @y = s.value
And StringScanner class in standard library which needs just a tiny bit extra functionality beyond what String / Regexp provide goes this way.
But even that would still need some kind of convention with regards to creating scanners and matchers - and once you have that, then why not take one extra step and fold =~ into it with shared syntactic sugar?
Let the Useless Parts Go
Here.
Wait, That's Still Ruby!
Yeah, even after all these changes the language is essentially Ruby, and backward incompatibility shouldn't be that much worse than Python 2 vs 3.
I cloned my Zeus HTML configuration and renamed it to produce a new filetype config for Microsoft HTA (HTML Applications), which basically are HTML with an additional namespace (<HTA:...>) and different security levels for ActiveScripting.
So, for Zeus it is HTML. But even if I add HTA as a filetype for the factory HTML settings I do not get any folding. Still, this would be important, since HTAs are self-contained, so I add the scripting and the styling to this file, which can become very large, therefore making it a wish to fold uninteresting parts out of the way when I am not working on them.
Do you have any ideas? I only found one HTA-related post in the forums, and that was by me
| http://www.zeusedit.com/phpBB3/viewtopic.php?p=2881 | CC-MAIN-2017-43 | refinedweb | 129 | 70.02 |
I'm trying to validate some polygons that are on planes with is_valid, but for a polygon on the x-plane I get a "Too few points in geometry component at or near point" error:
from shapely.geometry import Polygon
poly1 = Polygon([(0,0), (1,1), (1,0)])
print(poly1.is_valid)
# True
# z=1
poly2 = Polygon([(0,0,1), (1,1,1), (1,0,1)])
print(poly2.is_valid)
# True
# x=1
poly3 = Polygon([(1,0,0), (1,1,1), (1,1,0)])
print(poly3.is_valid)
# Too few points in geometry component at or near point 1 0 0
# False
The problem is that shapely in fact ignores the z coordinate. So, as far as shapely can tell, you are building a polygon with the points [(1,0), (1,1), (1,1)], which aren't enough to build a polygon.
See this other SO question for more information: python-polygon-does-not-close-shapely.
IMHO, shapely shouldn't allow three-dimensional coordinates, because it brings this kind of confusion.
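Since shapely keeps only (x, y), one workaround (a sketch of mine, not part of the original answer) is to project a ring that lies on an axis-aligned plane onto the two axes that actually vary before validating:

```python
# Sketch (not from the original answer): project 3D points on an
# axis-aligned plane down to the two varying axes before validating.
def project_ring(points):
    # Find the axes whose values are not constant across the ring.
    varying = [a for a in range(3) if len({p[a] for p in points}) > 1]
    assert len(varying) == 2, "ring must lie on an axis-aligned plane"
    return [(p[varying[0]], p[varying[1]]) for p in points]

ring = [(1, 0, 0), (1, 1, 1), (1, 1, 0)]          # poly3: lies on x = 1

# What shapely sees: dropping z leaves only two distinct 2D points.
xy = {(p[0], p[1]) for p in ring}
print(len(xy))                                    # 2 -> "too few points"

# Projecting onto (y, z) instead keeps all three corners distinct.
print(project_ring(ring))                         # [(0, 0), (1, 1), (1, 0)]
```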
K-nearest neighbors (KNN) algorithm is a type of supervised ML algorithm which can be used for both classification as well as regression predictive problems. However, it is mainly used for classification predictive problems in industry. The following two properties would define KNN well −
Lazy learning algorithm − KNN is a lazy learning algorithm because it does not have a specialized training phase and uses all the data for training while classification.
Non-parametric learning algorithm − KNN is also a non-parametric learning algorithm because it doesn’t assume anything about the underlying data.
K-nearest neighbors (KNN) algorithm uses ‘feature similarity’ to predict the values of new datapoints which further means that the new data point will be assigned a value based on how closely it matches the points in the training set. We can understand its working with the help of following steps −
Step 1 − For implementing any algorithm, we need dataset. So during the first step of KNN, we must load the training as well as test data.
Step 2 − Next, we need to choose the value of K i.e. the nearest data points. K can be any integer.
Step 3 − For each point in the test data do the following −
3.1 − Calculate the distance between test data and each row of training data with the help of any of the method namely: Euclidean, Manhattan or Hamming distance. The most commonly used method to calculate distance is Euclidean.
3.2 − Now, based on the distance value, sort them in ascending order.
3.3 − Next, it will choose the top K rows from the sorted array.
3.4 − Now, it will assign a class to the test point based on most frequent class of these rows.
Step 4 − End. We are assuming K = 3 i.e. it would find three nearest data points. It is shown in the next diagram −
We can see in the above diagram the three nearest neighbors of the data point with black dot. Among those three, two of them lies in Red class hence the black dot will also be assigned in red class.
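The four steps above can be sketched directly in plain Python. This is only an illustration of the algorithm, not any library's implementation, and all names (knn_predict, the toy train set) are mine:

```python
import math
from collections import Counter

def euclidean(a, b):
    # Step 3.1: Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: feature tuple."""
    # Steps 3.1-3.2: distance to every training row, sorted ascending.
    by_distance = sorted(train, key=lambda row: euclidean(row[0], query))
    # Step 3.3: keep the top k rows.
    top_k = [label for _, label in by_distance[:k]]
    # Step 3.4: assign the most frequent class among the k nearest.
    return Counter(top_k).most_common(1)[0][0]

train = [((0, 0), "red"), ((0, 1), "red"), ((1, 1), "red"),
         ((5, 5), "blue"), ((5, 6), "blue"), ((6, 5), "blue")]
print(knn_predict(train, (1, 0), k=3))   # red
print(knn_predict(train, (5, 4), k=3))   # blue
```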
As we know K-nearest neighbors (KNN) algorithm can be used for both classification as well as regression. The following are the recipes in Python to use KNN as classifier as well as regressor −
First, start with importing necessary python packages −
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd   # needed for pd.read_csv below
path = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
headernames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']
dataset = pd.read_csv(path, names = headernames)
dataset.head()
Data Preprocessing will be done with the help of following script lines.
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values
Next, we will divide the data into train and test split. Following code will split the dataset into 60% training data and 40% of testing data −
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.40)
Next, data scaling will be done as follows −
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
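For intuition, StandardScaler standardizes each feature column to zero mean and unit variance using the population standard deviation; the arithmetic is just (x - mean) / std. A stdlib-only sketch (the helper name is mine):

```python
import math

def standardize(column):
    # (x - mean) / std, with the population standard deviation,
    # matching StandardScaler's convention.
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

col = [2.0, 4.0, 6.0]        # one feature column
print(standardize(col))      # approximately [-1.2247, 0.0, 1.2247]
```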
Next, train the model with the help of KNeighborsClassifier class of sklearn as follows −
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 8)
classifier.fit(X_train, y_train)
At last we need to make prediction. It can be done with the help of following script −
y_pred = classifier.predict(X_test)
Next, print the results as follows −
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
result = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, y_pred)
print("Classification Report:")
print(result1)
result2 = accuracy_score(y_test, y_pred)
print("Accuracy:", result2)
Confusion Matrix:
[[21  0  0]
 [ 0 16  0]
 [ 0  7 16]]
Classification Report:
                 precision    recall  f1-score   support
    Iris-setosa       1.00      1.00      1.00        21
Iris-versicolor       0.70      1.00      0.82        16
 Iris-virginica       1.00      0.70      0.82        23
      micro avg       0.88      0.88      0.88        60
      macro avg       0.90      0.90      0.88        60
   weighted avg       0.92      0.88      0.88        60
Accuracy: 0.8833333333333333
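The metrics printed above are simple to recompute by hand. A stdlib-only sketch of what confusion_matrix and accuracy_score report, on a toy example (the helper names are mine):

```python
from collections import Counter

def confusion(y_true, y_pred, labels):
    # Rows index the true label, columns the predicted label.
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["setosa", "setosa", "virginica", "versicolor"]
y_pred = ["setosa", "virginica", "virginica", "versicolor"]
labels = ["setosa", "versicolor", "virginica"]
print(confusion(y_true, y_pred, labels))  # [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
print(accuracy(y_true, y_pred))           # 0.75
```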
First, start with importing necessary Python packages −
import numpy as np
data = pd.read_csv(url, names = headernames) array = data.values X = array[:,:2] Y = array[:,2] data.shape output:(150, 5)
Next, import KNeighborsRegressor from sklearn to fit the model −
from sklearn.neighbors import KNeighborsRegressor knnr = KNeighborsRegressor(n_neighbors = 10) knnr.fit(X, y)
At last, we can find the MSE as follows −
print ("The MSE is:",format(np.power(y-knnr.predict(X),2).mean()))
The MSE is: 0.12226666666666669
It is very simple algorithm to understand and interpret.
It is very useful for nonlinear data because there is no assumption about data in this algorithm.
It is a versatile algorithm as we can use it for classification as well as regression.
It has relatively high accuracy but there are much better supervised learning models than KNN.
It is computationally a bit expensive algorithm because it stores all the training data.
High memory storage required as compared to other supervised learning algorithms.
Prediction is slow in case of big N.
It is very sensitive to the scale of data as well as irrelevant features.
The following are some of the areas in which KNN can be applied successfully −
KNN can be used in banking system to predict weather an individual is fit for loan approval? Does that individual have the characteristics similar to the defaulters one?
KNN algorithms can be used to find an individual’s credit rating by comparing with the persons having similar traits.
With the help of KNN algorithms, we can classify a potential voter into various classes like “Will Vote”, “Will not Vote”, “Will Vote to Party ‘Congress’, “Will Vote to Party ‘BJP’.
Other areas in which KNN algorithm can be used are Speech Recognition, Handwriting Detection, Image Recognition and Video Recognition. | https://www.tutorialspoint.com/machine_learning_with_python/machine_learning_with_python_knn_algorithm_finding_nearest_neighbors.htm | CC-MAIN-2021-21 | refinedweb | 956 | 58.18 |
Last Updated on June 23, 2020
Time series datasets may contain trends and seasonality, which may need to be removed prior to modeling.
Trends can result in a varying mean over time, whereas seasonality can result in a changing variance over time, both which define a time series as being non-stationary. Stationary datasets are those that have a stable mean and variance, and are in turn much easier to model.
Differencing is a popular and widely used data transform for making time series data stationary.
In this tutorial, you will discover how to apply the difference operation to your time series data with Python.
After completing this tutorial, you will know:
- The contrast between a stationary and non-stationary time series and how to make a series stationary with a difference transform.
- How to apply the difference transform to remove a linear trend from a series.
- How to apply the difference transform to remove a seasonal signal from a series.
Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
How to Remove Trends and Seasonality with a Difference Transform in Python
Photo by NOAA, some rights reserved.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
- Stationarity
- Difference Transform
- Differencing to Remove Trends
- Differencing to Remove Seasonality
Stationarity
Time.
Making Series Data Stationary
You can check if your time series is stationary by looking at a line plot of the series over time.
Sign of obvious trends, seasonality, or other systematic structures in the series are indicators of a non-stationary series.
A more accurate method would be to use a statistical test, such as the Dickey-Fuller test..
Need help with Deep Learning for Time Series?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Difference Transform
Differencing is a method of transforming a time series dataset.
It can be used to remove the series dependence on time, so-called temporal dependence. This includes structures like trends and seasonality.
Differencing can help stabilize the mean of the time series by removing changes in the level of a time series, and so eliminating (or reducing) trend and seasonality.
— Page 215, Forecasting: principles and practice.
Differencing is performed by subtracting the previous observation from the current observation.
Inverting the process is required when a prediction must be converted back into the original scale.
This process can be reversed by adding the observation at the prior time step to the difference value.
In this way, a series of differences and inverted
Some.
Calculating function below named inverse_difference() inverts the difference operation for a single forecast. It requires that the real observation value for the previous time step also be provided.
Differencing to Remove Trends
In this section, we will look at using the difference transform to remove a trend.
A trend makes a time series non-stationary by increasing the level. This has the effect of varying the mean time series value over time.
The example below applies the difference() function to a contrived dataset with a linearly increasing trend.
Running the example first prints the contrived sequence with a linear trend. Next, the differenced dataset is printed showing the increase by one unit each time step. The length of this sequence is 19 instead of 20 as the difference for the first value in the sequence cannot be calculated as there is no prior value.
Finally, the difference sequence is inverted using the prior values from the original sequence as the primer for each transform.
Differencing to Remove Seasonality
In this section, we will look at using the difference transform to remove seasonality.
Seasonal variation, or seasonality, are cycles that repeat regularly over time.
A repeating pattern within each year is known as seasonal variation, although the term is applied more generally to repeating patterns within any fixed period.
— Page 6, Introductory Time Series with R.
There are many types of seasonality. Some obvious examples include; time of day, daily, weekly, monthly, annually, and so on. As such, identifying whether there is a seasonality component in your time series problem is subjective.
The simplest approach to determining if there is an aspect of seasonality is to plot and review your data, perhaps at different scales and with the addition of trend lines.
The example below applies the difference() function to a contrived seasonal dataset. The dataset includes two cycles of 360 units each.
Running the example first creates and plots the dataset of two cycles of the 360 time step series.
Line plot of a contrived sesonal dataset
Next, the difference transform is applied and the result is plotted. The plot shows 360 zero values with all seasonality signal removed.
In the de-trending example above, differencing was applied with a lag of 1, which means the first value was sacrificed. Here an entire cycle is used for differencing, that is 360 time steps. The result is that the entire first cycle is sacrificed in order to difference the second cycle.
Line plot of the differenced seasonal dataset
Finally, the transform is inverted showing the second cycle with the seasonality restored.
Line plot of the differenced dataset with the inverted difference transform
Further Reading
- Stationary process on Wikipedia
- Seasonal Adjustment on Wikipedia
- How to Check if Time Series Data is Stationary with Python
- How to Difference a Time Series Dataset with Python
- How to Identify and Remove Seasonality from Time Series Data with Python
- Seasonal Persistence Forecasting With Python
Summary
In this tutorial, you discovered the distinction between stationary and non-stationary time series and how to use the difference transform to remove trends and seasonality with Python.
Specifically, you learned:
- The contrast between a stationary and non-stationary time series and how to make a series stationary with a difference transform.
- How to apply the difference transform to remove a linear trend from a series.
- How to apply the difference transform to remove a seasonal signal from a series.
Do you have any questions about making time series stationary?
Ask your questions in the comments and I will do my best to answer.
Excellent article, explains a lot. Just learned about standard deviations so this is a nice compliment to that.
I find it interesting how removing trends and seasonality resembles how some networks (such as convnet) drive towards transforming inputs into (translation) invariant outputs.
On that note, theres a tangentially related article that I thought you might enjoy –
Earl Miller is doing some really fascinating work, and so are you.
Nice!
Incredibly helpful – thanks for this!
Thanks, I’m glad to hear that.
In time series if both trend and seasonality is present then first whether we remove trend and then seasonality or vice varsa to fit the model.
Remove seasonality then trend.
Why does it make a difference which one you remove first?
Because they interact linearly.
How I can remove seasonality without knowing the lag of period?
Why not just find out the structure of the seasonality?
Can you give an example of how you might find the structure or the ideal lag width? If that’s above somewhere I am sorry…
Yes, use a grid search of different lag values and use the value that results in the best performance on your test harness.
Hi Jason!
First, my best wishes for the New Year and thank you again for the helpful post.
I have a question concerning predicting the first order difference and recovering the initial data.
I have a panel data with autocorrelated dependent variable y lagged with 12 time steps with the independent variables X.
After taking the first order difference, I forecast that new dependent variable based on the differentiated independent variables, let’s call these y’ and X’.
y’_{i, t+1} = y_{i, t+1} – y_{i, t}
X’_{i, t} = X_{i, t} – X_{i, t-1}
and
y’_{i, t} = f(X_{i, t-1})
I run my regression model and find an R-squared of 27%.
To recover the prediction of my initial dependent variable my operation is:
\hat{y}'{i, t} = y_{i, t-12} + \sum_{s=t-11}^{t}\hat{y}’_{i, s}
(because I am not supposed to know the real ys for months t-11 through t)
My R-squared drops to -8% while the R-squared for \sum_{s=t-11}^{t}\hat{y}’_{i, s} is around 30%.
I was wondering if you can detect something wrong in that reasoning or if it is something normal.
Also, is there a way to work with non-stationary data and adjust the residual errors in the end to take into account the autocorrelation. I have some other problems in which the dependent variable can be constant for several time steps (a risk measure based on daily observations calculated over a one year time window and the time step is one month).
Thank you!
You can invert the differenced prediction using real or predicted obs, depending on what is available.
Hi Jason,
How to detrend using a scientifically determined trend approximation.
In my case, I have a measure called: GHI, which in simple terms: the amount of sun rays ground receives from the Sun. Used in energy fields of studies especially solar energy.
So, a real GHI measured usings sensors, and a clear_GHI, estimated following The Ineichen and Perez clear sky model.
The trend is very obvious, as you can see here: at least in the location I’m interested in.
First thought, I calculated
ghi / clear_ghi
But this seems very bad, since, values close to zero, at sunrise and sunset, are so fragile to division, as those values might not be so correct.
I think of
clear_ghi – ghi
Thank you so much !
Perhaps you mean seasonal cycle instead of trend?
You can remove the seasonal cycle by seasonal differencing.
Many thanks Jason, your blog is amazing. I will give it a try
Thanks.
Hi Jason,
Thank you for your explination it was very useful.
My q is why we need to remove seasonality and trend..can you summarize the reasons in clear points ?
It makes the problem a lot simpler to model.
The trend and seasonality are the easy parts, so easy that we want to remove them now so we can focus on the hard parts of the problem.
Hi Jason,
Thanks for this great article. I have few doubts and I will try to put them here in words:
1) Most of the practical time series have seasonal patterns and a trend. For making this stationary we need to remove both the seasonality as well as trend. Is it correct?
2) Models like SARIMAX(Seasonal ARIMA) have a parameter ‘d’ for differencing and a seasonal parameter too. So does it mean that the the original time series data can be fed directly to this model and let the ‘d’ difference term remove trend and seasonality parameter take care of the seasonality factor.
3) It gets confusing here because at one point we say to remove both trend and seasonality, and then talk about SARIMAX model which can handle non-stationary data.
I hope you read my confused mind. Looking forward to hear from you.
Best Regards
Chandan
Yes, if the data has trend and seasonality, both should be removed before modeling with a linear algorithm.
Yes, no need to make the data stationary when using SARIMA, as you will specify how to de-trend and de-seasonalize the model as part of the config. You can make it stationary beforehand if you wish.
I hope that helps.
Thanks for replying Sir.
So if I understand this well, we usually remove trend and seasonality by difference(with lag=1 or lag=seasonality), log transforms etc. mainly for two purposes:
– By verifying if the residuals have no pattern and is stationary
– ACF and PACF do not show high variations
And then by looking at the ACF and PACF we choose parameters which we feed to original data series when using SARIMA.
The SARIMA can model the trend and seasonality directly.
I got you Sir. But SARIMA needs to be fed with parameters and I have seen your posts that you use ACF and PACF charts to deduce them.
You can also use grid search:
Hi Jason,
First of all thank you so much for creating these posts!
I was also reading this post of yours () that uses SciKit’s pipelines, and was wondering if there is an estimator for differencing the series and in this case if it would be beneficial or even a good practice to use it in the pipeline.
Kind regards,
João
Good point.
I’ve not seen one, but it would be valuable!
Why do we always prefer stationary data to perform time series analysis?
what is that which refrains us to analyse non- stationary data?
A stationary time series is much simpler to model.
Jason,
Online searches show conflicting results and I would like to know your opinion. If we use MLP, is it necessary to remove trend and seasonality? If yes, then what is the point of actually using ANN, if simple ARIMA (or ARMA) can get the job done?
It is often a good idea and will make the dataset easier to model.
Try with/without for your data and model and compare the results.
Hi Jason,
Can you please help me to find out -How does one invert the differencing after the residual forecast has been made to get back to a forecast including the trend and seasonality that was differenced out? After differencing the original data set 1 time and completing the prediction, how do I invert or reverse the differencing so that I can get the predicted data without differences?
You can invert the difference by adding the removed value and propagating this down the series.
I have an example here you can use:
Hello sir, is differencing the same as sliding window? if not, have any link to learn it ?
No, sliding window is a way to frame a time series problem, differencing is a way to remove trend/seasonality from time series data.
I am not sure if my time series data is has seasonality or trend since it looks kinda like random noise. So I assume the data is stationary already. Does it hurt to perform differencing for stationary data before training, or should I leave differencing out?
Try modeling with an without differencing and compare model error.
Great Article….
1. If our dataset have trend and seasonality but the model perform well on that without removing trend and seasonality. So the question is that should we remove seasonality and trend for that model or not?
2.And plz provide the link of your article of complete project of removing seasonality and trend for model. if any?
If you get better performance by not making the series stationary, then don’t make it stationary. But be sure you are making a fair comparison between the two models.
Yes, see this post:
Excellent piece.
I was wondering though can I apply the same procedure to data that has both seasonality and trend, say stock data for instance?.
Yes – in general. As for stocks, they are not predictable.
hi
I have a time-series data of 65 years. I don’t know the exact periodicity. What should be my approach to remove seasonality from the dataset
Plot the data a few different ways?
Test if it is stationary, and try removing cycles a few diffrent ways and see if it changes the result?
it’s a daily time scale data. It is stationary. Can you suggest some methods for removing the cycles?
Please suggest some literature also if possible.
Seasonal differencing.
Hi Jason,
Thanks for the incredibly useful articles!
I’m developing a time-series forecasting application for 15-minute data. I want to try out differencing (been applying more traditional scaling), but also want to leave out outliers (entire days that do not apply or where the data will corrupt the training), but this then introduces outliers in the differenced set!
How do you suggest I can handle outliers using differencing?
Many Thanks
You’re welcome.
In differencing… hmmm. Probably remove them or replace them with the mean or the last sensible value.
how to difference ts data by using R
Sorry, I don’t have examples of working with time series in R.
Hi Jason!
I have a daily time-scale data series comprising 200 records. In order to plot seasonality, shall i keep freq 1? When i set it 1, i observe a straight line and when i make it 7 then i see an alternating current pattern. Can you please guide, what value of freq should i rely on?
What do you mean by “freq”?
Hi Jason,
Thank you for your excellent article and website. Quite some work you have put into this.
I was just wondering if you would always remove seasonality from your data. Wouldn’t your algorithm get some insight from knowing of this seasonal pattern, when forecasting or am I missing something?
Cheers
Michael
Thanks.
Yes, always remove. It is too simple and distracting to the model. Get the model to predict the hard part of the problem.
so we keep differencing until we notice a linear trend? btw add a grayish border around the input tags to make it visible. I can help with the code if u want.
No, we difference until the data is reported as stationary.
Please explain what is difference between white noise and stationary series …
White noise has no signal. Hopefully the stationary series has some signal in it we can model.
If one variable was differenced in a multivariate forecasting problem, should the variable be inverted back before we use the model to predict?
Why? Sorry, I don’t follow the rationale.
Generally, you can model your problem anyway you wish, discover what works best for your dataset.
I think you would feed the differenced variable to your model, then invert the forecast it provides.
Thanks for all your awesome material. There might be a typo in one of the comments in the first block of code after the title ‘Differencing to Remove Seasonality’ on line 17. It reads..
# define a dataset with a linear trend
but maybe should read say something like
# define a dataset with seasonality
Thanks! Fixed.
Hi Jason, thanks for the valuable info!
I have one question:
If I use either Fourier transform or Wavelet transform instead of differencing transform to denoise my non-stationary data, is it correct to say that I have also make it stationary? or is differencing transform required to make data stationary?
I don’t recall if a FFT makes the series stationary.
Hi Jason,
Thank you for the great post. I have a small question if you can assist me with.
The time-series I am working with clearly has seasonal variation, but the ADF test shows that the data is indeed stationary. so that means I can start with d=0 and D=0?
Please assist!
Thank you
Perhaps try modeling with the data as is and with the data after seasonal differencing or with an SARIMA and compare the predictive skill of each.
Thank you for the great article!
I had to differentiate my Series twice in order to become stationary.
I then fed the double-differentiated Serie to my model and gained a forecast.
Now I struggle to transform the forecast back since I have no values for the one-time differentiated series.
I assume it should be possible with a recursive approach but can’t get it right. Do you have any advice?
I got there eventually by taking more input from the original series, so taking the first n elements of the training and working upwards from there.
Well done!
You must perform the operations in reverse order.
If you’re unsure, see this:
Thank you!
You’re welcome.
Hello Jason, thanks for the great article!
I have a small question, should I consider a trend to be present in a typical DC-Motor current consumption plot (current v/s time), like this one
This can help you create plots:
Hi Jason,
If a dataset needs to be differenced. Do you also have too difference transform? Or when would someone need to use transform in addition to differencing the dataset?
Would one just try both differencing & differencing transform to see what results look best?
You can difference manually. Some models will difference for you, like SARIMA/ARIMA.
Hi Jason,
If I used the Python auto ARIMA
pmdarimawould you know if that differences that data automatically? I know it can select different ARIMA models including SARIMA.
import pmdarima as pm
model = pm.auto_arima(need_to_train, seasonal=True)
Im guessing I shouldn’t be differencing the data if using this, would you have any recommendations?
ARIMA has the “d” argument as input that controls differencing.
I don’t know about “auto_arima”, but I would guess it tests different d values.
Hi Jason,
I noticed that in your other post (LSTM) for time series using machine learning that you also differenced the shampoo sales dataset. Is differencing the data good for ML too? I may try that differencing datasets for ML & Arima. For some reason I thought differencing datasets was only for statistic models like VAR, ARIMA, SARIMA and not ML practices…
Yes, it is a good idea to make a time series stationary before modeling, although each model/dataset us different. Test to confirm.
ARIMA/SARIMA will difference for you as part of the model configuration.
Should people difference time series datasets whether its for ML or ARIMA based on same strategies? (IE, augmented dickey fuller P-value)
Yes, although ARIMA will difference for you.
Hello Jason,
Thank you for your very helpful article.
I have a time series dataset with several features : some features are stationary and some features are not stationary.
I guess I only need to difference the non stationary features right ? The stationary features does not have to be modified ?
Thank you !
You’re welcome.
Yes, it is a good idea to make feature stationary prior to modeling. Differencing is a great way to do that.
Hi Jason,
The dataset that I have is an electricity type data set from a building power meter and I can find I can train a decent NN model with including a lot of weather data and also a lot of one hot encoding dummy variables for time-of-week. (day, hour, month number, etc.)
I am experimenting in Python with the Tensorflow Keras library and I know the default during the training process randomly shuffles the data. Is this a No-No for a time series type problem where the random shuffle will take out the seasonality from the data? (stationary/non-stationary) The results shuffling the data really aren’t that bad at a glance but not-randomly shuffling the data the results for MLP NN are poor, like the model doesn’t train well.
I know some other times series forecast methods can include ARIMA, LSTM, etc. but I was curious to inquire if MLP can be used for these purposes too? What I ultimately need is a short term forecast method that can incorporate hourly weather forecast (from a web API) to forecast future hourly building electricity. Any tips greatly appreciated.
Yes, you can use MLP, CNN and LSTM. It requires first converting the data to a supervised learning problem using a sliding window:
Then evaluating models in a way that respects the temporal ordering of obs, called walk-forward validation:
You can see tens of tutorials and my book on this here:
Cool thanks for all the info. So if I used multivariate sliding window for MLP NN, is Ok when training the model that
shuffle_data == True? or should I not shuffle training data…? Thanks so much!
Yes, as long as all data in the training dataset is in the past compared to the test set.
Hi Jason,
I have a few questions on how to de-seasonalize my multivariate data set.
The data contains some categorical and some continuous features. I can see that there is both a weekly and daily cycle in my target column values – how would I perform two different seasonal differences?
An additional complexity is that other features have different seasonal periods – for example, my “temperature” column has a different period than my energy “usage in kwh” column. How would you proceed in that case? Can you simply perform seasonal adjustment independently across features and then join the features all back together when modelling?
Thanks for your articles and your continual answers to questions – I have been referencing them heavily during this project!
You can apply weekly differencing first, plot and if the daily cycle is still obvious, apply daily differencing.
You would apply differencing per variable based on the cycles you observe for that variable.
Thanks for the help Jason! I have another related question – some of my features are things like “day of week”, which is a categorical value. I am a bit confused how to how de-seasonalize such data since it is by nature cyclical – should I just remove the entire feature itself?
No need to difference categorical data, I don’t think the concept makes sense.
In order to find seasonality and forecast:
1_ can we Fourier transfer the series, find freq components, add them, inverse Fourier transfer the added components to have seasonality, subtract seasonality from the series, forecast subtracted series, and add back the seasonality to forecast?
2_ while we can easily have seasonality component from decomposition, why people do other approaches to find it? It is not accurate?
Simple seasonal differencing works great too.
Hello. These posts have been very useful to me.
In my study, I have a multivariate time series, some variables are stationary and others aren’t. When transforming with simple difference ( df.diff() ) the whole data, I question myself: is it bad to transform data already stationary? But transforming just some does not seem reasonable… The other thing is that, after the transform, some variables become flat (almost constant throughout time). Does this mean that I should transform data with for example more lag? yearly?
You’re welcome.
There’s no need to difference data that is already stationary. Not bad, but you are adding unneeded complexity.
Yes, but some of the variables are not stationary. Thus my thought, should I transform the whole dataset when only some variables are not stationary?
Thank you so much
Probably not, if some variables are not stationary, consider making them stationary and compare the performance of the model fit on the raw data directly.
I continually read about stationarity testing in the context of time series forecasting. Do you need to test and account for stationarity only in the context of time series classification? Or is it just a test and see type of scenario? I can’t seem to find any authoritative literature that states the underlying assumptions of classification models that require stationarity.
Stationarity is usually an assumption when we apply ARMA model, for example. In simple regression, we also care about stationarity because of the i.i.d. assumption.
How to get back the original values from the first difference? As I am trying to forecast the future values of commodity price using ARIMA and as the dataset was non-stationary, I took the first difference of the price and trained the model on the first difference. From the predicted output, how to get back the original forecasted price? I am really confused about this part.
Hi Anupam…The following discussion may prove beneficial:
Hello. When i make my data stationary with this method, im having a lot of zeroes and my prediction is being flat (LSTM) but if i run this multiple times my p value is being 0 (rounded probably) and my predictions are being much better. How would you explain that?
Thanks!
Hi McanP…Please review the following for more insight: | https://machinelearningmastery.com/remove-trends-seasonality-difference-transform-python/ | CC-MAIN-2022-27 | refinedweb | 4,719 | 55.64 |
Hello again everyone. I can't seem to decide what would be better in this situation. I want to give back to Dream in Code but I can never think up a good idea, until now. I want to do a tutorial on using the System.Security.Cryptography namespace in C#. But I am not sure how to structure it. Originally I just planned on showing how to use the AES symmetric cipher to encrypt and decrypt. As I explore more and more of the class though I have found other useful features like password generators and crypto hash functions. So to help me help some of you, what do you think I should do?
1.Just do the AES encrypt decrypt.
2.A multi part tutorial like one on hashing and another on symmetric vs public key crypto.
3.Or a rather large one kind of introducing System.Security.Cryptography.
Or if anyone has a specific one please post it here.
Thanks for your help, I can help or teach some people with this.
Giving back to DIC
Page 1 of 1
Tutorial question
2 Replies - 1475 Views - Last Post: 14 February 2010 - 09:22 PM
#1
Giving back to DIC
Posted 14 February 2010 - 07:10 PM
Replies To: Giving back to DIC
#2
Re: Giving back to DIC
Posted 14 February 2010 - 07:52 PM
I think the multi-parter would be good. Start with one on the AES algorithm, explaining how that works. Then for another, talk about the various encryption styles (symmetric, hashing, public-private key) and when is appropriate to implement each.
#3
Re: Giving back to DIC
Posted 14 February 2010 - 09:22 PM
Thank you maco sounds like a good plan.
Page 1 of 1 | http://www.dreamincode.net/forums/topic/156035-giving-back-to-dic/ | CC-MAIN-2016-22 | refinedweb | 293 | 73.47 |
.ui form conversion into C++ header file by UIC
CHALLENGE (problem):
#include "ui_Central.h"
Issues: ui_Central.h: No such file or directory
I am following a tutorial for my first Qt application:
I used the code exactly as it is BUT I renamed myqtapp to Central in every occurrence.
After about 40 hours straight and downloading many of the various selections from the Qt website, I finally discovered that the downloads are specific to the compiler to be used. The download version I am currently using is qt-enterprise-windows-x86-mingw-w64-492-5.5.1.exe
Please, what do I need to do to correct this error?
Any and all suggestions greatly appreciated.
@TahorSuiJurisBenYAH
Hello,
Could you please provide your .pro file?
Kind regards.
Wow, thank you, shall do. I just redid the entire tutorial in the original form as provided.
how do I attach the files or code?
@TahorSuiJurisBenYAH
Just paste your .pro file as a post here between three backticks: ```. This will generate a code block and everything should be clearly visible.
Thank you! for your kindness.
it is the same as in the tuturial url
HEADERS = Central.h SOURCES = Central.cpp main.cpp FORMS = Central.ui # install target.path = Central sources.files = $$SOURCES $$HEADERS $$RESOURCES $$FORMS *.pro sources.path = . INSTALLS += target sources
- kshegunov Qt Champions 2016
@TahorSuiJurisBenYAH
Hello,
Is this your whole project file, it seems a bit strange. For example a pro file from a project of mine looks like this:
QT += core gui widgets CONFIG += link_prl TARGET = ageditor TEMPLATE = app win32 { DESTDIR = $$OUT_PWD //< This is because I'm using shadow builds } SOURCES += \ ageditormain.cpp \ ageditorwindow.cpp HEADERS += \ ageditorwindow.h FORMS += \ ageditorwindow.ui
You don't seem to have put your full Qt configuration. Could you add these lines:
QT += core gui widgets TARGET = Central TEMPLATE = app
Rerun qmake and see what you get.
Also this part:
target.path = Central sources.files = $SOURCES $HEADERS $RESOURCES $FORMS *.pro sources.path = . INSTALLS += target sources
I don't really understand.
As a side note, what IDE are you using for development?
Kind regards.
This is the IDE:
Filename: qt-enterprise-windows-x86-mingw-w64-492-5.5.1.exe
Description: Qt Enterprise offline installer for Windows host operating system. The package provides everything you need to start Qt development:
@TahorSuiJurisBenYAH
Hello,
This is the IDE:
Filename: qt-enterprise-windows-x86-mingw-w64-492-5.5.1.exe
Well this is an SDK package that contains the MinGW 65 bit compiler, Qt 5.5.1 and (probably) QtCreator. However you're not bound in any way to the QtCreator IDE, that's why I'm asking. You could be using some other tool to build your projects. In any case, I'd assume you're using the QtCreator, but that notwithstanding, you should still at least add the
QT +=configuration in your project.
Kind regards.
ok how do I do that please
@TahorSuiJurisBenYAH
You just edit your .pro file (double-clicking on it in the project view) and edit it manually. Look inhere I think it'll help you set your project up.
Kind regards.
The whole thing is in Qt Creator. That is not the problem, the problem is why ui_Central.h is not being created.
@TahorSuiJurisBenYAH
Hello,
It should be, provided your form is called
Central.uiand you have the line
FORMS += Central.uiin your project file the
uicis supposed to create
ui_Central.hfor you when
qmakeruns. Maybe there's some problem with the path. Could you check whether the
ui_Central.his created or not in your build folder?
Kind regards. | https://forum.qt.io/topic/63886/ui-form-conversion-into-c-header-file-by-uic | CC-MAIN-2018-05 | refinedweb | 594 | 69.79 |
13.12. [Gatys et al., 2016]. Here, we need two input images, one content image and one style image. We use a neural network to alter the content image so that its style mirrors that of the style image. In Fig. 13.12.1,.
13.12.1. Technique¶
The CNN-based style transfer model is shown in Fig. 13.12.2. Fig. 13.12.2, the pre-trained. 13.12.2 CNN-based style transfer process. Solid lines show the direction of forward propagation and dotted lines show backward propagation.¶
Next, we will perform an experiment to help us better understand the technical details of style transfer.
13.12.2. Reading the Content and Style Images¶
First, we read the content and style images. By printing out the image coordinate axes, we can see that they have different dimensions.
%matplotlib inline import d2l from mxnet import autograd, gluon, image, init, np, npx from mxnet.gluon import nn npx.set_np() d2l.set_figsize((3.5, 2.5)) content_img = image.imread('../img/rainier.jpg') d2l.plt.imshow(content_img.asnumpy());
style_img = image.imread('../img/autumn_oak.jpg') d2l.plt.imshow(style_img.asnumpy());
13.12.
rgb_mean = np.array([0.485, 0.456, 0.406]) rgb_std = np.array([0.229, 0.224, 0.225]) def preprocess(img, image_shape): img = image.imresize(img, *image_shape) img = (img.astype('float32') / 255 - rgb_mean) / rgb_std return np.expand_dims(img.transpose(2, 0, 1), axis=0) def postprocess(img): img = img[0].as_in_context(rgb_std.context) return (img.transpose(1, 2, 0) * rgb_std + rgb_mean).clip(0, 1)
13.12.4. Extracting Features¶
We use the VGG-19 model pre-trained on the ImageNet dataset to extract image features[1].
pretrained_net = gluon Section 7.2,
13.12.5. Defining the Loss Function¶
Next, we will look at the loss function used for style transfer. The loss function includes the content loss, style loss, and total variation loss.
13.12.
def content_loss(Y_hat, Y): return np.square(Y_hat - Y).mean()
13.12 \(\mathbf{X}\), which has \(c\) rows and
\(h \cdot w\) columns. You can think of matrix \(\mathbf{X}\) as
the combination of the \(c\) vectors
\(\mathbf{x}_1, \ldots, \mathbf{x}_c\), which have a length of
\(hw\). Here, the vector \(\mathbf{x}_i\) represents the style
feature of channel \(i\). In the Gram matrix of these vectors
\(\mathbf{X}\mathbf{X}^\top \in \mathbb{R}^{c \times c}\), element
\(x_{ij}\) in row \(i\) column \(j\) is the inner product of
vectors \(\mathbf{x}_i\) and \(\mathbf\).
def gram(X): num_channels, n = X.shape[1], X.size // X.shape[1] X = X.reshape(num_channels, n) return np.
def style_loss(Y_hat, gram_Y): return np.square(gram(Y_hat) - gram_Y).mean()
13.12.
def tv_loss(Y_hat): return 0.5 * (np.abs(Y_hat[:, :, 1:, :] - Y_hat[:, :, :-1, :]).mean() + np.abs(Y_hat[:, :, :, 1:] - Y_hat[:, :, :, :-1]).mean())
13.12.5.4. The Loss Function¶
The loss function for style transfer is the weighted sum of the content loss, style loss, and total variance loss. By adjusting these weight hyper-parameters, we can balance the retained content, transferred style, and noise reduction in the composite image according to their relative importance. = sum(styles_l + contents_l + [tv_l]) return contents_l, styles_l, tv_l, l
13.12.6. Creating and Initializing the Composite Image¶
In style transfer, the composite image is the only variable that needs
to be updated. Therefore, we can define a simple model,
GeneratedImage, and treat the composite image as a model parameter.
In the model, forward computation only returns the model parameter.
13.12.7. Training¶
During model training, we constantly extract the content and style
features of the composite image and calculate the loss function. Recall
our discussion of how synchronization functions force the front end to
wait for computation results in Section 12.2. Because we only
call the
asscalar synchronization function every 50 epochs, the
process may occupy a great deal of memory. Therefore, we call the
waitall synchronization function during every epoch.
def train(X, contents_Y, styles_Y, ctx, lr, num_epochs, lr_decay_epoch): X, styles_Y_gram, trainer = get_inits(X, ctx, lr, styles_Y) animator = d2l.Animator(xlabel='epoch', ylabel='loss', xlim=[1, num_epochs], legend=['content', 'style', 'TV'], ncols=2, figsize=(7, 2.5)) for epoch in range(1, num_epochs+1): with autograd.record(): contents_Y_hat, styles_Y_hat = extract_features( X, content_layers, style_layers) contents_l, styles_l, tv_l, l = compute_loss( X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram) l.backward() trainer.step(1) npx.waitall() if epoch % lr_decay_epoch == 0: trainer.set_learning_rate(trainer.learning_rate * 0.1) if epoch % 10 == 0: animator.axes[1].imshow(postprocess(X).asnumpy()) animator.add(epoch, [float(sum(contents_l)), float(sum(styles_l)), float(tv_l)]) return X
Next, we start to train the model. First, we set the height and width of the content and style images to 150 by 225 pixels. We use the content image to initialize the composite image.)
As you can see, the composite image retains the scenery and objects of the content image, while introducing the color of the style image. Because the image is relatively small, the details are a bit fuzzy.
To obtain a clearer composite image, we train the model using a larger image size: \(900 \times 600\). We increase the height and width of the image used before by a factor of four and initialize a larger composite image.
image_shape = (900, 600) _, content_Y = get_contents(image_shape, ctx) _, style_Y = get_styles(image_shape, ctx) X = preprocess(postprocess(output) * 255, image_shape) output = train(X, content_Y, style_Y, ctx, 0.01, 300, 100) d2l.plt.imsave('../img/neural-style.png', postprocess(output).asnumpy())
As you can see, each epoch takes more time due to the larger image size. As shown in Fig. 13.12.3, the composite image produced retains more detail due to its larger size. The composite image not only has large blocks of color like the style image, but these blocks even have the subtle texture of brush strokes.
13.12.
13.12.9. Exercises¶
How does the output change when you select different content and style layers?
Adjust the weight hyper-parameters in the loss function. Does the output retain more content or have less noise?
Use different content and style images. Can you create more interesting composite images? | https://d2l.ai/chapter_computer-vision/neural-style.html | CC-MAIN-2019-51 | refinedweb | 1,018 | 59.7 |
Hi, folks, due to lack of enough man power and build machines for 3 mips* port at the same time, I think that now it is time for us to have a talk about dropping mips32eb support now.
mips32eb, named mips, in our namespace, is used by few people now, at least compare with mipsel/mips64el. The reason we keep it till now is 1) some people are still using it. 2) it is the only port 32bit and EB now. In fact I don't know anybody is using Debian's mips32eb port. If you are using it, please tell us. | https://www.mail-archive.com/debian-s390@lists.debian.org/msg04036.html | CC-MAIN-2019-13 | refinedweb | 102 | 88.57 |
01 Variables Introduction
In computing, you need to store and retrieve data. Data is stored in the memory of the computer. To retrieve data, you need to know where the data is stored in the memory. Variables help you access data stored in the memory. You can think of them as placeholders.
Let’s look at an example to understand the concept of variables:
In arithmetic, let’s consider you are given the following equation:
z=x+y; where x=10 and y=5
In the expression z=x+y, you substitute 10 in place of x and 5 in place of y.
z= 10+5, which is 15
Thus, x, y, and z are place holders.
In programming, these place holders are referred to as variables.
In C++ programming, a variable is a name given to a memory location. The value assigned toa variable is stored in the memory allocated to that variable.
A diagrammatic representation of the concept of variables stored in memory is as follows:
Another example, in programming, is of an employee database. An employee database contains data such as, Employee Name, Age, Salary, and Date of Joining and so on. In C++, to store this data, you would declare the following as variables:
- EmployeeName
- Age
- Salary
- DateofJoining
1.1 Declaration Statement
A declaration statement specifies the variable name and its data type. In C++, you must declare a variable before you can use it in your program. Based on the data type of the variable, memory is allocated.
Let’s say you want to add two whole numbers and display the result onthe screen. In this case, you need to store three values – the two numbers that you want to add and their sum. Thus, you need three variables; sum= x+y
In C++, you declare the variables as follows:
int x; // declaration statement
int y; //declaration statement
int sum; //declaration statement
1.1.1 Declaring Multiple Variables
You can declare multiple variables in the same line as follows:
int x, y, sum;
1.2 Assignment Statement
An assignment statement is used to initialize variables. For example,
x=5;
y=7;
Let’s say that for an employee database application, you need to set the basic salary to specific value. In C++, you declare a variable that stores the basic salary, and assign it the required value as follows:
int basicsalary; //declaration Statement
basicsalary = 5000; //assignment statement
Alternatively, you can also write the following:
int basicsalary = 5000;
In the above statement, we have declared a variable and assigned a value to it the same line.
1.2.1 Unassigned Variables
A variable that has been declared but not assigned a value is known as an unassigned variable. If you use such a variable in an expression, then the compiler throws an error.
Let’s look at an example:
int basicsalary;
int bonus;
int totalsalary
basicsalary = 5000;
totalsalary= basicsalary+bonus;
In the above statements, the variable bonus is an unassigned variable, and will throw a compile-time error.
Thus, it’s a good practice to assign values to variables whenever they are declared.
1.3 Variable Scope
In C++, you can declare variables anywhere in the program. The scope of a variable determines for how long the variable exists in the memory of the computer. If a variable doesn’t exist, you obviously can’t use it in your program.
Based on scope, variables are divided into two categories:
1.3.1 Local Variables
A variable that is declared within a function is known as a local variable. Such a variable exists in the memory as long as the function is being executed. Once the execution of the function is complete, the variable no longer exists in the memory.
If you try to reference a local variable outside the function it was declared in, it generates an error.
Let’s look at C++ program in which you create a function that prints your name.
void displayname() { string name; // local variable name = Julia; cout<<”My name is<<name<<endl; } #include <iostream.h> void main() { displayname(); }
The output of the above program is as follows:
My name is Julia
In this the variable name is a local variable because it has been declared within the function displayname.
Now, let’s try to use the variable name in the main function:
void displayname() { string name; // local variable name = Julia; cout<<”My name is<<name<<endl; } #include <iostream.h> void main() { displyname(); cout<<”Access local variable<<name<<endl; }
The cout statement will throw an error. This is because, after the execution of the displayname function, the variable name no longer exists in the memory. Hence, the compiler cannot locate the variable name and throws an error.
1.3.2 Global Variables
A variable that is declared in main() is known as global variable. Such a variable exits throughout the execution of a program. In addition, other functions within main() can reference a global variable.Let’s look at C++ program in which you create a function that prints your name. However,in this program, you will declare the variable name as a global variable.
#include <iostream.h> void main() { string name; name = Julia; void displayname() { cout<<”My name is”<<name<<endl; } cout<<”Accessing global variable”<<name<<endl; }
In the above program, the variable name is a global variable. This is because the variable has been declared in main and not within the displayname() function. The function displayname() can access the variable name. Also the cout statement does not generate any error, because the variable exists in the memory even after the execution of the displayname() function.
1.4 C++ Program Example
A program in C++, to add two integers (whole numbers) is as follows:
void main() { int x; //variable declaration int y; int sum; x=5; //assignment statement y=19; sum= x+y; //adding the values stored and storing the result cout<<sum<<endl; }
Understanding the statements in the above program:
int x;
int y;
int sum;
The above statements are declaration statements.
For more information, see Declaration Statement.
x=5;
y=19;
sum=x+y;
The above statements are assignment statements. The variable sum stores the result of addition. The value stored in sum is 24 (9+15).
cout<<sum<<endl
The cout statement displays the value stored in the variable sum on the screen.
1.5 Variable sizes and the size of operator | http://www.wideskills.com/c-plusplus/introduction-to-c-plusplus-variables | CC-MAIN-2018-09 | refinedweb | 1,065 | 62.48 |
Jetson Nano Developer Kit announced at the 2019 GTC for $99 brings a new rival to the arena of edge computing hardware alongside its more pricy predecessors, Jetson TX1 and TX2. The coming of Jetson Nano gives the company a competitive advantage over other affordable options, to name a few, Movidius neural compute stick, Intel Graphics running OpenVINO and Google edge TPU.
In this post, I will show you how to run a Keras model on the Jetson Nano.
Here is a break down of how to make it happen.
We will do the first step on a development machine since it is computational and resource intensive way beyond what Jetson Nano can handle.
Let's get started.
Follow the official getting started guide to flash the latest SD card image, setup, and boot.
One thing to keep in mind, Jetson Nano doesn't come with WIFI radio as the latest Raspberry Pi does, so it is recommended to have a USB WIFI dongle like this one ready unless you plan to hardwire its ethernet jack instead.
There is a thread on the Nvidia developer forum about official support of TensorFlow on Jetson Nano, here is a quick run down how you can install it.
Start a terminal or SSH to your Jetson Nano, then run those commands.
sudo apt update sudo apt install python3-pip libhdf5-serial-dev hdf5-tools pip3 install --extra-index-url tensorflow-gpu==1.13.1+nv19.3 --user
In case you get into the error below,
Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel
Try run
sudo apt install python3.6-dev
The Python3 might gets updated to a later version in the future. You can always check your version first with python3 --version, and change the previous command accordingly.
It is also helpful to install Jupyter Notebook so you can remotely connect to it from a development machine.
pip3 install jupyter
Also, notice that Python OpenCV version 3.3.1 is already installed which ease a lot of pain from cross compiling. You can verify this by importing the cv2 library from the Python3 command line interface.
Run this step on your development machine with Tensorflow nightly builds which include TF-TRT by default or run on this Colab notebook's free GPU.
First lets loads a Keras model. For this tutorial, we use pre-trained MobileNetV2 came with Keras, feel free to replace it with your custom model when necessary.
import os from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2 as Net model = Net(weights='imagenet') os.makedirs('./model', exist_ok=True) # Save the h5 file to path specified. model.save("./model/model.h5")
Once you have the Keras model save as a single
.h5 file, you can freeze it to a TensorFlow graph for inferencing.
Take notes of the input and output nodes names printed in the output. We will need them when converting
TensorRT inference graph and prediction.
For Keras MobileNetV2 model, they are,
['input_1'] ['Logits/Softmax']._names = [t.op.name for t in model.inputs] output_names = [t.op.name for t in model.outputs] # Prints input and output nodes names, take notes of them. print(input_names, output_names) frozen_graph = freeze_graph(session.graph, session, [out.op.name for out in model.outputs], save_pb_dir=save_pb_dir)
Normally this frozen graph is what you use for deploying. However, it is not optimized to run on Jetson Nano for both speed and resource efficiency wise. That is what TensorRT comes into play, it quantizes the model from FP32 to FP16, effectively reducing the memory consumption. It also fuses layers and tensor together which further optimizes the use of GPU memory and bandwidth. All this come with little or no noticeable reduced accuracy.
And this can be done in a single call,
import tensorflow.contrib.tensorrt as trt trt_graph = trt.create_inference_graph( input_graph_def=frozen_graph, outputs=output_names, max_batch_size=1, max_workspace_size_bytes=1 << 25, precision_mode='FP16', minimum_segment_size=50 )
The result is also a TensorFlow graph but optimized to run on your Jetson Nano with TensorRT. Let's save it as a single
.pb file.
graph_io.write_graph(trt_graph, "./model/", "trt_graph.pb", as_text=False)
Download the TensorRT graph
.pb file either from colab or your local machine into your Jetson Nano. You can use scp/sftp to remotely copy the file. For Windows, you can use WinSCP, for Linux/Mac you can try scp/sftp from the command line.
On your Jetson Nano, start a Jupyter Notebook with command
jupyter notebook --ip=0.0.0.0 where you have saved the downloaded graph file to
./model/trt_graph.pb. The following code will load the TensorRT graph and make it ready for inferencing.
The output and the input names might be different for your choice of Keras model other than the MobileNetV2.
output_names = ['Logits/Softmax'] input_names = ['input_1'] import tensorflow as tf def get_frozen_graph(graph_file): """Read Frozen Graph file from disk.""" with tf.gfile.FastGFile(graph_file, "rb") as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) return graph_def trt_graph = get_frozen_graph('./model/trt_graph.pb') # Create session and load graph tf_config = tf.ConfigProto() tf_config.gpu_options.allow_growth = True tf_sess = tf.Session(config=tf_config) tf.import_graph_def(trt_graph, name='') # Get graph input size for node in trt_graph.node: if 'input_' in node.name: size = node.attr['shape'].shape image_size = [size.dim[i].size for i in range(1, 4)] break print("image_size: {}".format(image_size)) # input and output tensor names. input_tensor_name = input_names[0] + ":0" output_tensor_name = output_names[0] + ":0" print("input_tensor_name: {}\noutput_tensor_name: {}".format( input_tensor_name, output_tensor_name)) output_tensor = tf_sess.graph.get_tensor_by_name(output_tensor_name)
Now, we can make a prediction with an elephant picture and see if the model gets it correctly.
from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions # Optional image to test model prediction. img_path = './data/elephant.jpg' img = image.load_img(img_path, target_size=image_size[:2]) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) feed_dict = { input_tensor_name: x } preds = tf_sess.run(output_tensor, feed_dict) # decode the results into a list of tuples (class, description, probability) # (one such list for each sample in the batch) print('Predicted:', decode_predictions(preds, top=3)[0])
Let's run the inferencing several times and see how fast it can go.
import time times = [] for i in range(20): start_time = time.time() one_prediction = tf_sess.run(output_tensor, feed_dict) delta = (time.time() - start_time) times.append(delta) mean_delta = np.array(times).mean() fps = 1 / mean_delta print('average(sec):{:.2f},fps:{:.2f}'.format(mean_delta, fps))
It got a 27.18 FPS which can be considered prediction in real time. In addition, the Keras model can inference at 60 FPS on Colab's Tesla K80 GPU, which is twice as fast as Jetson Nano, but that is a data center card.
In this tutorial, we walked through how to convert, optimized your Keras image classification model with TensorRT and run inference on the Jetson Nano dev kit. Now, try another Keras ImageNet model or your custom model, connect a USB webcam/ Raspberry Pi camera to it and do a real-time prediction demo, be sure to share your results with us in the comments below.
In the future, we will look into running models for other applications, such as object detection. If you are interested in other affordable edge computing options, check out my previous post, how to run Keras model inference x3 times faster with CPU and Intel OpenVINO also works for Movidius neural compute stick on Linux/Windows and Raspberry Pi. | https://www.dlology.com/blog/how-to-run-keras-model-on-jetson-nano/ | CC-MAIN-2019-51 | refinedweb | 1,230 | 50.63 |
Job scheduling rules can be used to control when your jobs run in relation to other jobs. In particular, scheduling rules allow you to prevent multiple jobs from running concurrently in situations where concurrency can lead to inconsistent results. They also allow you to guarantee the execution order of a series of jobs. The power of scheduling rules is best illustrated by an example. Let's start by defining two jobs that are used to turn a light switch on and off concurrently:
public class LightSwitch { private boolean isOn = false; public boolean isOn() { return isOn; } public void on() { new LightOn().schedule(); } public void off() { new LightOff().schedule(); } class LightOn extends Job { public LightOn() { super("Turning on the light"); } public IStatus run(IProgressMonitor monitor) { System.out.println("Turning the light on"); isOn = true; return Status.OK_STATUS; } } class LightOff extends Job { public LightOff() { super("Turning off the light"); } public IStatus run(IProgressMonitor monitor) { System.out.println("Turning the light off"); isOn = false; return Status.OK_STATUS; } } }
Now we create a simple program that creates a light switch and turns it on and off again:
LightSwitch light = new LightSwitch(); light.on(); light.off(); System.out.println("The light is on? " + switch.isOn());
If we run this little program enough times, we will eventually obtain the following output:
Turning the light off Turning the light on The light is on? true
How can that be? We told the light to turn on and then off, so its final state should be off! The problem is that there was nothing preventing the LightOff job from running at the same time as the LightOn job. So, even though the "on" job was scheduled first, their concurrent execution means that there is no way to predict the exact execution order of the two concurrent jobs. If the LightOff job ends up running before the LightOn job, we get this invalid result. What we need is a way to prevent the two jobs from running concurrently, and that's where scheduling rules come in.
We can fix this example by creating a simple scheduling rule that acts as a mutex (also known as a binary semaphore):
class Mutex implements ISchedulingRule { public boolean isConflicting(ISchedulingRule rule) { return rule == this; } public boolean contains(ISchedulingRule rule) { return rule == this; } }
This rule is then added to the two light switch jobs from our previous example:
public class LightSwitch { final MutextRule rule = new MutexRule(); ... class LightOn extends Job { public LightOn() { super("Turning on the light"); setRule(rule); } ... } class LightOff extends Job { public LightOff() { super("Turning off the light"); setRule(rule); } ... } }
Now, when the two light switch jobs are scheduled, the job infrastructure will call the isConflicting method to compare the scheduling rules of the two jobs. It will notice that the two jobs have conflicting scheduling rules, and will make sure that they run in the correct order. It will also make sure they never run at the same time. Now, if you run the example program a million times, you will always get the same result:
Turning the light on Turning the light off The light is on? false
Rules can also be used independently from jobs as a general locking mechanism. The following example acquires a rule within a try/finally block, preventing other threads and jobs from running with that rule for the duration between invocations of beginRule and endRule.
IJobManager manager = Platform.getJobManager(); try { manager.beginRule(rule, monitor); ... do some work ... } finally { manager.endRule(rule); }
You should exercise extreme caution when acquiring and releasing scheduling rules using such a coding pattern. If you fail to end a rule for which you have called beginRule, you will have locked that rule forever.
Although the job API defines the contract of scheduling rules, it does not actually provide any scheduling rule implementations. Essentially, the generic infrastructure has no way of knowing what sets of jobs are ok to run concurrently. By default, jobs have no scheduling rules, and a scheduled job is executed as fast as a thread can be created to run it.
When a job does have a scheduling rule, the isConflicting method is used to determine if the rule conflicts with the rules of any jobs that are currently running. Thus, your implementation of isConflicting can define exactly when it is safe to execute your job. In our light switch example, the isConflicting implementation simply uses an identity comparison with the provided rule. If another job has the identical rule, they will not be run concurrently. When writing your own scheduling rules, be sure to read and follow the API contract for isConflicting carefully.
If your job has several unrelated constraints, you can compose multiple scheduling rules together using a MultiRule. For example, if your job needs to turn on a light switch, and also write information to a network socket, it can have a rule for the light switch and a rule for write access to the socket, combined into a single rule using the factory method MultiRule.combine.
We have discussed the isConflicting method on ISchedulingRule, but thus far have not mentioned the contains method. This method is used for a fairly specialized application of scheduling rules that many clients will not require. Scheduling rules can be logically composed into hierarchies for controlling access to naturally hierarchical resources. The simplest example to illustrate this concept is a tree-based file system. If an application wants to acquire an exclusive lock on a directory, it typically implies that it also wants exclusive access to the files and sub-directories within that directory. The contains method is used to specify the hierarchical relationship among locks. If you do not need to create hierarchies of locks, you can implement the contains method to simply call isConflicting.
Here is an example of a hierarchical lock for controlling write access to java.io.File handles.
public class FileLock implements ISchedulingRule { private String path; public FileLock(java.io.File file) { this.path = file.getAbsolutePath(); } public boolean contains(ISchedulingRule rule) { if (this == rule) return true; if (rule instanceof FileLock) return ((FileLock)rule).path.startsWith(path); if (rule instanceof MultiRule) { MultiRule multi = (MultiRule) rule; ISchedulingRule[] children = multi.getChildren(); for (int i = 0; i < children.length; i++) if (!contains(children[i])) return false; return true; } return false; } public boolean isConflicting(ISchedulingRule rule) { if (!(rule instanceof FileLock)) return false; String otherPath = ((FileLock)rule).path; return path.startsWith(otherPath) || otherPath.startsWith(path); } }
The contains method comes into play if a thread tries to acquire a second rule when it already owns a rule. To avoid the possibility of deadlock, any given thread can only own one scheduling rule at any given time. If a thread calls beginRule when it already owns a rule, either through a previous call to beginRule or by executing a job with a scheduling rule, the contains method is consulted to see if the two rules are equivalent. If the contains method for the rule that is already owned returns true, the beginRule invocation will succeed. If the contains method returns false an error will occur.
To put this in more concrete terms, say a thread owns our example FileLock rule on the directory at "c:\temp". While it owns this rule, it is only allowed to modify files within that directory subtree. If it tries to modify files in other directories that are not under "c:\temp", it should fail. Thus a scheduling rule is a concrete specification for what a job or thread is allowed or not allowed to do. Violating that specification will result in a runtime exception. In concurrency literature, this technique is known as two-phase locking. In a two-phase locking scheme, a process much specify in advance all locks it will need for a particular task, and is then not allowed to acquire further locks during the operation. Two-phase locking eliminates the hold-and-wait condition that is a prerequisite for circular wait deadlock. Therefore, it is impossible for a system using only scheduling rules as a locking primitive to enter a deadlock. | https://help.eclipse.org/neon/topic/org.eclipse.platform.doc.isv/guide/runtime_jobs_rules.htm | CC-MAIN-2017-17 | refinedweb | 1,337 | 54.02 |
Important Programming Concepts (Even on Embedded Systems) Part II: Immutability

The impact of immutability on software design can differ quite a bit, depending on the type of computing platform, and especially whether dynamic memory or concurrent memory access are involved.
When I took my first computer science course in college, the instructors extolled the virtues of immutable data. Here’s the message of the Functional Wizards: functions that alter data are flawed and evil, and must be marked (e.g. the exclamation point at the end of set! in Scheme), like the Mark of Cain or the Mark of the Beast; functions that do not alter data are pure and good. I thought they were kind of extreme. (The FW is the guy in the corner at the party who walks up to you and says “Hey, I’ll bet you didn’t realize you’ve been using monads all along!” And when you hear the word “monad”, that’s your cue to run away. Or at least play the Monad Drinking Game and down another shot of Scotch.)
But in the intervening years, I’ve come around a bit to their view, and have become a Functional Wannabe. Yeah, mutability has its place, but there are some real advantages to striving towards immutability, and again, it really depends on the type of programming environment you’re in: if the cost of leaving the input data untouched and returning new output data is high, you’re often stuck with mutable data; otherwise, you really should make data immutable wherever possible.
This assumes you have a choice. Let’s forget for a moment about how you can make your programs use immutable data, and first just focus on the difference.
The Immutable and the Mutable
Consider two pieces of data, the first immutable and the second mutable.
- The text of the Declaration of Independence
- The real-time contents of a stock portfolio, namely an unordered list of stock symbols, each with the number of shares owned.
Here’s what you can do with the Declaration of Independence, and what you can’t do with the stock portfolio:
Use copies of it rather than the original. If all we care about is the text, we don’t care whether it’s in one area of memory or another. If there are seven areas of our program that need access to the text of the Declaration of Independence, we don’t care if all seven refer to one chunk of memory, or three of them refer to one chunk and the other four another chunk, or if each has its own copy. With the stock portfolio, if we make a copy, the copy is going to be out of date and will not be updated if there are any changes. Any piece of software that wants access to the stock portfolio must get it from its source.
Use the original rather than copies. If no one can change the Declaration of Independence, it’s safe to give out references to it all day. (“Hey bud, wanna see the original copy of the Declaration of Independence? Just go to this bus station locker and look inside; here’s the combination....”) Whereas with the stock portfolio, if we want to give read-only access to other software programs, we have to be paranoid and create a copy, so that we can prevent malicious programs from altering the original.
Share access to it without any worry about the problems of concurrency. We can let a thousand different software routines access the text of the Declaration of Independence concurrently. It doesn’t matter. None of them are going to be able to change it, so there’s no risk of data corruption. The stock portfolio is another story. We can let only one software task at a time modify it, and while that modification is taking place, we can’t allow other software tasks to read it.
Simplify expressions at compile time. In compiled software, the compiler can precompute the number of words in the Declaration of Independence; we don’t have to execute operations at runtime to compute the number of words. But we can’t count the number of stocks in the portfolio at compile time, because it can change.
Cache the result of expressions computed at run-time. We could compute a million different statistics about the Declaration of Independence: the number of times the letter pairs ou, ha, or ty occur; the list of words in alphabetical order that score less than 12 points in Scrabble; the complete Markov chain transition matrix of two-letter sequences in the Declaration; the MD5 and SHA1 hashes of the full text; the ROT13 translation of all the words with at least 10 letters… and so on. We can’t possibly compute them all in advance at compile-time, so it’s not worth doing. But if we do happen to compute some summary statistic once, we can save the results for later, because the Declaration of Independence will never change. We can’t do that with the stock portfolio though.
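Caching works here precisely because the input can never change. A sketch in Python (the text snippet and the count_pair statistic are made up for illustration), using functools.lru_cache to memoize a statistic computed over an immutable string:

```python
from functools import lru_cache

# Hypothetical immutable text (a stand-in for the full Declaration):
DECLARATION = "When in the Course of human events"

@lru_cache(maxsize=None)
def count_pair(text, pair):
    # Safe to cache: both arguments are immutable strings, so the
    # same inputs always yield the same answer, forever.
    return sum(1 for i in range(len(text) - 1) if text[i:i + 2] == pair)

n1 = count_pair(DECLARATION, "ou")  # computed once...
n2 = count_pair(DECLARATION, "ou")  # ...then served from the cache
```

Try this with a mutable input and the cache becomes a source of stale answers; with immutable data it is free performance.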
Store it in nonvolatile memory. In embedded microcontrollers, the amount of available program memory might be 5 or 10 times larger than the amount of available RAM. This means that RAM is more precious than program memory. We can store the text of the Declaration of Independence in program memory, leaving that precious RAM for something else. The stock portfolio is mutable, so we can’t store it in program memory.
Use it as a key in an associative array. Associative arrays are ubiquitous in software. They’re just sets of key-value pairs. Use the key to index a value. There are all sorts of data structures to do so efficiently. The only catch is that the keys have to be immutable (or at least comparable and/or hashable in an immutable way), otherwise we may not be able to find the value again.
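Python makes this rule concrete: dictionary keys must be hashable, which in practice means built from immutable parts. A small sketch (the ticker symbol, date, and price are invented):

```python
# A tuple of immutable parts is hashable, so it can serve as a key:
prices = {("AAPL", "2024-01-02"): 185.64}

# A list is mutable, hence unhashable, hence rejected as a key:
try:
    prices[["AAPL", "2024-01-02"]] = 185.64
    rejected = False
except TypeError:
    rejected = True
```

If mutable keys were allowed, changing a key after insertion would strand its value in the wrong hash bucket, never to be found again.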
Programming Language Support for Immutability
In some programming languages there are keywords that specify a degree of immutability. These are extremely useful, but you should be careful that you know exactly what these keywords mean if you do use them. C/C++ have
const and Java has
final, but these have different meanings, and neither one marks an object as truly immutable.
In C and C++, the
const keyword has several usages. When applied to a program variable or function argument, it means that the code given access to that data is not allowed to change it, within the appropriate scope:
int doofus123(const data_t *mydata, data_t *mydata2)
{
    const data_t *mydata3 = mydata2;
    ...
}
Here the variables
mydata and
mydata3 are marked as pointers to
const data. The code within
doofus123() is not allowed to change the data referenced through these pointers. But that doesn’t mean that this data is guaranteed not to change in other ways. The
doofus123() function could change the data through the pointer
mydata2, which points to the same location as
mydata3. And we can’t assume that
mydata and
mydata2 point to different locations (if they do refer to the same data, this is called pointer aliasing), so there’s no guarantee that the data referenced by
mydata won’t change.
In C++, if an object’s method is marked with
const, it means that method cannot modify any of the non-
mutable fields of the object, and that method can be called through a
const * to the object, whereas non-
const methods cannot be called via a
const *:
class Dingus
{
    int data;
public:
    void setData(int x);
    int getData() const;
};

Dingus dingus;
const Dingus *pdingus = &dingus;
int n = pdingus->getData();  // OK
pdingus->setData(33);        // illegal: pdingus is a const *
You should learn to use
const! It’s the key to writing software that promises not to change data. The compiler will produce an error to stop you from inadvertently violating this promise, at least if you don’t circumvent the compiler and cast away the
const qualifier. In an embedded system, it also allows you to mark initializer data in such a way that it can be stored in nonvolatile program memory rather than in generally scarcer volatile data memory:
const char author[] = "James Q McGillicuddy";
Neither the
const * concept in C or the
const & concept in C++ (data that the compiler will not allow you to change through
const pointers and references) has an equivalent in Java; the Java
final keyword applies only to the direct contents of a variable itself, and it just means that the Java compiler will not allow you to assign a
final field more than once.
public class Doofus {
    private int x;
    public void setX(int newX) { this.x = newX; }
}

public class Dingus {
    final private Doofus doofus;
    public Dingus() {
        this.doofus = new Doofus();
    }
    public void tweak() {
        this.doofus = new Doofus();  // illegal: doofus is final
        this.doofus.setX(23);        // OK, we can call any public method
    }
}
Neither language provides a way to mark data as truly immutable, however; we can only hide that data behind an API that prevents cooperating software from modifying the data.
The poster child in Java of immutability, or lack thereof, is the mutable
java.util.Date class. If you want to use
Date objects to record the date and time of your birthday or anniversary or upcoming colonoscopy, and you allow someone else access to those objects, you are giving them a blank check to change the fields of the
Date object. To be safe, you need to make a defensive copy. That’s extra work that would be unnecessary if
Date had immutable fields.
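The same trap exists in any language with mutable objects, and the fix is the same. A minimal Python sketch (this Portfolio class is invented for illustration): handing out the internal list gives callers write access to our state, while returning an immutable tuple closes the hole.

```python
class Portfolio:
    def __init__(self):
        self._symbols = ["AAPL", "MSFT"]   # private mutable state

    def symbols_unsafe(self):
        return self._symbols               # leaks a reference to our internals

    def symbols_safe(self):
        return tuple(self._symbols)        # defensive, immutable copy

p = Portfolio()
p.symbols_unsafe().append("ENRN")          # oops: the caller just mutated p
```

If the fields handed out were immutable to begin with, the defensive copy (and the bug it guards against) would disappear.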
In fact, there’s a whole school of thought that everything should be immutable; the functional programming languages strongly encourage the use of immutable objects. When you want to change some object X, instead of altering its data, you create a whole new object X in its place. This sounds like a lot of extra work — but anytime you think that, go look at the advantages I cited earlier with immutable data.
An Example: The Immutable Toaster
The embedded guys who deal with plain old integer data types in C are probably looking at this and scratching their heads. Objects don’t enter the picture at all, and “new object” seems to allude to dynamic memory allocation, which a lot of us in the embedded world avoid like the plague. So here’s an example; let’s write a program that controls a toaster. It has an update function that’s called 10 times a second. Oh, and we need to modulate the heating element on and off, keeping it on at most half the time, not all the time, otherwise it will be too hot and have a tendency to burn the toast.
enum ToasterState { TS_INACTIVE, TS_TOASTING, TS_READY };

typedef struct TAG_ToasterInfo {
    ToasterState state;
    int16_t timer;
    bool heating_element_on;
} ToasterInfo;

void toaster_update(ToasterInfo *pti, bool buttonpressed, bool dooropen, int16_t dial)
{
    if (pti->state == TS_INACTIVE)
    {
        if (buttonpressed)
        {
            pti->state = TS_TOASTING;
            pti->timer = 0;
            pti->heating_element_on = true;
        }
    }
    if (pti->state == TS_TOASTING)
    {
        if (pti->timer >= dial)
        {
            pti->state = TS_READY;
            pti->heating_element_on = false;
        }
        else
        {
            ++pti->timer;
            if (!pti->heating_element_on)
                pti->heating_element_on = true;
            if (pti->heating_element_on)
                pti->heating_element_on = false;
        }
    }
    if (pti->state == TS_READY)
    {
        if (dooropen)
        {
            pti->state = TS_INACTIVE;
        }
    }
}
Very simple, you turn the dial to set the toasting time, press the button, the toaster turns on and starts a timer, toggling the heating element on and off, and when the timer exceeds the dial setting then the toaster turns off and waits for you to open the door before it will start again. Not much to it!
Can you find the bugs here? They’re subtle.
One problem is that we read and write
pti->state all over the place in this function, and we muddle its new and old value. By “muddle”, I mean that we update
pti->state at one point in the function, and then we read its value later, when we probably meant to refer to its original value at the start of the function. In this case, the effect is not that critical; it just means the toaster finishes what it’s doing a split second earlier, depending on the order of statements within the function; if we rearrange them, we might get state transitions that occur one iteration later.
The bigger problem is in the heating element toggle code:
if (!pti->heating_element_on)
    pti->heating_element_on = true;
if (pti->heating_element_on)
    pti->heating_element_on = false;
This turns
heating_element_on to
true, but then immediately afterward, turns it to
false. Again, we are muddling its new and old value.
You might think this is an obvious bug, and no sane person would write code that works that way. Why not just toggle
heating_element_on:
pti->heating_element_on = !pti->heating_element_on;
Well, we could, in this case, and it would be correct. But that’s only because this is a really easy kind of behavior to design. What if we needed to do something more complicated, like run the heating element at 60% duty cycle, or if we needed to control the heating element based on a thermostat?
The problem with changing state variables in place is that we have plenty of opportunities to write programs with errors, because the same name in a program now refers to different values, which are sensitive to the order in which we do things. Let’s say there’s some other logic in the program that needs to do something at the precise moment when the heating element is on and something else occurs. If this other logic precedes the toggle statement, then
pti->heating_element_on refers to the previous heating element state, whereas if the other logic follows the toggle statement, then
pti->heating_element_on refers to the next heating element state. And that’s a simple case, because there’s only one place in which the
heating_element_on member is modified. In general it’s not that simple; there may be multiple places in the code where state can be modified, and if it occurs in
if statements, then sometimes the state is modified and sometimes it is not. When we see
pti->state or
pti->heating_element_on, we can’t tell if these refer to the value of state variables as they were at the beginning of our update function, or a changed value at the end, or some intermediate transient value in the middle of calculations.
(Things get even uglier if there are complex data structures, like a linked list, where a pointer itself is modified:
plist->element.heating_element_on = true;   // Line A
plist = plist->next;
plist->element.heating_element_on = false;  // Line B
Here we have two lines, A and B, which both change mutable state, and the lines look the same, but they refer to two separate pieces of data.)
A better design for our toaster software uses separate data for input and output state:
void toaster_update(const ToasterInfo *pti, ToasterInfo *ptinext,
                    bool buttonpressed, bool dooropen, int16_t dial)
{
    /* default: do what we were doing */
    ptinext->state = pti->state;
    ptinext->timer = pti->timer;
    /* for safety: default the heating_element_on to false */
    ptinext->heating_element_on = false;

    if (pti->state == TS_INACTIVE)
    {
        if (buttonpressed)
        {
            ptinext->state = TS_TOASTING;
            ptinext->timer = 0;
            ptinext->heating_element_on = true;
        }
    }
    if (pti->state == TS_TOASTING)
    {
        if (pti->timer >= dial)
        {
            ptinext->state = TS_READY;
            ptinext->heating_element_on = false;
        }
        else
        {
            ptinext->timer = pti->timer + 1;
            if (!pti->heating_element_on)
                ptinext->heating_element_on = true;
            if (pti->heating_element_on)
                ptinext->heating_element_on = false;
        }
    }
    if (pti->state == TS_READY)
    {
        if (dooropen)
        {
            ptinext->state = TS_INACTIVE;
        }
    }
}
To use this version of the update function properly, we need to avoid pointer aliasing, so that inside the function, we have complete freedom to change the contents of the next state, while still being able to assume that the contents of the existing state stays unchanged. So we can’t do this:
ToasterInfo tinfo;
tinfo.state = TS_INACTIVE;
while (true)
{
    /* get inputs here */
    toaster_update(&tinfo, &tinfo, buttonpressed, dooropen, dial);
    /* set output here based on tinfo.heating_element_on */
    wait_100_msec();
}
But we could call the update function using a pair of
ToasterInfo states:
ToasterInfo tinfo[2];
ToasterInfo *ptiprev = &tinfo[0];
ToasterInfo *pti = &tinfo[1];
ptiprev->state = TS_INACTIVE;
while (true)
{
    /* get inputs here */
    toaster_update(ptiprev, pti, buttonpressed, dooropen, dial);
    /* set output here based on pti->heating_element_on */

    /* swap pointers to state variables */
    ToasterInfo *ptmp = pti;
    pti = ptiprev;
    ptiprev = ptmp;
    wait_100_msec();
}
Now when we are looking at the
toaster_update() function, we can clearly distinguish the old value of the toaster state from the new value of the toaster state. And because we used
const, we can have the compiler help catch our errors if we try to assign to any of the
pti members instead of
ptinext. Moreover, if we have the program stopped in a debugger, we can see both the previous state and the new state in their entirety, without having to do any clever sleuthing to infer this information.
This creates a little bit of extra work, but one of the key lessons of software design is that you have to make tradeoffs, and often you end up giving up a little bit of performance efficiency to gain improvements in code clarity, modularity, or maintenance.
This kind of approach (e.g.
new_state = f(old_state, other_data)), where we decouple the mutable state update from the computation of new state, is something I call “pseudo-immutability”. It’s not purely immutable, since we are storing a mutable state variable somewhere, but the data looks immutable from the standpoint of computing the new state, and the software design tends to be cleaner as a result.
A Tour of Immutable Data
One key idea when considering immutability is the value type. Values are just ways of interpreting digital bits, and the important point is what they are, not where they are stored.
Let’s say you have a software program that works with a complex number e.g. 4.02 + 7.163j; this is just a way of interpreting two 64-bit IEEE-754 floating-point values, one for the real part, and one for the imaginary part. This number exists independently of where it is stored. In fact, if we want to be purely functional, there is no storage: there are only inputs and outputs.
When we do have state containing a complex number, it happens to be just a transient place to store the 128 bits. The complex numbers themselves are nomads (“Hey, I’ll bet you didn’t realize you’ve been using monads all along!” “No, I said nomads, not monads, you creep!”) traveling from place to place, stopping inside a state variable only for a microsecond. One problem with the usual approach of programming with mutable state is that this idea of transiency disappears. Instead, we get focused on some variable
z containing a complex number, that lasts for a long time, and we poke and prod at it, and the name
z might refer to an erratically changing series of values.
The alternative to a value type is a reference type, where a critical aspect of the data is where it is stored. If I have persistent mutable state in a software program, like a color palette for a window manager, the data lives somewhere, and unless I want to deal with potentially out-of-date copies, when I work with that color palette, I pass around a pointer to that data. The value of the color palette is a particular fixed choice of colors. But the window manager’s reference to the color palette is a unique container for those values; it lives in one place, and its contents can change. (In C, the term lvalue refers to a storage location, which, if it does not have a
const qualifier, can be used on the left side of an assignment statement; the term rvalue refers to an expression which can be used on the right side of an assignment statement.)
Microarchitecture: Low level immutable data
Perhaps when you program, you picture RAM as kind of like one of those plastic storage containers with lots of little compartments. Locations 0x3186-0x319c contain your name; locations 0x319d-0x31f2 contain your address, and so on. At a low level, this is because RAM retains state by default: as long as we keep it happy, some pattern of bits at a particular location will stay the way they are. With static RAM (SRAM), we just provide a well-conditioned source of power; with dynamic RAM (DRAM), we have to activate refresh circuitry, which used to be part of the overall system design in a computer, but is now an intrinsic part of the memory module itself. In either case, the RAM requires an explicit write step to change its contents. On the other hand, if you look at the registers and data paths inside a processor, they are very functional in nature. Here’s the block diagram of the Microchip PIC16F84A microcontroller, from the datasheet:
The 16F84A has a register file for its data memory. The difference between a register file and SRAM is kind of a gray area that depends on the architecture and implementation. The classical definition of a register file is a small group of registers (one for each address in the register file) with separate input and output data ports; we can think of a register as a tandem group of D flip-flops, which are clocked data storage devices: during a particular clock cycle, the output of a flip-flop is constant, but at the rising edge of the clock signal, the D flip-flop’s input is captured by the flip-flop and propagated to the output during the next cycle. Memory in a D flip-flop is transient and only lasts for 1 clock cycle. If you want the data to persist in value, you have to feed the output back to the input. (Those of you who are more familiar with DSP than digital design can think of a register or a D flip-flop as a unit delay z^-1.)
So a register file can be thought of as a bunch of D flip-flops and multiplexers: If you are writing new information to the register file to a particular location, the multiplexer for that register gets its data from the register file’s input port, whereas if you’re not writing to the register file, the multiplexer maintains each register’s value by getting its data from the previous value of the register.
The special-purpose registers of the 16F84A, like the program counter or the W register, are similar, but they have manually-designed data paths:
The W register is an 8-bit register which is always a function of its previous value and some other data. The exact function used depends on the arithmetic-logic unit (ALU). And this pattern is essentially the same as the “pseudo-immutable” approach I have been talking about:
new_state = f(old_state, some_other_data);
This keeps new and old state conceptually separate, as opposed to the same old mutable approach that causes us headaches:
// "state" refers to old values if (some_condition) { state.some_member = state.some_member + something_else; } // now "state" refers to new values if (other_condition) { state.other_member = state.some_member - another_thing; } // now "state" has changed again
So it’s not just the Functional Wizards who are using immutable data; at a low level, many microprocessors and microcontrollers utilize the same approach of separating input and output data.
Immutable Data Structures
Back to the high level of programming: there’s a whole subject of immutable or persistent data structures. The idea is that you don’t change the contents of data; instead, you create a new set of data.
This may seem wasteful and impractical, and in some cases it is. For example, if you have an array X of 1 million integers, and you want to change the 30257th element of X, it takes a lot of work just to create a new array with a new value for element #30257 and copies of all the other elements of X. Then if you don’t need the old array any more, you throw it away. Wasteful… except that general-purpose computers are fast and have lots of memory, and the high-level languages recycle unused memory through garbage collection. In Python, the numpy library uses optimized C code underneath to manipulate arrays, and although mutable data access is supported, there are many numpy methods that are functional in nature and create new copies of data rather than changing state in-place. The advantages usually outweigh the costs; we can design programs with fewer errors, and by separating input and output we can optimize the computation, by using parallel computations or a GPU to speed up functional programming, whereas mutating state prevents many of these optimizations.
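Core Python shows the same split between the two styles: list.sort() mutates its receiver in place, while the built-in sorted() leaves its input untouched and returns a brand-new list.

```python
a = [3, 1, 2]

b = sorted(a)                    # functional style: builds and returns a new list
a_untouched = (a == [3, 1, 2])   # True: a was not modified by sorted()

a.sort()                         # mutating style: reorders a in place, returns None
```

The functional form costs an extra allocation, but every other reference to the original list remains valid and unchanged.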
But arrays are only one type of data structure.
For most of the usual mutable data structures, there are immutable equivalents. The classic example is the list. A mutable list might rely on a linked list implementation. Immutable lists also use linked lists, but they have that nice property, like any other immutable data structure, that the nodes of an immutable list won’t change. Here’s an example in Java:
public class ConsList<T> {
    private final T car;
    private final ConsList<T> cdr;
    public ConsList(T car, ConsList<T> cdr) {
        this.car = car;
        this.cdr = cdr;
    }
    public T head() { return this.car; }
    public ConsList<T> tail() { return this.cdr; }
}
Really simple: a list consists of one item (the
head() of the list) and a reference to the rest of the list (the
tail()). The names
cons and
car and
cdr are weird names that date back to old LISP implementations, and perhaps it would have been better to get rid of
cons, and use
first and
rest:
public class ImmutableList<T> {
    private final T first;
    private final ImmutableList<T> rest;
    public ImmutableList(T first, ImmutableList<T> rest) {
        this.first = first;
        this.rest = rest;
    }
    public T head() { return this.first; }
    public ImmutableList<T> tail() { return this.rest; }
}
If we wanted to store the list (1,2,3,4) we would do it this way:
ImmutableList<Integer> list1 =
    new ImmutableList<Integer>(1,
        new ImmutableList<Integer>(2,
            new ImmutableList<Integer>(3,
                new ImmutableList<Integer>(4, null))));
We can visualize the list nodes as follows. Each contains a pair of references, one to the list head and the second to the remaining elements, with the end of the list denoted by a link to
null:
Insertion at the beginning is easy:
ImmutableList<Integer> list2 = new ImmutableList<Integer>(5, list1); // this is the list (5,1,2,3,4)
Decapitation (removal of the head) is also easy:
ImmutableList<Integer> list3 = list1.tail(); // this is the list (2,3,4)
While these three lists are conceptually separate, they reuse many list nodes.
In fact, the reason these are called persistent data structures, is because the immutability of the data means that you can keep around references to older versions, and they will be unaffected by the changes to the data structure, since really there are no changes, only new structures made of nodes which may be newly allocated or may be shared with other immutable data.
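The sharing is easy to demonstrate in Python using two-element tuples as immutable cons cells (a sketch, not the Java code above): identity checks confirm that the three lists physically share nodes rather than copying them.

```python
# cons cell = (head, rest); None marks the empty list
list1 = (1, (2, (3, (4, None))))   # the list (1,2,3,4)

list2 = (5, list1)                  # insert at front: (5,1,2,3,4)
list3 = list1[1]                    # decapitate:      (2,3,4)

# No nodes were copied: the new lists share structure with list1.
shares_tail = list2[1] is list1
shares_rest = list3 is list1[1]
```

Because tuples are immutable, nothing anyone does through list2 or list3 can disturb list1; that is exactly what makes the sharing safe.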
Other operations (appending, or insertion/removal of interior nodes) are possible but require extra list node copies and take longer to finish. In Java, this type of list is a bit verbose to manipulate; in functional languages like LISP or Haskell, list node concatenation is a primitive operation that is much shorter to write in code.
Other data structures like queues, maps, trees, are also possible to implement using immutable techniques. There has been a great deal of research on efficient implementations of these structures, including so-called “amortized algorithms” for reducing the worst-case execution time for things like binary tree rebalancing, by spreading the occasional long operation into many short operations that extend into the future. It’s just like a home mortgage: instead of having to come up with $399,000 in one fell swoop, we go to the bank and get a 30-year 4.5% mortgage with a monthly payment of $2001. We can keep our worst-case costs per operation low by spreading them out over time.
The catch with this whole technique — whether you use complex amortized algorithms, or a simple list with cons cells — is that you need to use dynamic memory allocation, and you need to have some way of recycling unused data, namely reference counting or garbage collection. And that means that while it works great for high-level languages, in low-level languages like C or even C++, immutable data structures are difficult to manage. In C++ there are possibilities (C++11 introduced
shared_ptr<> to facilitate automatic memory management; before C++11, the Boost libraries provided their own shared pointers); you can read about some of them in these articles.
So immutable data structures are probably out of the running in the low-level embedded world, at least until someone invents a technique for automatic memory management that also satisfies the hard real-time, determinism, and/or memory safety requirements of many applications.
Loops?
We’ve talked about the pseudo-immutable approach of isolating updates of mutable state from the computation of new state:
new_state = f(old_state, some_other_data);
There are other ways of enhancing immutability in software design. One of them involves control flow. Take, for example, the simple for-loop in C:
for (int i = 0; i < L; ++i)
{
    do_something(i);
}
Here we have a mutable loop variable
i. C++ uses a similar approach with STL iterators to facilitate iteration over a generic container, by overloading the increment operator
++ and the pointer dereferencing operator
*:
std::vector<Something> mydata = ...;
for (std::vector<Something>::iterator it = mydata.begin();
     it != mydata.end();
     ++it)
{
    do_something(*it);
}
Again, a mutable loop variable. And there’s nothing really wrong with it, but if you get a warm happy feeling from functional-style programming, you will not get a warm happy feeling from these programming snippets. The STL use of immutable iterators is a little clumsy (
::const_iterator vs
::iterator), and there’s always a risk with more complex software that you might accidentally put another
++it somewhere by mistake, and unintentionally increment the loop twice when you really meant just to do it once. The compiler won’t catch your mistake, as it allows you to modify the loop variable in the body of the loop, and it has no way of figuring out what you really wanted.
There are two approaches for removing mutable loop variables. One is recursion. LISP and other functional languages encourage the use of recursion rather than looping. Here’s an example in Java that uses the
ImmutableList<T> type I described earlier:
/* Find the sum of a list's elements.
 * (Let's forget about overflow.)
 */
int sum(ImmutableList<Integer> list) {
    if (list == null) {
        return 0;
    } else {
        return list.head() + sum(list.tail());
    }
}
Simple! To find the sum of the elements of a list, it’s either 0 for an empty list, or it’s the head of the list plus the sum of the remaining elements. No loop.
So what’s the catch? It’s this line:
return list.head() + sum(list.tail());
We have to use recursion, which means that if a list has N elements, we need to use up N stack frames. This imposes a memory cost for the stack, and means that if we are debugging and put a breakpoint somewhere for a 1000-element list, there might be 1000 stack frames for us to scroll through in the debugger. Yuck. But that’s not fair, because we know we can solve the same problem with an iterative approach that only uses 1 stack frame:
int sum(ImmutableList<Integer> list) {
    int result = 0;
    ImmutableList<Integer> p = list;
    while (p != null) {
        result += p.head();
        p = p.tail();
    }
    return result;
}
The functional languages have a feature called tail recursion in which the compiler or interpreter is smart enough to recognize recursion that doesn’t require additional stack frames; instead of pushing the return address on the stack and then calling the function again, it sets things up to jump back to the beginning of the function, effectively forming an iterative implementation of a recursive solution. Sometimes the compiler or interpreter is smart enough to do everything on its own; other times it has to be helped by writing code that is amenable to tail recursion. Our example isn’t:
return list.head() + sum(list.tail());
The problem is that we have to first make a recursive call to
sum(), then add the head of the list. Adding the head of the list is a deferred action that blocks us from performing tail recursion, because there’s still something to do after the recursion completes. Here’s a modification:
/* Find the sum of a list's elements.
 * (Let's forget about overflow.) */
int sum(ImmutableList<Integer> list)
{
    return sum_helper(list, 0);
}

int sum_helper(ImmutableList<Integer> list, int partial_sum)
{
    if (list == null) {
        return partial_sum;
    } else {
        return sum_helper(list.tail(), partial_sum + list.head());
    }
}
In this case, we can handle the addition step before we recurse. The recursion is the last thing, therefore we can theoretically just jump back to the beginning of sum_helper() and not have to waste unnecessary stack frames. But it requires the language implementation to perform tail call optimization, and neither C nor Java appear to guarantee that they will do so, although presumably some C compilers will and maybe Java will in a future version.
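CPython is in the same boat — it does not perform tail-call optimization either — but the accumulator shape translates directly. In this sketch a plain Python list stands in for ImmutableList:

```python
def sum_helper(lst, partial_sum=0):
    # lst[0] plays the role of head(), lst[1:] the role of tail().
    # The addition happens *before* the recursive call, so the call
    # is in tail position -- though CPython still burns one stack
    # frame per element, since it never optimizes tail calls.
    if not lst:
        return partial_sum
    return sum_helper(lst[1:], partial_sum + lst[0])
```

So in Python, as in Java, the accumulator version is tail-recursive in form but not in cost; for a long enough list it will still hit the recursion limit.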
The approach I like better is the one used in Python, where instead of recursion, we have a for-loop construct where the iteration is managed within the language:
for item in collection:
    do_something_with(item)
Here we have to implement collection in a special way so that the language understands how to iterate over its items. (In Python it’s the iterator protocol.) The language will take care of executing the loop body, with the item variable bound to each item of the collection in sequence. If Python supported immutable data, we could make the loop variable item immutable: even though it takes on different items each time around the loop, within each loop iteration it is intended to be constant.
Of course, in our sum example we would still need to use a mutable state variable:
mysum = 0
for item in collection:
    mysum += item
The purely functional alternatives are to use tail recursion, or higher order functions like reduce() or fold() to combine the elements of a collection. We just have to come up with a user-defined function that specifies how each element is supposed to be combined with the result.
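As a sketch of the fold/reduce alternative just mentioned, here is the sum written with Python's functools.reduce — the combining function is the user-defined part:

```python
from functools import reduce


def sum_reduce(collection):
    # reduce folds the elements left to right: the lambda says how
    # to combine the running result with each new item, and 0 is
    # the starting value for an empty collection.
    return reduce(lambda acc, item: acc + item, collection, 0)
```

The accumulation state still exists, of course, but it lives inside reduce rather than in a mutable variable of ours.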
Does it matter? Well, yes and no. The mutable approach in this case (mysum += item) is really simple, and it’s very clear to the reader what is going on, so there’s no strong reason to use functional techniques. But in more complicated cases, the functional approaches can lead to a more organized approach that helps ensure correctness. Let’s say we wanted to sum the square of all values which are divisible by 3:
mysum = 0
for item in collection:
    if item % 3 == 0:
        mysum += item*item
That’s not so bad either… but it’s starting to get more complicated. Functional approaches create a pipeline:
mysum = sum(square_map(div3_filter(collection)))
where we’d have to define functions square_map, which yields a new list with squared values, and div3_filter, which yields a new list of only the items which are divisible by 3. We can either write them manually, or use the higher-order functions map() and filter(). In Python we can do this more concisely with list comprehensions or generator expressions:
sum(item*item for item in collection if item % 3 == 0)
If you’re used to comprehensions, this is very clear and concise. If you’re not, it seems weird, and the original combination of for and if statements would be better. They’re essentially equivalent. You say po-TAY-to, I say po-TAH-to. The difference is that in the functional approach, I don’t have to keep track of the intermediate state involved in the accumulation of sums. And if that doesn’t matter, then pick whichever approach you and your colleagues like best. But sometimes, handling all the housekeeping of intermediate state can be rather cumbersome.
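For comparison, a sketch of the same pipeline written with the built-in map() and filter() — the names square_map and div3_filter from the text become inline lambdas here:

```python
def sum_squares_div3(collection):
    # filter keeps only the multiples of 3; map squares them;
    # sum combines the results -- no intermediate state variable
    # for us to maintain.
    div3 = filter(lambda item: item % 3 == 0, collection)
    squares = map(lambda item: item * item, div3)
    return sum(squares)
```

This computes exactly the same value as the generator-expression version; which spelling is clearer is largely a matter of taste.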
Perhaps the simplest example of how intermediate state makes programming more complex is the swap operation. In Python we can use destructuring assignment:
a,b = b,a
Simple and clear! If we could only assign to one value at a time, we’d be stuck with using a temporary variable:
tmp = a
a = b
b = tmp
That’s fairly simple as well, but then I have to stop and think if I did that right, and Doubt enters my mind, and I think that maybe I should have used this logic instead:
tmp = a
b = tmp
a = b
Hmm. (In case you’re wondering, this second attempt is wrong; the previous code snippet is the correct one.)
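If you want to convince yourself without the Doubt, the two correct forms are easy to check side by side — a small sketch:

```python
def swap_destructuring(a, b):
    # The right-hand tuple (b, a) is built before either name is
    # rebound, so no temporary variable is needed.
    a, b = b, a
    return a, b


def swap_with_temp(a, b):
    # The temporary must save `a` before it is overwritten.
    tmp = a
    a = b
    b = tmp
    return a, b
```

Both return the swapped pair; the broken version (which overwrites `a` before saving it) would return two copies of the same value instead.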
In any case, there’s a theme here. To help yourself avoid errors, keep things clear and simple; removing unnecessary mutable state is often a way to do that.
Shared Memory?
Another reason for considering immutability involves shared memory. With multiple threads or processes, shared memory is a common technique for using state with concurrent processing. But it imposes a cost, namely of using locks or mutexes to ensure exclusive access.
The immutable way of handling shared state is… well, it’s to avoid sharing state at all. No shared state means no locks. And the classic way to get there is to use message passing. Erlang is probably the prime example of a language with built-in message-passing support, and nearly all GUI frameworks use message queues.
For example, if I had three threads in a software application that needed access to a database, rather than using a lock on the database so only one thread could access it at a time, I could allow none of the threads to have direct access to the database, and instead require these threads to interact with a database manager by sending messages to it. The result is that I never have to share mutable data between threads.
The cost of message-passing is maintaining a message queue, but there are some fairly simple algorithms (such as the non-blocking one described by Michael and Scott) which guarantee proper concurrent access to a queue. Also, if we’re going to use message-passing to overcome the need for shared data, we must use value types: we have to send the data directly. We can’t send pointers to data, since that would require the data to be directly addressable by more than one thread or process, and that would require shared data. Any type of reference that can be sent via message passing (like an array index or a lookup key) must be meaningful to both the sending and receiving entity.
Here’s another example: let’s say we had 10 worker threads, and wanted to use each of them to analyze some fragment of data, and boil the analysis down to a number and add it in to a cumulative total. The shared memory approach looks something like this for each worker thread:
partial_sum = compute_partial_sum(data)
try:
    lock(shared_sum_lock)
    shared_sum += partial_sum
finally:
    unlock(shared_sum_lock)
The message passing approach looks like this:
# Worker thread:
partial_sum = compute_partial_sum(data)
send_message(main_thread, 'partial_sum', my_thread_id, partial_sum)

# Main thread
sum = 0
nthreads = 10
while nthreads > 0:
    (message_tag, thread_id, contents) = message_queue.get(block=True)
    if message_tag == 'partial_sum':
        nthreads -= 1
        sum += contents
With the shared memory approach, we rely on each of the worker threads to operate correctly. If any of them hangs or crashes before unlocking the shared resource, the whole system can deadlock. With the message-passing approach, we don’t have to worry about this. The only way the whole sum can screw up is if the main thread hangs or crashes; otherwise, the worst outcome is that we might not hear from one of the worker threads, and we can deal with this by rewriting the main thread to time out, and either give up or restart a new worker thread on the missing data.
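The message-passing version above can be made runnable with Python's standard queue and threading modules. This is a sketch: compute_partial_sum() is replaced with a trivial stand-in (sum), and the chunk data is made up for the example.

```python
import queue
import threading


def worker(q, thread_id, data):
    partial_sum = sum(data)  # stand-in for compute_partial_sum(data)
    q.put(('partial_sum', thread_id, partial_sum))


def total_via_messages(chunks):
    q = queue.Queue()  # thread-safe message queue: no explicit locks here
    for i, chunk in enumerate(chunks):
        threading.Thread(target=worker, args=(q, i, chunk)).start()
    total = 0
    remaining = len(chunks)
    while remaining > 0:
        tag, thread_id, contents = q.get(block=True)
        if tag == 'partial_sum':
            remaining -= 1
            total += contents
    return total
```

Only the main thread ever touches the running total; the workers communicate by value through the queue, so there is no shared mutable state to protect.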
Pure functional approaches for managing state
There’s no perfect answer for when to use mutable data and when to use immutable data. The extreme case of eliminating all mutable data wherever possible is used in some functional programming environments. In Haskell, for example, you have to go out of your way to use mutable objects. And that means you’re probably going to have to talk to the guy in the corner about monads. Or talk to one of the other guys, who are less intimidating, about them. There are a lot of good reasons why they are useful; it just takes quite a bit of mental energy to figure out what’s really going on.
return :: a -> Maybe a
return x = Just x

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
m >>= g = case m of
    Nothing -> Nothing
    Just x  -> g x
When you understand them well enough that they’re intuitive for you to use, let me know what you learned.
Wrap-up
Here’s a summary of what we talked about today:
- Using immutable data simplifies software design, removing problems for shared access and concurrency, and removing the need for defensive copies.
- Some programming languages include support for some aspects of immutability, like const in C/C++ and final in Java, but these generally don’t provide full immutability.
- Pseudo-immutable techniques are a good real-world compromise to reduce the chances of human error, by decoupling the state update from the computation of its new value, rather than changing mutable state in place:
new_state = f(old_state, other_data)
- The registers in low-level microarchitectures use the pseudo-immutable approach, with flip-flops forming the propagation from old state to new state.
- Immutable (aka “persistent”) data structures provide versions of lists, maps, and queues that retain the advantages of immutability.
- Functional techniques of recursion and iteration are alternatives to looping with mutable state variables.
- Message-passing leverages immutability to eliminate the concurrency problems of shared memory.
- You can use monads if you want to turn to pure functional techniques. (“Hey, I’ll bet you didn’t realize you’ve been using monads all along!”) Remember, I said you can use monads. I probably won’t, at least not intentionally.
Whew! That was longer than I had intended. The next article is on volatility and is much shorter.
The “sites” framework¶
Django comes with an optional “sites” framework. It’s a hook for associating objects and functionality to particular websites, and it’s a holding place for the domain names and “verbose” names of your Django-powered sites.
Use it if your single Django installation powers more than one site and you need to differentiate between those sites in some way.
The sites framework is mainly based on a simple model:
class models.Site¶

A model for storing the domain and name attributes of a website.

The SITE_ID setting specifies the database ID of the Site object associated with that particular settings file. If the setting is omitted, the get_current_site() function will try to get the current site by comparing the domain with the host name from the request.get_host() method.
How you use this is up to you, but Django uses it in a couple of ways automatically via simple conventions.
Example usage¶
Why would you use sites? It’s best explained through examples.
Associating content with multiple sites¶
The Django-powered sites LJWorld.com and Lawrence.com are operated by the same news organization – the Lawrence Journal-World newspaper in Lawrence, Kansas. LJWorld.com focuses on news, while Lawrence.com focuses on local entertainment. But sometimes editors want to publish an article on both sites.
The naive way of solving the problem would be to require site producers to publish the same story twice: once for LJWorld.com and again for Lawrence.com. But that’s inefficient for site producers, and it’s redundant to store multiple copies of the same story in the database.
The better solution is simple: Both sites use the same article database, and an article is associated with one or more sites. In Django model terminology, that’s represented by a ManyToManyField in the Article model:
from django.contrib.sites.models import Site
from django.db import models

class Article(models.Model):
    headline = models.CharField(max_length=200)
    # ...
    sites = models.ManyToManyField(Site)
This accomplishes several things quite nicely:
It lets the site producers edit all content – on both sites – in a single interface (the Django admin).
It means the same story doesn’t have to be published twice in the database; it only has a single record in the database.
It lets the site developers use the same Django view code for both sites. The view code that displays a given story just checks to make sure the requested story is on the current site. It looks something like this:
from django.contrib.sites.shortcuts import get_current_site

def article_detail(request, article_id):
    try:
        a = Article.objects.get(id=article_id, sites__id=get_current_site(request).id)
    except Article.DoesNotExist:
        raise Http404("Article does not exist on this site")
    # ...
Associating content with a single site¶
Similarly, you can associate a model to the Site model in a many-to-one relationship, using ForeignKey.
For example, if an article is only allowed on a single site, you’d use a model like this:
from django.contrib.sites.models import Site
from django.db import models

class Article(models.Model):
    headline = models.CharField(max_length=200)
    # ...
    site = models.ForeignKey(Site, on_delete=models.CASCADE)
This has the same benefits as described in the last section.
Hooking into the current site from views¶
You can use the sites framework in your Django views to do particular things based on the site in which the view is being called. For example:
from django.conf import settings

def my_view(request):
    if settings.SITE_ID == 3:
        # Do something.
        pass
    else:
        # Do something else.
        pass
Of course, it’s ugly to hard-code the site IDs like that. This sort of hard-coding is best for hackish fixes that you need done quickly. The cleaner way of accomplishing the same thing is to check the current site’s domain:
from django.contrib.sites.shortcuts import get_current_site

def my_view(request):
    current_site = get_current_site(request)
    if current_site.domain == 'foo.com':
        # Do something
        pass
    else:
        # Do something else.
        pass
This also has the advantage of checking whether the sites framework is installed, and returning a RequestSite instance if it is not.
If you don’t have access to the request object, you can use the get_current() method of the Site model’s manager. You should then ensure that your settings file does contain the SITE_ID setting. This example is equivalent to the previous one:
from django.contrib.sites.models import Site

def my_function_without_request():
    current_site = Site.objects.get_current()
    if current_site.domain == 'foo.com':
        # Do something
        pass
    else:
        # Do something else.
        pass
Getting the current domain for display¶
LJWorld.com and Lawrence.com both have email alert functionality, which lets readers sign up to get notifications when news happens. It’s pretty basic: A reader signs up on a Web form and immediately gets an email saying, “Thanks for your subscription.”
It’d be inefficient and redundant to implement this sign up processing code twice, so the sites use the same code behind the scenes. But the “thank you for signing up” notice needs to be different for each site. By using Site objects, we can abstract the “thank you” notice to use the values of the current site’s name and domain.
Here’s an example of what the form-handling view looks like:
from django.contrib.sites.shortcuts import get_current_site
from django.core.mail import send_mail

def register_for_newsletter(request):
    # Check form values, etc., and subscribe the user.
    # ...
    current_site = get_current_site(request)
    send_mail(
        'Thanks for subscribing to %s alerts' % current_site.name,
        'Thanks for your subscription. We appreciate it.\n\n-The %s team.' % (
            current_site.name,
        ),
        'editor@%s' % current_site.domain,
        [user.email],
    )
    # ...
On Lawrence.com, this email has the subject line “Thanks for subscribing to lawrence.com alerts.” On LJWorld.com, the email has the subject “Thanks for subscribing to LJWorld.com alerts.” Same goes for the email’s message body.
Note that an even more flexible (but more heavyweight) way of doing this would be to use Django’s template system. Assuming Lawrence.com and LJWorld.com have different template directories (DIRS), you could simply farm out to the template system like so:
from django.core.mail import send_mail
from django.template import loader

def register_for_newsletter(request):
    # Check form values, etc., and subscribe the user.
    # ...
    subject = loader.get_template('alerts/subject.txt').render({})
    message = loader.get_template('alerts/message.txt').render({})
    send_mail(subject, message, 'editor@ljworld.com', [user.email])
    # ...
In this case, you’d have to create subject.txt and message.txt template files for both the LJWorld.com and Lawrence.com template directories. That gives you more flexibility, but it’s also more complex.
It’s a good idea to exploit the Site objects as much as possible, to remove unneeded complexity and redundancy.
Getting the current domain for full URLs¶
Django’s get_absolute_url() convention is nice for getting your objects’ URL without the domain name, but in some cases you might want to display the full URL – with http:// and the domain and everything – for an object.
>>> from django.contrib.sites.models import Site
>>> obj = MyModel.objects.get(id=3)
>>> obj.get_absolute_url()
'/mymodel/objects/3/'
>>> Site.objects.get_current().domain
'example.com'
>>> 'https://%s%s' % (Site.objects.get_current().domain, obj.get_absolute_url())
'https://example.com/mymodel/objects/3/'
Enabling the sites framework¶
To enable the sites framework, follow these steps:
1. Add 'django.contrib.sites' to your INSTALLED_APPS setting.
2. Define a SITE_ID setting:

       SITE_ID = 1
django.contrib.sites registers a post_migrate signal handler which creates a default site named example.com with the domain example.com. This site will also be created after Django creates the test database. To set the correct name and domain for your project, you can use a data migration.
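Such a data migration could look roughly like this — a sketch, where the migration name in dependencies and the domain and name values are placeholders for your own project:

```python
from django.db import migrations


def set_default_site(apps, schema_editor):
    # Use the historical model via apps.get_model(), not a direct
    # import of Site, as recommended for data migrations.
    Site = apps.get_model('sites', 'Site')
    Site.objects.update_or_create(
        id=1,  # should match your SITE_ID setting
        defaults={'domain': 'www.example.com', 'name': 'My Project'},
    )


class Migration(migrations.Migration):

    dependencies = [
        ('sites', '0002_alter_domain_unique'),  # placeholder: your latest sites migration
    ]

    operations = [
        migrations.RunPython(set_default_site, migrations.RunPython.noop),
    ]
```

The reverse operation is a no-op here; you could instead restore the example.com defaults if you want the migration to be fully reversible.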
In order to serve different sites in production, you’d create a separate settings file with each SITE_ID (perhaps importing from a common settings file to avoid duplicating shared settings) and then specify the appropriate DJANGO_SETTINGS_MODULE for each site.
Caching the current Site object¶
As the current site is stored in the database, each call to Site.objects.get_current() could result in a database query. But Django is a little cleverer than that: on the first request, the current site is cached, and any subsequent call returns the cached data instead of hitting the database.
If for any reason you want to force a database query, you can tell Django to clear the cache using Site.objects.clear_cache():
# First call; current site fetched from database.
current_site = Site.objects.get_current()
# ...
# Second call; current site fetched from cache.
current_site = Site.objects.get_current()
# ...
# Force a database query for the third call.
Site.objects.clear_cache()
current_site = Site.objects.get_current()
The CurrentSiteManager¶
If Site plays a key role in your application, consider using the helpful CurrentSiteManager in your model(s). It’s a model manager that automatically filters its queries to include only objects associated with the current Site.
The CurrentSiteManager is only usable when the SITE_ID setting is defined in your settings.
Use CurrentSiteManager by adding it to your model explicitly. For example:

from django.contrib.sites.models import Site
from django.contrib.sites.managers import CurrentSiteManager
from django.db import models

class Photo(models.Model):
    # ...
    site = models.ForeignKey(Site, on_delete=models.CASCADE)
    objects = models.Manager()
    on_site = CurrentSiteManager()
With this model, Photo.objects.all() will return all Photo objects in the database, but Photo.on_site.all() will return only the Photo objects associated with the current site, according to the SITE_ID setting.
Put another way, these two statements are equivalent:
Photo.objects.filter(site=settings.SITE_ID)
Photo.on_site.all()
How did CurrentSiteManager know which field of Photo was the Site? By default, it looks for a field called site to filter on; if your site field is named something else, you can pass the field name to CurrentSiteManager explicitly. The following model, which has a field called publish_on, demonstrates this:

from django.contrib.sites.models import Site
from django.contrib.sites.managers import CurrentSiteManager
from django.db import models

class Photo(models.Model):
    # ...
    publish_on = models.ForeignKey(Site, on_delete=models.CASCADE)
    objects = models.Manager()
    on_site = CurrentSiteManager('publish_on')
If you attempt to use CurrentSiteManager and pass a field name that doesn’t exist, Django will raise a ValueError.
Finally, note that you’ll probably want to keep a normal (non-site-specific) Manager on your model, even if you use CurrentSiteManager. As explained in the manager documentation, if you define a manager manually, then Django won’t create the automatic objects = models.Manager() manager for you. Also note that certain parts of Django – namely, the Django admin site and generic views – use whichever manager is defined first in the model, so if you want your admin site to have access to all objects (not just site-specific ones), put objects = models.Manager() in your model, before you define CurrentSiteManager.
Site middleware¶
If you often use this pattern:
from django.contrib.sites.models import Site

def my_view(request):
    site = Site.objects.get_current()
    ...
there is a simple way to avoid repetition. Add django.contrib.sites.middleware.CurrentSiteMiddleware to MIDDLEWARE. The middleware sets the site attribute on every request object, so you can use request.site to get the current site.
How Django uses the sites framework¶
Although it’s not required that you use the sites framework, it’s strongly encouraged, because Django takes advantage of it in a few places. Even if your Django installation is powering only a single site, you should take the two seconds to create the site object with your domain and name, and point to its ID in your SITE_ID setting.
Here’s how Django uses the sites framework:
- In the redirects framework, each redirect object is associated with a particular site. When Django searches for a redirect, it takes into account the current site.
- In the flatpages framework, each flatpage is associated with a particular site. When a flatpage is created, you specify its Site, and the FlatpageFallbackMiddleware checks the current site in retrieving flatpages to display.
- In the syndication framework, the templates for title and description automatically have access to a variable {{ site }}, which is the Site object representing the current site. Also, the hook for providing item URLs will use the domain from the current Site object if you don’t specify a fully-qualified domain.
- In the authentication framework, django.contrib.auth.views.LoginView passes the current Site name to the template as {{ site_name }}.
- The shortcut view (django.contrib.contenttypes.views.shortcut) uses the domain of the current Site object when calculating an object’s URL.
- In the admin framework, the “view on site” link uses the current Site to work out the domain for the site that it will redirect to.
RequestSite objects¶
Some django.contrib applications take advantage of the sites framework but are architected in a way that doesn’t require the sites framework to be installed in your database. (Some people don’t want to, or just aren’t able to install the extra database table that the sites framework requires.) For those cases, the framework provides a django.contrib.sites.requests.RequestSite class, which can be used as a fallback when the database-backed sites framework is not available.
class requests.RequestSite¶

A class that shares the primary interface of Site (i.e., it has domain and name attributes) but gets its data from a Django HttpRequest object rather than from a database.

__init__(request)¶

Sets the name and domain attributes to the value of get_host().

A RequestSite object has a similar interface to a normal Site object, except its __init__() method takes an HttpRequest object. It’s able to deduce the domain and name by looking at the request’s domain. It has save() and delete() methods to match the interface of Site, but the methods raise NotImplementedError.
get_current_site shortcut¶

Finally, to avoid repetitive fallback code, the framework provides a django.contrib.sites.shortcuts.get_current_site() function.

shortcuts.get_current_site(request)¶

A function that checks if django.contrib.sites is installed and returns either the current Site object or a RequestSite object based on the request. It looks up the current site based on request.get_host() if the SITE_ID setting is not defined.

Both a domain and a port may be returned by request.get_host() when the Host header has a port explicitly specified, e.g. example.com:80. In such cases, if the lookup fails because the host does not match a record in the database, the port is stripped and the lookup is retried with the domain part only. This does not apply to RequestSite, which will always use the unmodified host.
Ticket #4885 (closed Bugs: fixed)
Access violation in set_tss_data at process exit due to invalid assumption about TlsAlloc
Description
We've recently upgraded to Boost 1.44 and have started seeing Access Violations from set_tss_data during process exit under various conditions. We are building with Visual Studio 2008 and are seeing the problems on both 32- and 64-bit architectures.
Here's an example stack trace from a crash:
boost_thread-vc90-mt-1_44.dll!boost::detail::heap_new_impl<boost::detail::tss_data_node,void const * __ptr64 & __ptr64,boost::shared_ptr<boost::detail::tss_cleanup_function> & __ptr64,void * __ptr64 & __ptr64,boost::detail::tss_data_node * __ptr64 & __ptr64>(const void * & a1=, boost::shared_ptr<boost::detail::tss_cleanup_function> & a2={...}, void * & a3=0x00000000003a6c40, boost::detail::tss_data_node * & a4=0x9b0d8d481675c085) Line 208 + 0x20 bytes  C++
boost_thread-vc90-mt-1_44.dll!boost::detail::set_tss_data(const void * key=0x000000005d009600, boost::shared_ptr<boost::detail::tss_cleanup_function> * func=0x00000000001efc28, void * tss_data=0x0000000000000000, bool cleanup_existing=true) Line 590  C++
libut.dll!`anonymous namespace'::`dynamic atexit destructor for 'ticTocPrevTotalsVector''() + 0x38 bytes  C++
> libut.dll!_CRT_INIT(void * hDllHandle=0x0000000000000001, unsigned long dwReason=0, void * lpreserved=0x0000000000000000) Line 449  C
libut.dll!__DllMainCRTStartup(void * hDllHandle=0x000000000038f180, unsigned long dwReason=3757760, void * lpreserved=0x000000005cfa6b48) Line 560 + 0xd bytes  C
ntdll.dll!0000000077b33801() [Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!0000000077b33610()
msvcr90.dll!00000000660a1b8b()
test_manager.dll!runTests(int argc=1, char * * argv=0x00000000006a6890) Line 768 + 0x8 bytes  C++
pkgtest.exe!main(int argc=0, char * * argv=0x0000024d06b13a83) Line 14 + 0x59 bytes  C++
pkgtest.exe!__tmainCRTStartup() Line 586 + 0x19 bytes  C
kernel32.dll!0000000077a0f56d()
ntdll.dll!0000000077b43281()
After some digging, it appears that there is an invalid assumption about TlsAlloc in thread/src/win32/thread.cpp: namely, that it cannot return zero. The tss implementation uses zero as a sentinel value for initialization. As far as I can tell, however, the only "illegal" return value for TlsAlloc is the constant TLS_OUT_OF_INDEXES, which is defined as -1.
It appears that TlsAlloc happily returns zero as a valid index when called during process shutdown.
I looked at the solution for #4736 which is on the trunk, but it appears to make the same assumption that TlsAlloc cannot return zero.
Attachments
Change History
comment:1 Changed 6 years ago by cnewbold
- Owner set to anthonyw
- Component changed from None to thread
comment:2 Changed 6 years ago by martin.ankerl@…
I have got the same problem with boost 1.43 in combination with boost log. Is there any workaround possible for this?
comment:4 Changed 5 years ago by viboes
- Owner changed from anthonyw to viboes
- Status changed from new to assigned
- Milestone changed from To Be Determined to Boost 1.49.0
Changed 5 years ago by viboes
- attachment 4885.patch
added
win32/pthread.cpp
comment:5 Changed 5 years ago by viboes
Please, could someone try this patch? Could you attach an example that shows the issue?
comment:7 Changed 5 years ago by Ulrich Eckhardt <ulrich.eckhardt@…>
In r76752, tls_out_of_index was actually declared as a variable, even though that is actually a constant. Otherwise, the changes there are completely right. I'll attach a patch, hopefully making it clear what I mean and also with a better workaround for the corresponding constant missing in some CE SDKs.
Changed 5 years ago by Ulrich Eckhardt <ulrich.eckhardt@…>
- attachment boost thread ticket4885.patch
added
patch
comment:10 Changed 5 years ago by viboes
- Milestone changed from Boost 1.51.0 to To Be Determined
comment:11 Changed 5 years ago by Jonathan Jones <jonathan.jones@…>
- Cc jonathan.jones@… added
comment:12 Changed 5 years ago by viboes
comment:13 Changed 5 years ago by viboes
- Milestone changed from To Be Determined to Boost 1.51.0
comment:14 Changed 5 years ago by viboes
- Status changed from assigned to closed
- Resolution set to fixed | https://svn.boost.org/trac/boost/ticket/4885 | CC-MAIN-2017-09 | refinedweb | 634 | 50.84 |
----- "Ken McDonell" <kenj at internode.on.net> wrote:
> I've been threatening to get this out for sometime now.
>
> There is no code to back any of this up (yet), it really is a
> proposal ... so please let me know if you think this is a good or bad
> idea, and holes being picked in the issues covered would be most
> welcome, as would better ideas.

Looks pretty good to me. One section that seems to be missing is "Changes for pmcd", at least for completeness & to give a more clear description of the PDU exchanges that'd be involved, maybe?

There's a misconception: "...update global PMNS and send pmcd a SIGHUP signal". I also thought that was how it works, but that's not what pmdammv actually does. I think that approach is deadlock prone - the signal to pmcd seems to cause a request to the PMDA (I can't remember whether I decoded which request that was now), but the PMDA is blocked in kill(2) and never responds - pmcd ends up terminating it due to the timeout, and (amusingly) also ends up restarting it right away cos it gets a SIGHUP! Perhaps Max can remember more of the details of that mystery pmcd->pmda PDU.

What MMV actually does is send PMCD a PM_ERR_NOTREADY and then a PM_ERR_READY pair of error PDUs (see callers of mmv_reload_maybe). The code in src/pmcd/src/pmcd.c HandleReadyAgents() returns "true" when the READY PDU comes in and pmcd reloads the namespace (pmcd.c around line 848).

So, one *big* problem with this approach (in addition to the "ugly, error prone" rationale you have already) is that the NOTREADY gets sent back to _clients_ too. Which means that whenever an agent is reconfigured, even if that reconfiguration has nothing to do with the (mmv) request/pmid in question, we end up seeing errors on the client (which confuses pmie rules, which may send spurious and bad status out, and means pmlogger gets no data for that sample - which is annoying if the change had nothing to do with that particular request).

cheers.
-- Nathan | http://oss.sgi.com/pipermail/pcp/2009-July/000485.html | CC-MAIN-2016-26 | refinedweb | 352 | 68.3 |
#include <hallo.h>

* Wouter Verhelst [Fri, Jan 26 2007, 02:15:35PM]:
> Package: udev
> Severity: normal
>
> On Sun, Jan 21, 2007 at 06:02:10PM -0500, Dumitrescu, Eduard wrote:
> > My computer is connected to my piano through a usb-midi cable. If
> > I use my piano while the computer starts up, whether /dev/dsp is
> > renamed to /dev/dsp1,
>
> Is that a problem? If so, how? Software that uses your MIDI-interface
> should just work with dsp1, too.

Looks like the same design problem I complained about in the past with having /dev/cdrw1+ while /dev/cdrw is missing. And I still think that this idea of linking ONLY /dev/functionality0 to /dev/functionality is stupid if the enumeration is not created based on the list of capable devices but just by getting the numbers from the set of related devices (any CD reader, or any alsa-device, etc.). /dev/foo should better be linked to /dev/fooN (N == lowest possible number).

Eduard.
Export store to Excel
Well, here's an exporter that can export a grid, tree or simply a store to excel. It's a fork from another project, I adapted it to work with ExtJs 4, using the new class system and fixed a couple of bugs.
The Csv exporter isn't implemented and the example and compiled files are not updated (so don't use Exporter-all.js).
It exports all the data in the store loaded at that time. If a grid is used it uses the renderers and column configurations from it.
The download is made through a button and uses datauri, so it doesn't work in older ie versions.
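The data-URI mechanism can be sketched as follows. This is a hedged Node.js illustration only; the workbook XML below is a placeholder, not the exporter's actual SpreadsheetML output:

```javascript
// Sketch of the data-URI download the exporter button relies on: the
// generated SpreadsheetML is base64-encoded and handed to the browser as a
// link target. The XML here is a stand-in for the real workbook.
const workbookXml =
  '<?xml version="1.0"?>' +
  '<ss:Workbook xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"/>';

const base64 = Buffer.from(workbookXml, 'utf8').toString('base64');
const dataUri = 'data:application/vnd.ms-excel;base64,' + base64;

// A browser follows this URI and offers the decoded bytes as a download;
// older IE versions ignore data URIs, hence the limitation mentioned above.
console.log(dataUri.slice(0, 45));
```

Because the whole file travels inside the URI, there is no server round-trip, which is why this approach breaks on browsers without data-URI support.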
I hope it's useful; if you find bugs or make improvements, just fork it and send me a pull request.
nice work. I'll check it out this weekend! :-)

Lead Trainer / Sencha Specialist
Community And Learning Systems
Lead Architect
DigitalTickets.net
I did not use the Exporter-all.js file as the author suggested but included all other files:
<script type="text/javascript" src="Base64.js"></script>
<script type="text/javascript" src="Button.js"></script>
<script type="text/javascript" src="Formatter.js"></script>
<script type="text/javascript" src="./excelFormatter/ExcelFormatter.js"></script>
<script type="text/javascript" src="./excelFormatter/Workbook.js"></script>
<script type="text/javascript" src="./excelFormatter/Worksheet.js"></script>
<script type="text/javascript" src="./excelFormatter/Cell.js"></script>
<script type="text/javascript" src="./excelFormatter/Style.js"></script>
<script type="text/javascript" src="Exporter.js"></script>
However, I got the namespace undefined error when loading:
namespace is undefined
if (namespace === from...substring(0, from.length) === from) { ext-all-debug.js (line 3487)
Any idea?
You don't have to include all the files if you have the ext loader activated. It uses the new class system and defines its dependencies. You only have to include Exporter.js
Hello,
It does not work for me on ext4.
I just add the script in my index:
Code:
<script type="text/javascript" src="app/lib/Exporter-all.js">
Code:
dockedItems: [{
    xtype: 'toolbar',
    dock: 'top',
    items: [{
        text: 'Nouveau ticket',
        iconCls: 'addticket',
        action: 'newticket'
    }, {
        text: 'Prendre en charge',
        iconCls: 'linkticket',
        action: 'chrg',
        hidden: true
    }, {
        iconCls: 'summary',
        text: 'Aperçu',
        enableToggle: true,
        pressed: true,
        scope: this,
        toggleHandler: this.onSummaryToggle
    }, '->', {
        text: 'Archiver',
        iconCls: 'datab',
        action: 'archi'
    }, new Ext.ux.Exporter.Button({
        component: this,
        text: "Download as .xls"
    })]
}],
Thx in advance for your help
Vaucer
The exporter-all.js is not updated, so you have to include Exporter.js
Ran into the same problem as nuskin, but it happened for me when I tried just including Exporter.js. When I included all the files, I was able to actually get into the button class when adding the button, but I'm not sure what I need to set in the button's config to get it to work. Tried the storeId, tried the store itself, and even tried the grid as the component. Any ideas?
I use it like this. If the exporterbutton is inside a grid or a tree it autodetects the store. You can use it in a grid without creating a new class, just add the 'uses' config and the exporterbutton in the dockedItems.
Code:
Ext.Loader.setConfig({enabled: true});
Ext.Loader.setPath('Ext.ux', '/path-to-ext-ux');

Ext.define('App.view.SomeGrid', {
    extend: 'Ext.grid.Panel',
    uses: [ 'Ext.ux.exporter.Exporter' ],
    initComponent: function() {
        this.store = "SomeStore";
        this.dockedItems = [{
            xtype: 'toolbar',
            dock: 'top',
            items: [{
                xtype: 'exporterbutton'
            }]
        }];
        this.columns = [/* YOUR COLUMNS */];
        this.callParent(arguments);
    }
});
Thanks, that worked except that in the button class setComponent function the columns don't have the .on function on them; they seem like a simple array. Also the show and hide events never seemed to get fired. So I set it on the render and used the headerCt instead of columns, which seems to work, but when I actually click the download, it navigates to data:application/vnd.ms-excel;base64,PD94bWwgdm....Lots more just like it, but doesn't do anything more. As I haven't tried to use the data stuff before I'm not sure what it's supposed to do. I'm trying to make a simpler version of my grid to show as an example and see if it works, but wanted to see if you had ideas ahead of time, thanks again for any help you can provide.
Code:
Ext.Array.each(this.component.headerCt.items.items, function(col) {
    col.on("render", setLink, me);
});
Hi all
I have the same problem as teemac, I think.
I get the same result from my app as I get from the exporter example. When I click the button in both the example and my app, I get a part file from firefox and this only contains some column information plus some other values that don't appear in the grid.
I'm using the Exporter.js in my app, so that should be okay.
I made a small change to the Worksheet.js after line 150:
Code:
if(record.fields.get(name)) {
    var value = record.get(name),
        type = this.typeMappings[col.type || record.fields.get(name).type.type];
}
Any suggestions?
Ahrg! Again my ARM application crashed somewhere and I ended up in a HardFault exception :-(. In my earlier post I used a handler to get information from the processor what happened. But it is painful to add this handler again and again. So I decided to make things easier for me: with a special HardFault Processor Expert component :-).
After adding this HardFault component to my project, it automatically adds an entry to the vector table. So no manual steps are needed: having the component in the project and enabled will do the needed steps.
The component implements two functions: one is the interrupt handler itself:
__attribute__((naked)) void HF1_HardFaultHandler(void) {
  __asm volatile (
    " movs r0,#4      \n"  /* load bit mask into R0 */
    " movs r1, lr     \n"  /* load link register into R1 */
    " tst r0, r1      \n"  /* compare with bitmask */
    " beq _MSP        \n"  /* if bitmask is set: stack pointer is in PSP. Otherwise in MSP */
    " mrs r0, psp     \n"  /* otherwise: stack pointer is in PSP */
    " b _GetPC        \n"  /* go to part which loads the PC */
    "_MSP:            \n"  /* stack pointer is in MSP register */
    " mrs r0, msp     \n"  /* load stack pointer into R0 */
    "_GetPC:          \n"  /* find out where the hard fault happened */
    " ldr r1,[r0,#20] \n"  /* load program counter into R1. R1 contains address of the next instruction where the hard fault happened */
    " b HandlerC      \n"  /* decode more information. R0 contains pointer to stack frame */
  );
  HandlerC(0); /* dummy call to suppress compiler warning */
}
The above code will load the stack pointer into R0, while R1 contains the PC right after where the problem occurred.
At the end it calls a function implemented in C (
HandlerC()) which performs more decoding:
/**
 * This is called from the HardFaultHandler with a pointer to the stacked
 * exception frame in R0.
 */
static void HandlerC(dword *hardfault_args) {
  volatile unsigned long stacked_r0, stacked_r1, stacked_r2, stacked_r3;
  volatile unsigned long stacked_r12, stacked_lr, stacked_pc, stacked_psr;
  volatile unsigned long _CFSR, _HFSR, _DFSR, _AFSR, _BFAR, _MMAR;

  stacked_r0  = ((unsigned long)hardfault_args[0]);
  stacked_r1  = ((unsigned long)hardfault_args[1]);
  stacked_r2  = ((unsigned long)hardfault_args[2]);
  stacked_r3  = ((unsigned long)hardfault_args[3]);
  stacked_r12 = ((unsigned long)hardfault_args[4]);
  stacked_lr  = ((unsigned long)hardfault_args[5]);
  stacked_pc  = ((unsigned long)hardfault_args[6]);
  stacked_psr = ((unsigned long)hardfault_args[7]);

  /* Configurable Fault Status Register: consists of MMSR, BFSR and UFSR */
  _CFSR = (*((volatile unsigned long *)(0xE000ED28)));
  /* Hard Fault Status Register */
  _HFSR = (*((volatile unsigned long *)(0xE000ED2C)));
  /* Debug Fault Status Register */
  _DFSR = (*((volatile unsigned long *)(0xE000ED30)));
  /* Auxiliary Fault Status Register */
  _AFSR = (*((volatile unsigned long *)(0xE000ED3C)));
  /* MemManage Fault Address Register: address which caused the MemManage fault */
  _MMAR = (*((volatile unsigned long *)(0xE000ED34)));
  /* Bus Fault Address Register: address which caused the bus fault */
  _BFAR = (*((volatile unsigned long *)(0xE000ED38)));

  /* suppress warnings about unused variables */
  (void)stacked_r0; (void)stacked_r1; (void)stacked_r2; (void)stacked_r3;
  (void)stacked_r12; (void)stacked_lr; (void)stacked_pc; (void)stacked_psr;
  (void)_CFSR; (void)_HFSR; (void)_DFSR; (void)_AFSR; (void)_BFAR; (void)_MMAR;

  __asm("bkpt #0\n"); /* cause the debugger to stop */
}
It will use the stack pointer in R0 to write the stacked registers into variables for easier inspection.
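The layout of that stacked frame is fixed by the Cortex-M architecture, which is why the handler can index it blindly. As a language-neutral illustration (sketched in JavaScript rather than C, with made-up register values), decoding the frame is just indexing eight 32-bit words:

```javascript
// Cortex-M exception stack frame, in 32-bit words from the faulting SP:
// [0]=R0 [1]=R1 [2]=R2 [3]=R3 [4]=R12 [5]=LR [6]=PC [7]=xPSR
function decodeStackFrame(words) {
  return {
    r0: words[0], r1: words[1], r2: words[2], r3: words[3],
    r12: words[4], lr: words[5], pc: words[6], psr: words[7],
  };
}

// Example values only: R3 holds a NULL function pointer, similar to the
// 'blx r3' crash discussed in this post.
const frame = decodeStackFrame([0, 0, 0, 0x0, 0, 0x4db, 0x0, 0x21000000]);
console.log(frame.lr.toString(16)); // stacked LR, i.e. where we came from
```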
Automatic Vector Allocation
The component uses a new feature available in CodeWarrior for MCU10.3: it is able to automatically assign a handler to a vector.
%if (CPUfamily = "Kinetis") %- ============================================================================= %- Allocation of interrupt vectors by component. %- ============================================================================= %- %if (defined(PEversionDecimal) && (PEversionDecimal >=0 '1282')) %- this is only supported with MCU 10.3 %- Get interrupts info from CPU database %- Note: this is done only for Kinetis for now. %:tmp = %CPUDB_define_Interrupt_Vectors_info() %- %for vect from InterruptVectors %if %"%'vect'" = 'defaultInt' %if vect = 'ivINT_Hard_Fault' %define_prj %'vect' %'ModuleName'%.%HardFaultHandler %else %- keep PEx default %endif %endif %endfor %- %endif %- MCU 10.3 %- %else %error "this component is only supported for GCC and Kinetis!" %endif %-(CPUfamily = "Kinetis")
As this is specific for MCU10.3 and the Processor Expert version in it, it needs to test against the version number. Additionally it is only supported for GCC and Kinetis, thus the second test.
Usage
Using the component is simple: just add it to the list of components in the project:
The HardFaultHandler() automatically gets assigend to the ARM Cortex HardFault Interrupt.
In case of a hard fault, the debugger will stop on the
bkpt 0x0 instruction. The Variables View shows the stacked registers, and the
stacked_lr shows the address where we are coming from:
Then that link register/PC address can be entered in the Disassembly View to see what where the problem happened:
💡 Keep in mind that the Link Register points to the instruction *after* the problem. And that an odd address bit indicates that the code is executed in Thumb mode.
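That Thumb bit can be checked and stripped with a simple mask; here is a small sketch (in JavaScript, with an example address) of what the debugger effectively does:

```javascript
// Bit 0 of a code address on Cortex-M indicates Thumb state; the actual
// instruction lives at the address with that bit cleared.
function thumbInfo(addr) {
  return { isThumb: (addr & 1) === 1, target: addr & ~1 };
}

const info = thumbInfo(0x4d9); // example: odd address -> Thumb, code at 0x4d8
```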
For the above case, the
blx r3 instruction at address
0x4d8 is causing the problem. And inspecting the stacked_r3 register, it is clear that R3 has 0x0 or a NULL pointer. Calling a function at address 0x0 is not a good idea ;-).
Summary
This new HardFault component makes my life easier: I simply can add it to the project, and if I run into a HardFault exception, I get the information I need to find the problem. The component has parts written in inline assembly, and for now supports ARM GNU gcc for Kinetis. The new component is available from this link.
It's snowing on your website?
I found your technique to split out the interrupts and exceptions to individual handler useful also on a STM32 ARM based MCU I’m working with. Of course I had to create it all by hand and with vi, since there is no ProcessorExpert available.
Yes, it is snowing over here in the Swiss Alps. Nice way to show the regional weather :-).
Pingback: Debugging ARM Cortex-M0+ Hard Fault with MTB Trace | MCU on Eclipse
Here is another interesting post on the topic. Worthwhile reading.
“Cortex-M3 / M4 Hard Fault Handler”
Erich, I have my own design board with a KL04Z8 uC on it. It’s brand new, I’m using the internal OSC. After following your tutorial, I kept receiving this error: Cpu_ivINT_Hard_Fault. What could go wrong if my project uses only the generated code from PE without any modification? Many thanks!!
I suggest that you step through your code (from the startup) to see what code is causing this? I suspect that you are accessing the RTC (realtime clock) without having it powered or clocks enabled. Happened to me in the past.
Pingback: C++ with Kinetis Design Studio | MCU on Eclipse
Pingback: Debugging ARM Cortex-M Hard Faults with GDB Custom Command | MCU on Eclipse
Hi Erich, I added your FSL_USB_Stack component to a MK20DX256VLL10 microcontroller, I was running other tasks on it but I was getting a Hard Fault most of the time, in the CFSR the PRECISERR bit was set but couldn’t find any info on possible causes for that kind of fault, the only way I could avoid it was putting some breakpoints in some places and continuing execution after hitting them, it was very strange. I spent a lot of hours researching the possible causes and learned a lot about the ARM Cortex M4 cpu and fault registers. In the end out of desperation I increased the “Interrupt stack size” from 256 to 512 in the MQX Lite component and that did the trick, I’m sharing this experience here if anyone faces the same or similar problem, your HardFault component was of great great help! I think KDS should have a special View that shows fault reports, such as in Keil () but well.. Thanks for your efforts in this great blog!
Hi Carlos,
thanks for sharing your experience and solution! Yes, in my experience stack overflows are at constant source of application crashes, and usually if things do not go well, I try to increase the stack to see if this solves the problem.
Thanks again!
Hi Erich,
Again a very big thanks for such an awesome tutorial.
I am using FRDM-K22F board. I am using following components.
1. ADC
2. FREE-RTOS
I have a very simple task which enables ADC conversion. Once I enable it, I call vTaskDelay(). In this function, I am hitting Hard_fault. I used the hardfault component. My observations are as follows. Probably FreeRTOS has some bug. Or I might be wrong.
1. LR is pointing to a function in tasks.c file. Funtion: vTaskDelay().
Line: if( uxListRemove( &( pxCurrentTCB->xGenericListItem ) ) == ( UBaseType_t ) 0 )
2. PC is pointing to a function in list.c. Function: uxListRemove();
Line: pxItemToRemove->pxPrevious->pxNext = pxItemToRemove->pxNext;
This basically means that when delay is called, the task tries to remove itself from the ready list and while doing so, it is hitting the hardfault error. I am stuck currently. Can you please help me?
Hi Vishal,
Are you using interrupts? One common programming error is not properly assigning the correct interrupt priorities in relationship to the RTOS.
Have a look at
The other thing is: if weird things happen, try to increase the task stack size. Just in case if your problem is caused by a stack overflow (do you have the stack overflow check hook enabled?).
I hope this helps,
Erich
Hi Erich,
I have the same problem like Vishal, but I am running it on my KL27 MCU.
I am not using interrupts, just like polling from two ADC channels alternatively.
After read the above feedback, I did increased the stack size, but still have the hardfault handler error. In addition, I tried to enable the stack overflow hook, but I am using IAR Workbench and not sure what component i need for it.
Please advise. Thanks a lot!
Gilbert
Hi Gilbert,
it is not easy on a Cortex-M0+ to find out the reason for a hard fault.
About the stack overflow hook: this is a setting in FreeRTOSConfig.h; set
configCHECK_FOR_STACK_OVERFLOW to 1.
And make sure your variables on the stack are aligned: on the M0 it gives a hard fault for unaligned access, while the M4 does two access (no hardfault, but slower execution).
I hope this helps,
Erich
Hi Erich,
Thanks for your recommendation.
Greatly appreciate.
Regards,
Gilbert
Hi Erich,
I am really sorry. It was my mistake. I was doing something wrong in my code. It works perfectly fine. Thanks a lot lot lot for such a nice PE component.
Hi, I am facing the same issue, What did you do to correct it?
Not sure what the problem was, but reading it again: could it be that the task was not running in an endless loop? Make sure that the code in the task stays in the task inside a for(;;) {…} and does not leave the function/task.
I hope this helps.
Pingback: How to Add Bluetooth Low Energy (BLE) Connection to ARM Cortex-M | MCU on Eclipse
Getting this error on compile with the hard fault component, any ideas?
Generated_Code/HF1.c:124:(.text.HF1_HardFaultHandler+0x14): relocation truncated to fit: R_ARM_THM_JUMP11 against symbol `HF1_HandlerC’ defined in .text.HF1_HandlerC section in /var/folders/kn/fhk1p9tx2dz52qqq3jcmy7_h0000gn/T//ccAhPPjD.ltrans6.ltrans.o
Never seen that one :-(. It looks like the jump from the handler to the HandlerC function cannot be resolved with an 11-bit address somehow? Maybe your linker has placed them too far apart? Is it a compiler error or linker error?
Pingback: Debugging ARM Cortex-M0+ HardFaults | MCU on Eclipse
Hey Erich, this is a most helpful component, thank you. I have a situation where an odd fault is manifesting in a bootloader+application scenario, and the debugger can’t be running (I think). Do you have any thoughts on accessing this diagnostic data without the debugger? I guess I could dump the numbers out a com port by writing to low level component calls, but the MCU is in an ‘exception’ state so I’m not sure how well that would work.
Thanks in advance.
You always should be able to debug things, even with a bootloader. Especially if you have the source code, you could add the symbols of it and will have more information.
Writing things to a COM port is possible, as long as not using any interrupts for this.
Erich I’m back again on a similar issue. I patched in some code to dump the regs captured in HF1_Handler to internal EEPROM (Kinetis). This actually works surprisingly, but occasionally it rebuilds PEx components and nukes the code I added in. I’m not familiar with how to control that part of auto code generation; do you have any tips? Basically I want to #include a .h file with inline code in it (add in a hook, I guess).
Thanks in advance.
You can disable Processor Expert code generation for a component, see
That way your changes are not overwritten.
Perfect, thanks so much Erich!
Introduction: Remote Controlled Webcam Using Arduino, SensorMonkey, JQuery and Justin.tv
Web-enable your interactive sensors over desktop, smartphone and tablet devices.
This instructable describes how to build a remote-controlled pan-and-tilt webcam using an Arduino, SensorMonkey, jQuery and Justin.tv, with live video viewable from any device with an HTML5/Flash-compatible web-browser.
Step 1: Gathering Materials
Hardware:
- Arduino (I use an Uno but older boards such as a Duemilanove will work fine)
- USB cable to connect Arduino to host computer
- 2 x servo motors - one for pan and one for tilt (I use Hitec HS-422 Deluxe servos)
- Mounting brackets to construct pan and tilt assembly using servos (I use a Lynx B-Pan/Tilt kit)
- A base of some kind to stabilize your pan and tilt assembly (I use a cheap plastic mini-vice!)
- Webcam (I use a Microsoft LifeCam VX-3000 but any will do as long as it's compatible with your OS)
- External power supply to provide current to servos (I use rechargeable Ni-MH 9V batteries)
- 9V to barrel jack adapter to connect external power supply to Arduino
- Assorted wires to connect pan and tilt assembly to Arduino
- Breadboard
Software:

- Arduino development environment ()
- Free account on SensorMonkey.com (login with your existing Facebook account)
- Free account on Justin.tv (for live audio/video streaming)
- Bloom (for Windows users) or SensorMonkeySerialNet (for non-Windows users)
- jQuery UI
The 9V batteries don't last long, so if you want to setup your webcam for an extended period of time you'll need to hook it up to a persistent external power supply such as a wall-mounted DC adapter or a high-power density LiPo battery.
Alternatively, you can power one of the 2 servos using another Arduino connected to a different USB port.
Step 2: Mount Webcam, Connect Arduino and Upload Sketch
First, I build the pan and tilt assembly by attaching the servos to the mounting brackets as described here (if you're using different components, please follow the relevant assembly guide(s) for your particular servos and mounting brackets instead).
Next, I attach the webcam to the top of the tilt servo's mounting bracket. In my case, I simply had to remove (unscrew) the universal attachment base from the bottom of the webcam and screw the device into one of the holes in the mounting bracket. Depending on your webcam, you may need to attach it by some other means (you can always use sticky tape if all else fails!).
To stabilize the whole assembly, I place it into the plastic mini-vice and fix the vice to a flat surface (i.e. the top of my desk). Again, depending on your components, you may have other requirements. As long as the webcam can pan and tilt without falling over that's all that matters.
From here, I wire the servos to the Arduino as shown in the pictures and the circuit diagram (made using Fritzing). The tilt servo is connected to analog pin 0, while the pan servo is connected to analog pin 5. The Arduino is connected to the host computer using the USB cable and powered using the external power supply through the built-in barrel jack adapter.
Finally, to control the servos, I upload the following sketch to the Arduino's microcontroller using the development environment:
#include <Servo.h>
Servo pan, tilt;
void setup() {
pan.attach( A5 );
tilt.attach( A0 );
Serial.begin( 9600 ); // Open the serial port.
}
void loop() {
if( Serial.available() ) {
byte b = Serial.read();
// Map high 4 bits of incoming byte to pan rotation in degrees.
pan.write( map( b >> 4, 0, 15, 0, 180 ) );
delay( 15 );
// Map low 4 bits of incoming byte to tilt rotation in degrees.
tilt.write( map( b & 0x0F, 0, 15, 0, 180 ) );
delay( 15 );
}
}
The sketch is very basic. It opens the serial port and reads bytes one at a time. Each byte is assumed to contain a pan and tilt rotation pair; the high 4 bits are the pan rotation (0 to 15 inclusive) and the low 4 bits are the tilt rotation (0 to 15 inclusive). This gives 16 different levels (i.e. 2^4) to choose from with respect to each dimension of motion and makes it easy to encode the webcam's position using hexadecimal character pairs. Each servo has a range of 0 to 180 degrees. So, for example, a hexadecimal character pair of 7A means 7/15 x 180 (84 degrees) on the pan axis and 10/15 x 180 (120 degrees) on the tilt axis. A hexadecimal character pair of 00 encodes a 0 degree rotation on both pan and tilt axes, while FF encodes a full 180 degree rotation on both pan and tilt axes. The mapping for each character is shown below:
0 : 0 degrees
1 : 12 degrees
2 : 24 degrees
3 : 36 degrees
4 : 48 degrees
5 : 60 degrees
6 : 72 degrees
7 : 84 degrees
8 : 96 degrees
9 : 108 degrees
A : 120 degrees
B : 132 degrees
C : 144 degrees
D : 156 degrees
E : 168 degrees
F : 180 degrees
If I needed fine-grained control of the servos' motion, I could encode the pan and tilt rotations as separate bytes. In this case, however, using a single byte only is an efficient means of encoding the co-ordinate system for controlling the two servos and provides adequate motion range for a simple webcam.
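To make the encoding concrete, here is a small sketch (in JavaScript, mirroring the Arduino sketch's map() call) that packs two 0-15 levels into one byte and maps a level back to degrees:

```javascript
// Pack pan/tilt levels (each 0-15) into a single byte: pan in the high
// nibble, tilt in the low nibble -- exactly what the Arduino sketch unpacks
// with b >> 4 and b & 0x0F.
function encode(panLevel, tiltLevel) {
  return ((panLevel & 0x0f) << 4) | (tiltLevel & 0x0f);
}

// Same integer mapping as Arduino's map(level, 0, 15, 0, 180).
function toDegrees(level) {
  return Math.floor((level * 180) / 15);
}

const b = encode(0x7, 0xa); // the "7A" example above
console.log(b.toString(16), toDegrees(b >> 4), toDegrees(b & 0x0f)); // 7a 84 120
```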
Step 3: Download and Install Bloom (or SensorMonkeySerialNet)
So far, I have:

- constructed a pan and tilt assembly using 2 servo motors
- mounted a webcam to the pan and tilt assembly
- wired the pan and tilt assembly to an Arduino
- connected the Arduino to a host computer over USB
Bloom is a serial port to TCP/IP socket redirector. It listens for incoming connections on a user-specified TCP/IP port and relays data between connected clients and the serial port. I use the following settings:
- port: 20000
- pollingFreq: 50
- baudRate: 9600
- waitTime: 1000
Step 4: Login to SensorMonkey and Publish Stream for Controlling Arduino

SensorMonkey is a free service for streaming sensor data live over the Internet. Here, I use it to publish a private stream through which remote clients can control the Arduino.
After logging into SensorMonkey and opening my control panel, I'm going to add an entry for the Arduino named "My Webcam". By clicking on the newly added entry, I can configure the connection parameters; namely, the IP address and port number where the device can be found.
Recall from Step 3 that I am using Bloom (or SensorMonkeySerialNet) to map the Arduino's serial port to TCP/IP port 20000 on my local machine. So, I enter a port number of 20000 and an IP address of 127.0.0.1 (the local loopback address). I'm not reading any data from the Arduino, so I can use the default format description file provided by the control panel.
After clicking 'Connect', I navigate to the 'Control' tab where I can test my pan and tilt assembly by sending commands to the Arduino. By prefixing the commands with a # symbol, SensorMonkey will interpret the text as hexadecimal character pairs (i.e. binary octets). So, for example, I can instruct the pan and tilt assembly to assume a rotation of 180 degrees on both axes by typing #FF into the text field and pressing return on my keyboard (or clicking the 'Send Text' button). Try out the following combinations to test your pan and tilt assembly (be careful not to exceed the practical rotation range of your servo motors):
#08 : Pan 0 degrees, Tilt 96 degrees
#0F : Pan 0 degrees, Tilt 180 degrees
#FF : Pan 180 degrees, Tilt 180 degrees
#F8 : Pan 180 degrees, Tilt 96 degrees
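Under the hood, each '#'-prefixed pair is simply parsed as hexadecimal into the single byte the Arduino sketch reads from the serial port. A minimal sketch of that interpretation:

```javascript
// Turn a '#XY' command into the byte (0-255) that gets forwarded to the
// serial port, matching the '#F8' etc. examples above.
function commandToByte(cmd) {
  if (cmd[0] !== '#' || cmd.length !== 3) {
    throw new Error('expected a #-prefixed hex pair, e.g. #F8');
  }
  return parseInt(cmd.slice(1), 16);
}

const b = commandToByte('#F8'); // 0xF8 -> pan 180 degrees, tilt 96 degrees
```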
Finally, after testing my pan and tilt assembly, I navigate to the 'Stream' tab where I can publish the stream for controlling the Arduino live over the Internet. I'm required to select at least one variable when streaming (even if I don't actually use it) so I select the default variable ('Unsigned 8-bit Variable'), choose a stream type of 'Private', and click 'Publish'. The stream must be made private so as to allow remote clients to write to it.
In Step 6, I will write a simple HTML webpage to connect to my namespace, subscribe to my stream, and allow me to send commands to the Arduino to control the pan and tilt assembly using interactive sliders.
Step 5: Login to Justin.tv and Publish Audio/Video Stream From Webcam
Justin.tv is a free online service for broadcasting live audio/video streams. After creating an account, I can publish my live webcam feed by clicking the 'Go Live!' button in the top-right corner of the page. I allow access to my webcam when asked and click the 'Start' button to begin broadcasting. Once my channel is live, I can embed the output in a webpage using some simple HTML code (see the next step).
Important! If you have more than one webcam make sure to select the one that is mounted to the pan and tilt assembly. You can do this through the channel options when broadcasting your stream.
Step 6: Remotely Control Webcam Using JQuery UI
In the final part of this tutorial, I'm going to combine the live streams from SensorMonkey and Justin.tv to create a simple webpage that can be used to remotely control my webcam. I have downloaded the latest versions of jQuery and jQuery UI and referenced them in the webpage below:
(Important! You must replace YOUR_NAMESPACE and YOUR_PRIVATE_KEY in the code below with those assigned to you when you login to SensorMonkey. You'll also need to replace YOUR_CHANNEL with the name of your Justin.tv channel)
--------------------------------------------------------------------------------
<!DOCTYPE html>
<html>
<head>
<title>Remote controlled webcam using Arduino, SensorMonkey, jQuery and Justin.tv</title>
<link rel="stylesheet" type="text/css" href="jquery-ui-1.8.21.custom.css" />
<style type="text/css">
body {
padding: 10px;
}
#container {
margin-bottom: 20px;
}
#webcam {
float: left;
height: 240px;
margin-right: 20px;
width: 320px;
}
#tilt-slider {
float: left;
height: 240px;
margin-right: 10px;
}
#tilt-display {
height: 240px;
line-height: 240px;
}
#pan-slider {
margin-bottom: 10px;
width: 320px;
}
#pan-display {
text-align: center;
width: 320px;
}
.rotation {
color: #F6931F;
font-weight: bold;
}
</style>
<script type="text/javascript" src="jquery-1.7.2.min.js"></script>
<script type="text/javascript" src="jquery-ui-1.8.21.custom.min.js"></script>
<script type="text/javascript" src=""></script>
<script type="text/javascript" src=""></script>
</head>
<body>
<div id="container">
<div id="webcam">
<object type="application/x-shockwave-flash" data="" id="live_embed_player_flash" height="240" width="320" bgcolor="#000000">
<param name="allowFullScreen" value="true" />
<param name="allowScriptAccess" value="always" />
<param name="allowNetworking" value="all" />
<param name="movie" value="" />
<param name="flashvars" value="hostname=" />
</object>
</div>
<div id="tilt-slider"></div>
<div id="tilt-display">Tilt: <span class="rotation">96</span></div>
</div>
<div id="pan-slider"></div>
<div id="pan-display">Pan: <span class="rotation">96</span></div>
<script type="text/javascript">
// Converts an integer (or a string representation of one) to a hexadecimal character (0-9A-F).
function toHex( i ) {
return parseInt( i ).toString( 16 ).toUpperCase();
}
$( function() {
// Create tilt slider.
$( "#tilt-slider" ).slider( {
range : "min",
orientation : "vertical",
value : 8,
min : 0,
max : 15,
step : 1,
slide : function( event, ui ) {
// Update UI.
$( "#tilt( $( "#pan-slider" ).slider( "value" ) );
var tilt = toHex( ui.value );
client.deliverToStreamPublisher( "/private/My Webcam", "#" + pan + tilt );
}
} );
// Create pan slider.
$( "#pan-slider" ).slider( {
range : "min",
value : 8,
min : 0,
max : 15,
step : 1,
slide : function( event, ui ) {
// Update UI.
$( "#pan( ui.value );
var tilt = toHex( $( "#tilt-slider" ).slider( "value" ) );
client.deliverToStreamPublisher( "/private/My Webcam", "#" + pan + tilt );
}
} );
// 1. Connect to SensorMonkey
// 2. Join namespace
// 3. Subscribe to stream
var client = new SensorMonkey.Client( "" );
client.on( "connect", function() {
client.joinNamespace( "YOUR_NAMESPACE", "YOUR_PRIVATE_KEY", function( e ) {
if( e ) {
alert( "Failed to join namespace: " + e );
return;
}
client.subscribeToStream( "/private/My Webcam", function( e ) {
if( e ) {
alert( "Failed to subscribe to stream: " + e );
return;
}
} );
} );
client.on( "disconnect", function() {
alert( "Client has been disconnected!" );
} );
} );
} );
</script>
</body>
</html>
--------------------------------------------------------------------------------
The code inside of the <object></object> tags is used to embed the live Justin.tv stream into the webpage. You'll need to replace each instance of YOUR_CHANNEL in the code between these tags with the name of your channel. The stream won't display on iOS devices (iPhone, iPad etc.) but will work on Android smartphones and tablets, as shown in the photos.
I use jQuery UI to create a horizontal pan slider and a vertical tilt slider to control the orientation of the webcam. When one of the sliders is moved to a new position, the code calculates a combined pan/tilt rotation, encodes it as a hexadecimal character pair (as described in Step 2 and Step 4) and sends it through SensorMonkey to the Arduino controlling the servo motors.
Finally, the workflow for connecting to SensorMonkey is very simple (don't forget to replace YOUR_NAMESPACE and YOUR_PRIVATE_KEY in the code above):
- Import client
- Connect to SensorMonkey
- Join namespace
- Subscribe to stream
Once subscribed, I can simply call client.deliverToStreamPublisher() to send data directly to the Arduino through the SensorMonkey service.
That's it! I can now remotely control my webcam in real-time using a combination of Arduino, SensorMonkey, jQuery and Justin.tv. I can upload the webpage to a public webserver and access it from anywhere on any device with a HTML5/Flash compatible web-browser. See the next step for suggestions on improving the implementation described so far.
Step 7: Improvements and Enhancements
- Touchscreen UI: The standard jQuery UI library is not optimized for use on touchscreen devices. As such, the sliders can be a little unwieldy and difficult to manipulate properly on mobile devices. jQuery Mobile can be used instead to provide more intuitive and easy to use UI controls across all popular mobile device platforms.
- Synchronization of Multiple Remote Clients: Currently, the pan and tilt sliders are not synchronized among multiple remote clients. In other words, when one client moves the webcam the change is not reflected in the sliders of the other clients. One way to accomplish this would be to have the Arduino broadcast the current rotations of the servos whenever they are updated. You could then listen for 'publish' and 'bulkPublish' events in the JavaScript code and synchronize the sliders each time an update is received.
- High Resolution Video Encoding: The default encoder used by Justin.tv is not very good. To improve the quality, you can use Wirecast or Flash Media Encoder to produce a higher resolution stream that can be broadcast through Justin.tv instead.
- Alternative/Custom Video Streaming Services: If Justin.tv is not to your liking, there are other free alternatives; Livestream, Ustream.tv and Bambuser to name three of the more popular ones. If you're feeling adventurous, you can setup your own instance of Wowza Media Server on Amazon EC2 to stream your live audio/video feeds.
- Mouse Control: Instead of sliders, you can use the position of the mouse to control the orientation of the webcam by mapping the co-ordinates of the onscreen cursor to pan and tilt rotations. You can then encode the rotation angles as hexadecimal character pairs before sending them to the Arduino through SensorMonkey.
- Remote Control of Additional Actuators: As well as sending commands to control the servos mounted in the pan and tilt assembly, you can send commands to control additional actuators wired to the Arduino. Just define your commands and have the Arduino's firmware parse the incoming bytes from the serial port to identify the types and arguments. You can provide buttons on your webpage to turn LEDs and switches on/off or custom control elements for driving other servos.
- Fine-grained Motion Control: Instead of quantizing the pan and tilt rotations to fit together in a single byte, you could encode them separately and allow the full range of servo motion to be controlled using the sliders (i.e. 0 to 180 degrees in single degree increments).
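The last two suggestions — hex-pair encoding and full 0–180 degree control — can be sketched in a few lines of JavaScript. These helpers are hypothetical (not part of the original project): each angle in 0–180 fits in a single byte, so a pan/tilt pair becomes four hex characters on the wire.

```javascript
// Hypothetical encoding helpers for the fine-grained control idea above.
// Each angle (0-180 degrees) fits in one byte, i.e. two hex characters.
function encodeAngles(pan, tilt) {
  var toHex = function(deg) {
    var h = deg.toString(16);
    return h.length < 2 ? "0" + h : h; // pad to two characters
  };
  return toHex(pan) + toHex(tilt); // e.g. pan=180, tilt=10 -> "b40a"
}

function decodeAngles(msg) {
  // The Arduino firmware would do the reverse when parsing serial bytes.
  return [parseInt(msg.slice(0, 2), 16), parseInt(msg.slice(2, 4), 16)];
}
```

The same round trip could run on the Arduino side in C, parsing two characters at a time from the serial buffer.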
35 Discussions
3 years ago
Hi, my first step in my graduate project is to recover images from the webcam and display them on my PC — how can I do that?
3 years ago
thanks
3 years ago
How you have interface your camera ?? With PC or Arduino directly??
3 years ago
This look good, but those links are down
4 years ago on Introduction
cool project..
hi i have made similar project. but i use human detecting by PIR sensor and servo to control movement webcam if any human detected. i want to add image processing, so if any human movement, PIR sensor detected and webcam automatically captured image and save in PC. could you help me to explain more? how to integrate between PIR sensor and automatically captured in webcam?
appreciate for your replying..
Reply 4 years ago
Hello, could you point me to a tutorial?
miragempro@hotmail.com
Thank you
5 years ago
Hey,
5 years ago on Introduction
6 years ago on Introduction
Awesome project, this is exactly what I need. Thanks for sharing :)
Reply 6 years ago on Introduction
No problem, glad you liked it!!
6 years ago
This looks hard
7 years ago on Introduction
PROBLEM SOLVED >> i can now control it 100% ,,, the problem was that in the namespace i only put the number ( i.e 520 ) ,, but in fact i must put the whole name ( i.e hello 's namespace 520 ) ,,, thxxxxxxx a lot for this project ,, it's really brilliant :)
Reply 7 years ago on Introduction
Hi Natasha,
Good to hear you were finally able to get it working! I should perhaps have made it clearer that you need to enter the entire namespace rather than just the number.
I trust everything is working OK for you now??
Reply 7 years ago on Introduction
yes ,, everything works .. thank you very much .. really appreciated :)
7 years ago on Introduction
i solved the problem of " uploading on a website" i used google hosted libraries and now i have the interface in a website ( from wix.com ) ,, but still sliders dont interact ,, i connect everything well ,, but when i change the position of the slider nothing happens -_-' ,, any suggestions ??
7 years ago on Introduction
i can control the motors from sensormonkey website but when i try from my local server (appserv) it doesnt' work ,, sliders dont interact with the motors ,,
7 years ago on Introduction
hi again :) .. one more question please to make this project perfect ... i uploaded the html file into a website but i didnt' get the sliders because i have to upload the jquery files also ,, but i cant' find a website to upload the files and the html code ,, do u have any suggestions ?? and thanks in advance
Reply 7 years ago on Introduction
Hi!
7 years ago on Introduction
did u use one battery ?? or 3 ??
Reply 7 years ago on Introduction
Only one battery! I just included 3 in the picture (when one ran out, I swapped it for another charged one).. | https://www.instructables.com/id/Remote-controlled-webcam-using-Arduino-SensorMonk/ | CC-MAIN-2020-24 | refinedweb | 3,117 | 60.55 |
In my rails app I can create a client and a worker for that client. Currently, I'm able to list all the workers that have been created. However, this list includes the workers for other clients too.
How do I only list all the workers for a specific client in the client show page?
I have a feeling it has to do with my client#show method.
class ClientsController < ApplicationController
before_action :set_client, only: [:show, :edit, :update, :destroy]
# GET /clients
# GET /clients.json
def index
@clients = Client.all
end
# GET /clients/1
# GET /clients/1.json
def show
@workers = Worker.all
end
# views/clients/show.html.erb
<ul>
<% @workers.each do |worker| %>
<li><%= worker.first_name %> <%= worker.client.business_name %></li>
<% end %>
</ul>
In your ClientsController, your show action should be showing a Client, but your code is fetching from Worker. You want something more like this:
def show
  @client = Client.includes(:workers).find(params[:id])
end

# app/views/clients/show.html.erb
<% @client.workers.each do |worker| %>
  <%= worker.name %>
<% end %>
Note: it looks like you also have a set_client method that is retrieving the current Client and setting the @client instance variable. This is something of an antipattern; in many cases you retrieve data you don't need and make your actions harder to understand at a glance. A small amount of non-logic duplication is better than confusing abstractions!
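The fix above assumes the usual `Client has_many :workers` association. Stripped of Rails, the difference between the buggy and the correct query is just a filter — a plain-Ruby sketch with made-up data:

```ruby
# Plain-Ruby sketch: show only the workers belonging to one client.
Worker = Struct.new(:first_name, :client_id)

workers = [
  Worker.new("Ana", 1),
  Worker.new("Bo",  2),
  Worker.new("Cy",  1),
]

# The buggy show action is the equivalent of listing every worker:
all_names = workers.map(&:first_name)

# What the client page actually wants -- Rails does this via client.workers:
client_names = workers.select { |w| w.client_id == 1 }.map(&:first_name)
```

In Rails, `@client.workers` generates the equivalent `WHERE client_id = ?` query for you, which is why declaring the association is the idiomatic fix.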
Build a Twitter Like Search Feed with React.js and appbase.io
A step-by-step tutorial on building a live Twitter like search feed with React.JS and appbase.io.
This is Part — II of a series on building Twitter like live search feeds with different Javascript frameworks. You can check out the Part-I published on scotch.io where we used jQuery.
Ever tried Twitter Live Search and wondered how it works? On the surface, you search for a #hashtag or a keyword and Twitter shows you a live feed of results, with new tweets appearing continuously after the initial search results were rendered completely!
Image: Live twitter search for #GameOfThrones
How is Twitter able to return a dynamic result feed for any searched keyword? One way to implement a Twitter Search style live feed is to return the original search results from a database (SQL, ElasticSearch, Mongo etc.) and then have a separate feed component using a realtime channel (think socket.io).
Image: Architecture for a naive realtime feed
We can’t know for sure how Twitter internally implements the realtime feed. The approach described above may be easy to get started with, but it requires scaling two different system components: the database and the feed logic. It can also suffer from data consistency issues because of the constant syncing this arrangement requires. Performance-wise, it would do okay, with O(M×N) time complexity (where M = data insertions into the DB per batch, N = channels with at least one active subscriber).
In this post, we will describe a scalable approach to building realtime feeds using Meetup’s RSVPs as the data source. We will store this data in appbase.io, which acts as a database (thanks to Elasticsearch) with a realtime feeds layer. For the backend, we will use a Node.JS worker to fetch Meetup’s RSVPs and insert them into an appbase.io app. On the frontend, we will use React.JS to create the feed UI and query appbase.io in realtime with different keyword filters.
Enter meetupblast!
Meetupblast shows a live feed of meetup RSVPs searchable by cities and topics. Like Twitter Live Search, it returns a dynamic feed of RSVP data by the selected cities and topics. It’s a great way to find cool meetups other people are attending in your city.
You can try the live demo and access the code on github here or continue reading to see how we built this.
Key Ingredients
The meetupblast recipe can be broken into two key pieces: the backend worker and the user interface.
- Backend Worker
- We use Meetup’s RSVP stream endpoint to get realtime RSVPs.
- We then store this data in appbase.io, which provides us a convenient way to query both historical data and realtime feeds — we call it the streaming DB ;)
- User Interface
- Querying appbase.io for Meetup RSVP feeds by cities and topics.
- The UI / UX logic is entirely written in a React.JS frontend. And we use typeahead for displaying the search results, it’s a convenience library for building search interfaces from Twitter.
Deep Dive
Now that we know what meetupblast does, let’s get to the thick of how the app works.
Backend Worker
Our backend worker is a simple Node.JS code that keeps running forever on a DigitalOcean droplet.
The worker consumes meetup RSVP data from their APIs
http.get(meetup_url, function(res) { res.on('data', function(chunk) { // called on new RSVPs var data = JSON.parse(chunk); meetup_data.push(data); // capture RSVPs in an array }); });
and then index each RSVP into appbase.io.
appbaseRef.index({ type: DATA_TABLE, // collection to store the data into body: meetup_data[counter] }).on('data', function(res) { console.log("successfully indexed one meetup RSVP"); })
Meetup provides us a JSON object for every RSVP. We then write this data into appbase.io as soon as it arrives. appbase.io is built as a streaming layer on ElasticSearch, and provides a live query interface for streaming JSON results.
An RSVP JSON data looks like this:
{ "visibility": "public", "response": "yes", "guests": 0, "member": { "member_id": 185034988, "photo": "", "member_name": "Wilma" }, "rsvp_id": 1566804180, "mtime": 1440266230993, "event": { "event_name": "Wednesday Badminton @ Flying Dragon", "event_id": "224809211", "time": 1440630000000, "event_url": "" }, "group": { "group_topics": [ { "urlkey": "social", "topic_name": "Social" }, { "urlkey": "board-games", "topic_name": "Board Games" }, { "urlkey": "movie-nights", "topic_name": "Movie Nights" } ], "group_city": "Richmond Hill", "group_country": "ca", "group_id": 1676043, "group_name": "Richmond Hill Friends", "group_lon": -79.4, "group_urlname": "Richmond-Hill-Friends", "group_state": "ON", "group_lat": 43.84 } }
User Interface
The User Interface is a small frontend that queries appbase.io for realtime meetups based on specific cities and tags and displays them in a neat list.
Image: Building the live feed interface with appbase.io, typeahead.js and react.js
The final app directory structure will look as follows:
The codebase can be accessed at the meetupblast github repo. Or you can follow the remaining tutorial to see how we build it step by step.
Step 0: Initial Project Setup
We use bower_components for managing library dependencies for bootstrap, appbase-js and typeahead.js. If you haven’t used it before, it’s a neat package manager for the web.
bower init /* follow the interactive setup to create bower.json */ bower install --save bootstrap bower install --save appbase-js bower install --save typeahead.js
We will use browserify and reactify, two node modules, along with gulp, a streaming build system, to transpile all the React .jsx files we will soon be adding to our codebase into JavaScript.
npm init /* follow the interactive setup to create package.json */ npm install --save browserify npm install --save reactify npm install --save gulp (if you don't already have it)
If you haven’t used gulp before or would like to learn more about how the transpiling process works in greater depth, checkout this spot on tutorial by Tyler McGinnis on the topic.
Next, we will configure the gulpfile to read all our React files.
var browserify = require('browserify'); var gulp = require('gulp'); var source = require("vinyl-source-stream"); var reactify = require('reactify'); gulp.task('browserify', function() { var b = browserify({ entries: ['src/app.js'], debug: true }); b.transform(reactify); // use the reactify transform return b.bundle() .pipe(source('main.js')) .pipe(gulp.dest('./dist')); }); gulp.task('watch', function() { gulp.watch('src/*.js', ['browserify']); gulp.watch('src/*.jsx', ['browserify']); }); gulp.task('default', ['watch', 'browserify']);
A gulp file typically has one or more build tasks. We define the transpiling task browserify, which reactifies all the .jsx files starting from src/app.js into a single dist/main.js file. We will run the gulp build in the next step. This is how the project tree should look at this point — files at step 0.
Step 1: Initializing Project Files
Next, we will initialize the project files, assets folder and start writing the index.html.
touch index.html mkdir assets && touch assets/style.css mkdir src && cd src touch app.js request.js helper.js touch container.jsx filterContainer.jsx tag.jsx user.jsx userImg.jsx
We recommend taking the stylesheet file style.css and paste it as is into assets/style.css file.
Next, we will use our project setup files and call them from index.html.
<!DOCTYPE html> <html> <head> <title>Meetup</title> <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css"> <link rel="stylesheet" type="text/css" href="assets/style.css" /> <script src="bower_components/jquery/dist/jquery.min.js"></script> <script src="bower_components/appbase-js/browser/appbase.js"></script> <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script> <script type="text/javascript" src="bower_components/typeahead.js/dist/typeahead.bundle.js"></script> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body class="container"> <a href=""><img style="position: absolute; top: 0; right: 0; border: 0;z-index:15" src="" alt="Fork me on GitHub" data-</a> <div id="container"></div> <script type="text/javascript" src="src/helper.js"></script> <script type="text/javascript" src="src/request.js"></script> <script type="text/javascript"> var REQUEST = new meetup_request(null); </script> <script src="dist/main.js"></script> </body> </html>
Now that we have initialized all the project files, let’s transpile our non-existent .jsx files and run the app.
gulp browserify /* If you see any missing node modules like 'vinyl-source-stream', add them with an npm install command */ npm install --save vinyl-source-stream
You should now see a dist/ folder generated in the project root with a file called main.js (as expected from our gulp build process).
Let’s run the app at this point using
python -m SimpleHTTPServer 3000
Image: Step 1, a blank screen page created using our project setup
Your project tree at this point should look like this — project tree at step 1.
Step 2: Writing UI Components with React
Our project is setup with all the necessary files, we have a good handle on the React transpiling process using gulp. Now’s the time to dive into the src/ codebase.
var React = require('react'); var ReactDOM = require('react-dom'); var Container = require('./container.jsx'); ReactDOM.render( <div> <Container></Container> </div> , document.getElementById('container'));
app.js is the entry point. In case you missed it, we used this file path in our gulp browserify build process too.
helper.js is a helper file, we recommend you get it from here and paste it as is.
It requires container.jsx file, which we declare inside the app.js file.
Container.jsx is where we define the main app container. It contains:
- filterContainer.jsx contains the city and topic filter components. It fires the RSVP feed based on current cities and topics selected.
a. tag.jsx defines the actual filter component, and the checkboxing / removing methods.
- user.jsx displays the single user's RSVP feed UI.
a. userImg.jsx displays the default image if user's pic can't be found.
Before delving into the React components, here’s a small primer on them. React components are the defining units of code organization (like classes in Java) in React. A component can inherit another component, have child components and can be published on the interwebs. A component is always defined with a React.createClass({specObject}) invocation, where the specObject should have a mandatory render() method. It can also contain other methods as well. The official docs are a good reference to whet your appetite about a component’s spec and lifecycle.
If you have never created a React component based app before, we recommend checking out this status feed app tutorial.
Enough said, let’s take a look at our codebase.
var React = require('react'); var FilterContainer = require('./filterContainer.jsx'); var User = require('./User.jsx'); var Container = React.createClass({ getInitialState: function() { return { users: [], CITY_LIST: [], TOPIC_LIST: [] }; }, componentDidMount: function() { var $this = this; this.make_responsive(); $('.meetup-record-holder').on('scroll', function() { if ($(this).scrollTop() + $(this).innerHeight() >= this.scrollHeight) { var stream_on = REQUEST.PAGINATION($this.state.CITY_LIST, $this.state.TOPIC_LIST); stream_on.done(function(res) { $this.on_get_data(res, true); }).fail('error', function(err) {}); } }); }, make_responsive: function() { function size_set() { var window_height = $(window).height() - 15; $('.meetup-record-holder').css('height', window_height); }; size_set(); $(window).resize(function() { size_set(); }); }, on_get_data: function(res, append) { var $this = this; //responseStream.stop(); if (res.hasOwnProperty('hits')) { var record_array = res.hits.hits; if (append) { var arr = $this.state.users; var new_array = $.merge(arr, record_array); $this.setState({ users: new_array }); } else { record_array = record_array.reverse(); $this.setState({ users: record_array }); } } else { var arr = $this.state.users; arr.unshift(res); $this.setState({ users: arr }); } }, set_list: function(method, list) { if(method == 'city') { this.setState({ CITY_LIST: list }); } else { this.setState({ TOPIC_LIST: list }); } }, render: function() { var $this = this; return ( <div className="row meetup-container"> <FilterContainer key='1' on_get_data={this.on_get_data} CITY_LIST={this.state.CITY_LIST} TOPIC_LIST={this.state.TOPIC_LIST} set_list={this.set_list} > </FilterContainer> <div className="meetup-record-holder" id="meetup-record-holder"> <div className="container full_row" id="record-container"> {this.state.users.map(function(single_user1, i){ var single_user = single_user1._source; return ( <User key={i} index={i} name={single_user.member.member_name} img={single_user.member.photo} event_name={single_user.event.event_name} group_city={single_user.group.group_city} group_topics={single_user.group.group_topics} event_url={single_user.event.event_url} TOPIC_LIST={$this.state.TOPIC_LIST} ></User> ); })} </div> </div> </div> ); } }); module.exports = Container;
Container is the main component. It responsible for keeping the state of three variables:
- users — an array of RSVP feeds that are being streamed by appbase.io
- CITY_LIST — an array of current city selection for filtering which feeds need to be streamed
- TOPIC_LIST — an array of current topic selection for filtering feeds by topics.
If we wanted to include other UI elements to filter feeds by (for instance, dates of meetups), we would similarly use another variable and keep its state in the container.
The container is divided into two sub-components:
- FilterContainer component which creates the UI widget and manages the interaction flow to update the cities and topic, and
- User component is responsible for displaying the individual feed element’s UI.
var React = require('react'); var Tag = require('./tag.jsx'); var FilterContainer = React.createClass({ componentWillMount: function() { this.fire_response(); }, fire_response: function() { var $this = this; streamingClient = REQUEST.GET_STREAMING_CLIENT(); var stream_on = REQUEST.FIRE_FILTER(this.props.CITY_LIST, this.props.TOPIC_LIST); stream_on.on('data', function(res) { $this.props.on_get_data(res); $this.stream_start(); }).on('error', function(err) {}); }, stream_start: function() { var $this = this; streamingClient = REQUEST.GET_STREAMING_CLIENT(); var stream_on = REQUEST.STREAM_START(this.props.CITY_LIST, this.props.TOPIC_LIST); stream_on.on('data', function(res) { $this.props.on_get_data(res, true); }).on('error', function(err) {}); }, set_list: function(method, list) { this.props.set_list(method, list); this.fire_response(); }, render: function() { return ( <div className="meetup-filter-container"> <Tag key="0" type="city" set_list={this.set_list} list={this.props.CITY_LIST} fire_response={this.fire_response}></Tag> <Tag key="1" type="topic" set_list={this.set_list} list={this.props.TOPIC_LIST} fire_response={this.fire_response}></Tag> </div> ) } }); module.exports = FilterContainer;
In the FilterContainer, we initialize the RSVP feed stream via the fire_response method. It contains the Tag component to reuse the UI elements for building the city and topic list views.
Image: City and Topic UI elements built with FilterContainer and Tag components
You can copy and paste the Tag component’s code from here.
Let’s take a look at the User component.
var React = require('react'); var UserImg = require('./userImg.jsx'); //User component var User = React.createClass({ HIGHLIGHT_TAGS: function(group_topics) { var highlight_tags = []; var group_topics = group_topics; var highlight = this.props.TOPIC_LIST; if (highlight.length) { for (i = 0; i < group_topics.length; i++) { for (var j = 0; j < highlight.length; j++) { if (highlight[j] == group_topics[i]) group_topics.splice(i, 1); } } for (i = 0; i < highlight.length; i++) { highlight_tags.push(highlight[i]); } } var lower = group_topics.length < 3 ? group_topics.length : 3; for (i = 0; i < lower; i++) { highlight_tags.push(group_topics[i]['topic_name']); } return highlight_tags; }, render: function() { var highlight_tags = this.HIGHLIGHT_TAGS(this.props.group_topics); return ( <a className="full_row single-record single_record_for_clone" href={this.props.event_url} <div className="img-container"> <UserImg key={this.props.event_url} src={this.props.img} /> </div> <div className="text-container full_row"> <div className="text-head text-overflow full_row"> <span className="text-head-info text-overflow"> {this.props.name} is going to {this.props.event_name} </span> <span className="text-head-city">{this.props.group_city}</span> </div> <div className="text-description text-overflow full_row"> <ul className="highlight_tags"> { highlight_tags.map(function(tag,i){ return (<li key={i}>{tag}</li>) }) } </ul> </div> </div> </a> ) } }); module.exports = User;
A lot of the code here goes into making sure we give it the proper styling layout. By now, it should be clear how components work in React. They are not very different from the abstractions offered by object-oriented programming languages like Java.
This component uses one sub-component called UserImg. It’s a very simple component that uses a default puppy image when a user’s image URL in the RSVP JSON doesn’t resolve.
var React = require('react'); var UserImg = React.createClass({ componentDidMount: function() { var self = this; this.img = new Image(); var defaultSrc = ''; this.img.onerror = function() { if (self.isMounted()) { self.setState({ src: defaultSrc }); } }; this.img.src = this.state.src; }, getInitialState: function() { return { src: this.props.src }; }, render: function() { return <img src={this.state.src} />; } }); module.exports = UserImg;
This is the entirety of our React code: We should have app.js, container.jsx, filterContainer.jsx, tag.jsx, user.jsx and userImg.jsx files.
We are missing one last important file before we get to see the working demo, request.js. We use the appbase-js lib here for defining live queries on the RSVP feed data. We recommend getting it as is from here to get to the functional demo.
Let’s run the app now.
gulp browserify python -m SimpleHTTPServer 3000
Image: Step 2, complete UI rendered with React
The project tree at this step should resemble the final project. You can also see the live demo at appbaseio-apps.github.io/meetupblast-react.
How Live Queries Work
Earlier, we skipped an important part of explaining how the live queries work. Let’s take a look at it here.
There are three important queries happening in the UI:
- Streaming RSVP feeds live filtered by the user selected cities and topics.
- Generating the list of cities and topics.
- Paginating to show historical data when user scrolls down.
Let’s see how to do the first one.
// connecting to the app using our unique credentials. We will use these since the backend worker is configured to update this app, but you can change this by creating your own app from appbase.io dashboard. var appbaseRef = new Appbase({ appname: "meetup2", url: "" }) // Now let's get the initial feed data var response = appbaseRef.search({ type: "meetup", size: 25, body: { query: { match_all: {} } } }) response.on("data", function(data) { console.log("25 results matching everything", data.hits.hits) // all initial data is returned in an array called hits.hits. You can browse the data object to see other interesting meta data like the time taken for the query. });
When you run the above snippet in the console, you should see an array of size 25. Now, this is great but how do we continuously stream new RSVP feeds as they are indexed in the backend. Here’s how:
// Let's subscribe to new data updates. searchStream() works exactly like search() except it only returns new data matching the query is indexed or modified. var streamResponse = appbaseRef.searchStream({ type: "meetup", body: { query: { match_all: {} } } }) streamResponse.on("data", function(sdata) { console.log("I am a new data", sdata._source) // all new data is returned as JSON data in streams })
Image: Browser console output when running the above script
We now know how live queries work with appbase.io APIs. The interesting part here is the JSON query object defined in the search() and searchStream() methods. We can change this query object to only show RSVPs that match a white list of cities and/or topics. Anything that can be specified with Elasticsearch’s Query DSL can be queried live with appbase.io. This is exactly what request.js does. It also creates a list of the top 1000 cities and topics at the start of the app by aggregating all the RSVPs indexed in the last month. You can read more about appbase’s APIs here or check out a whole host of interesting live-querying apps at appbase.io’s community page.
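The city and topic white-listing mentioned above ultimately reduces to building an Elasticsearch query body. A simplified, hypothetical version of what request.js assembles might look like this (the real file handles more cases):

```javascript
// Hypothetical sketch of the filter query built for appbase.io.
// An empty selection matches everything; otherwise we AND together terms filters.
function buildFilterQuery(cities, topics) {
  var must = [];
  if (cities.length) must.push({ terms: { "group.group_city": cities } });
  if (topics.length) must.push({ terms: { "group.group_topics.topic_name": topics } });
  if (!must.length) return { query: { match_all: {} } };
  return { query: { bool: { must: must } } };
}
```

The same body object can be passed to both search() (historical results) and searchStream() (live updates), which is what keeps the feed and the initial results consistent.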
Summary
This is Part II of an N-part series where we show how to build a Twitter-like live search feed using appbase.io and React.JS.
- We started out with understanding the feed datastructure,
- We then looked at the backend worker, where we fetched data from meetup’s streaming endpoint and indexed it into appbase.io,
- Finally, we looked at building the live feed user interface using React Components. We covered the gulp build system for transpiling React in step 0, linked everything together in step 1 and wrote React Components for the UI in step 2.
Live demo and final code repository links for further hacking. | https://scotch.io/tutorials/build-a-twitter-like-search-feed-with-react-js-and-appbase-io | CC-MAIN-2017-13 | refinedweb | 3,308 | 51.65 |
Python Programming - Price Of Travelling In a Boat
- 12th May, 2019
- 13:01 PM
TASK 1 – calculate the money taken in a day for one boat. The cost of hiring a boat is $20 for one hour or $12 for half an hour. When a boat is hired the payment is added to the money taken for the day. The running total of hours hired that day is updated and the time when the boat must be returned is stored. At the end of the day the money taken and the total hours hired is output. No boat can be hired before 10:00 or returned after 17:00.
Program -
import random
t= 1
Tmoney = [0,0,0,0,0,0,0,0,0,0]
ReTime = [0,0,0,0,0,0,0,0,0,0]
TrTime =[0,0,0,0,0,0,0,0,0,0]
runningT = 0
RunningH = 0
currentTime = 1111 #time for testing
while t == 1:
currentTime += 1
avail = 0
while currentTime > 1000 and currentTime < 1700:
boatnumber = 9 #boat number 0 to 9(10 boats)
currentTime += 1 #time increment
for i in range(0,10): # caluculating available boats
if ReTime[i] > 0 and ReTime[i] > currentTime: # boats number gets decremented if there is a hired boat
boatnumber -= 1
# hiretype is found using random function currently set to between 1-3
hiretype = random.randint(1,3)
        if boatnumber >= 0 and (hiretype == 1 or hiretype == 2):
TASK 2 – find the next boat available. Extend TASK 1 to work for all 10 rowing boats. Use the data stored for each boat to find out how many boats are available for hire at the current time. If no boats are available show the earliest time that a boat will be available for hire. need program and pseudocode please. python IDLE 2.7.14
Program -
#if hire type is 1 or 2 the value of these variables are changed to efficiently enter the values into corresponding lists(arrays)
if hiretype == 1:
rTime = currentTime + 60
boatmoney = 20
Time = 60
elif hiretype == 2:
boatmoney = 12
Time = 30
rTime = currentTime + 30
Tmoney[boatnumber] = Tmoney[boatnumber] + boatmoney
TrTime[boatnumber] = TrTime[boatnumber]+ Time
ReTime[boatnumber] =rTime
#if no boats are left shows the earliest time a boat will be available
elif boatnumber < 0:
earliest = 9999
for i in range(0,10):
if ReTime[i] < earliest and ReTime[i] > 0:
earliest = ReTime[i]
print str(earliest) + "earliest", currentTime
highest = 0
mostused = 0
non = 0
dayTmoney = 0
daytotaltime = 0
for i in range(0,10):
dayTmoney += Tmoney[i]
daytotaltime += TrTime[i]
if TrTime[i] > highest:
highest = TrTime[i]
mostused = i + 1
if ReTime[i] == 0:
non += 1
print ("Total money earned on day:" + str(dayTmoney) + "\n"
"Total number of time boats hired:" + str(daytotaltime) +"\n"
"Most used Boat:" + " " + str(mostused) + "\n"
"number of boats not used:" + " " + str(non))
t = 0 | https://www.theprogrammingassignmenthelp.com/blog-details/python-programming-calculate-price | CC-MAIN-2019-51 | refinedweb | 470 | 58.96 |
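Condensed to just the pricing rule from TASK 1 ($20 per hour, $12 per half hour), the fee calculation can be sketched on its own. This is an illustrative rewrite with hypothetical names, not part of the assignment code above:

```python
# Standalone sketch of the TASK 1 pricing rule (names are illustrative).
HOUR_RATE = 20      # $20 for a one-hour hire
HALF_RATE = 12      # $12 for a half-hour hire

def hire_cost(minutes):
    """Return the charge for a single hire of 30 or 60 minutes."""
    if minutes == 60:
        return HOUR_RATE
    if minutes == 30:
        return HALF_RATE
    raise ValueError("boats are hired for 30 or 60 minutes only")

def day_totals(hires):
    """hires is a list of hire lengths in minutes; returns (money, hours)."""
    money = sum(hire_cost(m) for m in hires)
    hours = sum(hires) / 60.0
    return money, hours
```

For example, one full hour and two half hours come to $44 and 2.0 hours hired, which is the pair of totals the program prints at the end of the day.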
Backport #5023
irb does not like window resizes
Description
- open irb
- type until line wraps to the next line
- increase the width of the terminal
- notice that the text in the second line will not come up to the first line. -> there is no longer any useful display/navigation in long lines unless the window is resized to exactly the same width. :(
My irb uses libreadline. (Readline module is visible and config in ~/.inputrc is applied.)
Other readline applications like bash do not have that problem.
irb in 1.8 works fine, too.
all versions of 1.9 (including head) have the described problem.
(I tested mainly on my ubuntu box.)
Related issues
Associated revisions
History
Updated by naruse (Yui NARUSE) about 8 years ago
- Status changed from Open to Assigned
- Assignee set to keiju (Keiju Ishitsuka)
- Target version set to 2.0.0
Updated by ralf.kistner@gmail.com (Ralf Kistner) over 7 years ago
This is solved by using Readline.set_screen_size(lines, columns) to the correct size, every time the size has changed.
The only reliable way I've found to get the terminal size (on Ubuntu) is with the 'ruby-terminfo' gem:
require 'terminfo' Readline.set_screen_size(TermInfo.screen_size[0], TermInfo.screen_size[1])
Calling the above two lines before each readline() solved the issue for me.
Instead of polling for the terminal size, we could instead trap the SIGWINCH signal:
require 'terminfo' Signal.trap('SIGWINCH', proc { Readline.set_screen_size(TermInfo.screen_size[0], TermInfo.screen_size[1]) })
Also see my answer on this stackoverflow post:
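As an aside, on Ruby 1.9.3+ the terminal size can also be read with the standard io/console library instead of the ruby-terminfo gem — a sketch, falling back to the conventional LINES/COLUMNS defaults when no tty is attached:

```ruby
# Alternative to the terminfo gem: io/console ships with Ruby 1.9.3+.
require 'io/console'

def screen_size
  if IO.respond_to?(:console) && IO.console
    rows, cols = IO.console.winsize   # [rows, columns] of the attached terminal
    return [rows, cols] if rows.to_i > 0 && cols.to_i > 0
  end
  # No usable tty (e.g. output piped): fall back to conventional defaults.
  [ENV.fetch('LINES', '24').to_i, ENV.fetch('COLUMNS', '80').to_i]
rescue StandardError
  [24, 80]
end
```

Readline.set_screen_size(*screen_size) could then be called from the SIGWINCH trap shown earlier.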
Updated by zzak (Zachary Scott) over 6 years ago
Using 1.9.3-p372 I was able to reproduce this issue, but unable to reproduce in 2.0.0-dev.
Updated by zzak (Zachary Scott) over 6 years ago
I think we just need to backport to 1.9.3 branch, but I'm not sure what to backport.
Updated by keiju (Keiju Ishitsuka) over 6 years ago
zzak (Zachary Scott) wrote:
I think we just need to backport to 1.9.3 branch, but I'm not sure what to backport.
I have identified a commit which solve this problem.
The commit is revision 36130.
I think the following is applied only:
--- ext/readline/readline.c	(revision 38896)
+++ ext/readline/readline.c	(working copy)
@@ -1679,9 +1679,7 @@
 #ifdef HAVE_RL_CATCH_SIGNALS
     rl_catch_signals = 0;
 #endif
-#ifdef HAVE_RL_CATCH_SIGWINCH
-    rl_catch_sigwinch = 0;
-#endif
+
 #ifdef HAVE_RL_CLEAR_SIGNALS
     rl_clear_signals();
 #endif
Updated by zzak (Zachary Scott) over 6 years ago
Thank you Keiju-san!
Will you please open a backport request for usa?
Updated by keiju (Keiju Ishitsuka) over 6 years ago
- Assignee changed from keiju (Keiju Ishitsuka) to usa (Usaku NAKAMURA)
- Target version changed from 2.0.0 to 1.9.3
zzak (Zachary Scott) wrote:
Thank you Keiju-san!
Will you please open a backport request for usa?
OK.
Usa-san, yoroshiku.
Updated by usa (Usaku NAKAMURA) over 6 years ago
- Tracker changed from Bug to Backport
- Project changed from Ruby master to Backport193
- Target version deleted (
1.9.3)
Updated by usa (Usaku NAKAMURA) over 6 years ago
- Status changed from Assigned to Closed
- % Done changed from 0 to 100
This issue was solved with changeset r39377.
Michael, thank you for reporting this issue.
Your contribution to Ruby is greatly appreciated.
May Ruby be with you.
merge revision(s) 36130: [Backport #5023]
* ext/readline/readline.c (Init_readline): don't set 0 to rl_catch_signals and rl_catch_sigwinch. [Bug #5423]
Also available in: Atom PDF
merge revision(s) 36130: [Backport #5023]
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/branches/ruby_1_9_3@39377 b2dd03c8-39d4-4d8f-98ff-823fe69b080e | https://bugs.ruby-lang.org/issues/5023 | CC-MAIN-2019-39 | refinedweb | 594 | 59.3 |
Re: "delete" causes prog to crash!
- From: "Robby" <Robby@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Tue, 9 Aug 2005 12:08:08 -0700
Hi Bob!
OKAY, I now understand... So if we are to use the delete statement, we must
use it to a pointer that points to the heap.
However if I add in the line:
> pTempVal = alarmCode;
I get the following error:
c:\DATASOURCE_EXTRA\Programming\VC++CodeTesting\Robert2_CodeTesting_HeapPointers_MSQ\Robert2_CodeTesting_HeapPointers.cpp(41):
error C2440: '=' : cannot convert from 'int' to 'int *'
and isn't a pointer supposed to hold the address of some data location. In
the line of code ( pTempVal = alarmCode;) we are trying to assign the value
of a variable to a pointer????
Why would we even think of doing this?
Robert
"Bob Milton" wrote:
> Robby,
> The problem is the statement:
>
> pTempVal = &alarmCode;
>
> This resets the address in pTempVal to the address of a stack variable.
> These cannot be deleted! If you just do
>
> pTempVal = alarmCode;
>
> Then your heap variable will contain the same value as the stack variable,
> and can be deleted.
>
> Bob Milton
>
> "Robby" <Robby@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
> news:59EDA6D8-CA8A-4ECE-98A2-4018F935C984@xxxxxxxxxxxxxxxx
> > Hi Victor,
> >
> > I just used one "new" statement in my code at define time, and so why the
> > crash! here is the whole code:
> >
> > #include <iostream>
> > using namespace std;
> >
> > class codes
> > {
> > public:
> > codes();
> > ~codes();
> > void progCode(int n1);
> > int returnCode() const;
> >
> > private:
> > int theCode;
> > };
> >
> > codes::codes():
> > theCode(9999)
> > {}
> >
> > codes::~codes()
> > {}
> >
> > void codes::progCode(int n1)
> > {
> > theCode = n1;
> > }
> >
> > int codes::returnCode() const
> > {
> > return theCode;
> > }
> >
> > int main()
> > {
> > int alarmCode;
> > int * pTempVal = new int;
> >
> > codes myCodes;
> > alarmCode = myCodes.returnCode();
> > pTempVal = &alarmCode;
> > cout << "Current alarm code is:" << alarmCode << "\n" << "******:" <<
> > *pTempVal ;
> > cin >> alarmCode;
> > myCodes.progCode(alarmCode);
> > alarmCode = myCodes.returnCode();
> > cout << "Current alarm code is now:" << alarmCode << "\n"
> > << "******:" << *pTempVal ;
> > cin >> alarmCode;
> > delete pTempVal;
> > return 0;
> > }
> >
> > --
> > Best regards
> > Robert
> >
> >
> > "Victor Bazarov" wrote:
> >
> >> Robby wrote:
> >> > I have declared a pointer on the heap (freeStoreMemory) as:
> >>
> >> No, you have declared a pointer and initialised it with the address of
> >> an int from the heap.
> >>
> >> > int * pTempVal = new int;
> >> >
> >> > I assign the pointer an address,
> >>
> >> Where? How? Why? It already has a value -- the address of an integer
> >> you allocated on the heap.
> >>
> >> > then I dereference it and all is fine!
> >> > However, in the book I am reading, they strongly suggest to delete your
> >> > pointer so you free up the memory.
> >> >
> >> > however when the program gets to the following line, the program
> >> > crashes!
> >> >
> >> > delete pTempVal;
> >> >
> >> > Here is the error:
> >> >
> >> > Debug assertion failed.....
> >> > _BLOCK_TYPE_IS_VALID(phead->nBlockUse)....
> >> > See C++ documentation....
> >> >
> >> > Why does it react this way?
> >>
> >> Because if you assign another value to the pointer you first obtained
> >> from 'new', you (a) lose the previous pointer and (b) probably attempt
> >> to 'delete' what was never allocated using 'new' in the first place.
> >>
> >> > And the program compiles without errors or
> >> > warnings!
> >>
> >> Undefined behaviour (like freeing memory you didn't obtain from 'new')
> >> never shows up at compile-time.
> >>
> >> V
> >>
>
>
>
.
- Follow-Ups:
- Re: "delete" causes prog to crash!
- From: Severian
- References:
- "delete" causes prog to crash!
- From: Robby
- Re: "delete" causes prog to crash!
- From: Victor Bazarov
- Re: "delete" causes prog to crash!
- From: Robby
- Re: "delete" causes prog to crash!
- From: Bob Milton
- Prev by Date: Re: "delete" causes prog to crash!
- Next by Date: Re: "delete" causes prog to crash!
- Previous by thread: Re: "delete" causes prog to crash!
- Next by thread: Re: "delete" causes prog to crash!
- Index(es): | http://www.tech-archive.net/Archive/VC/microsoft.public.vc.language/2005-08/msg00310.html | crawl-002 | refinedweb | 564 | 65.01 |
On 13.07.2017 17:36, Max Reitz wrote:
On 2017-07-13 10:41, Kevin Wolf wrote:Am 12.07.2017 um 18:58 hat Max Reitz geschrieben:On 2017-07-12 16:52, Kevin Wolf wrote:Am 12.07.2017 um 13:46 hat Pavel Butsykin geschrieben:This patch add shrinking of the image file for qcow2. As a result, this allows us to reduce the virtual image size and free up space on the disk without copying the image. Image can be fragmented and shrink is done by punching holes in the image file. Signed-off-by: Pavel Butsykin <address@hidden> Reviewed-by: Max Reitz <address@hidden> --- block/qcow2-cluster.c | 40 ++++++++++++++++++ block/qcow2-refcount.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++++ block/qcow2.c | 43 +++++++++++++++---- block/qcow2.h | 14 +++++++ qapi/block-core.json | 3 +- 5 files changed, 200 insertions(+), 10 deletions(-) diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c index f06c08f64c..518429c64b 100644 --- a/block/qcow2-cluster.c +++ b/block/qcow2-cluster.c @@ -32,6 +32,46 @@ #include "qemu/bswap.h" #include "trace.h"+int qcow2_shrink_l1_table(BlockDriverState *bs, uint64_t exact_size)+{ + BDRVQcow2State *s = bs->opaque; + int new_l1_size, i, ret; + + if (exact_size >= s->l1_size) { + return 0; + } + + new_l1_size = exact_size; + +#ifdef DEBUG_ALLOC2 + fprintf(stderr, "shrink l1_table from %d to %d\n", s->l1_size, new_l1_size); +#endif + + BLKDBG_EVENT(bs->file, BLKDBG_L1_SHRINK_WRITE_TABLE); + ret = bdrv_pwrite_zeroes(bs->file, s->l1_table_offset + + sizeof(uint64_t) * new_l1_size, + (s->l1_size - new_l1_size) * sizeof(uint64_t), 0); + if (ret < 0) { + return ret; + } + + ret = bdrv_flush(bs->file->bs); + if (ret < 0) { + return ret; + }If we have an error here (or after a partial bdrv_pwrite_zeroes()), we have entries zeroed out on disk, but in memory we still have the original L1 table. Should the in-memory L1 table be zeroed first? 
Then we can't accidentally reuse stale entries, but would have to allocate new ones, which would get on-disk state and in-memory state back in sync again.Yes, I thought of the same. But this implies that the allocation is able to modify the L1 table, and I find that unlikely if that bdrv_flush() failed already... So I concluded not to have an opinion on which order is better.Well, let me ask the other way round: Is there any disadvantage in first zeroing the in-memory table and then writing to the image?I was informed that the code would be harder to write. :-)If I have a choice between "always safe" and "not completely safe, but I think it's unlikely to happen", especially in image formats, then I will certainly take the "always safe".+ BLKDBG_EVENT(bs->file, BLKDBG_L1_SHRINK_FREE_L2_CLUSTERS); + for (i = s->l1_size - 1; i > new_l1_size - 1; i--) { + if ((s->l1_table[i] & L1E_OFFSET_MASK) == 0) { + continue; + } + qcow2_free_clusters(bs, s->l1_table[i] & L1E_OFFSET_MASK, + s->cluster_size, QCOW2_DISCARD_ALWAYS); + s->l1_table[i] = 0; + } + return 0; +} + int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size, bool exact_size) {I haven't checked qcow2_shrink_reftable() for similar kinds of problems, I hope Max has.Well, it's exactly the same there.Ok, so I'll object to this code without really having looked at it.I won't object to your objection. O:-)
Kevin, Can you help me to reduce the number of patch-set versions? :) And look at the rest part of the series, thanks!
Max | https://lists.gnu.org/archive/html/qemu-block/2017-07/msg00974.html | CC-MAIN-2019-35 | refinedweb | 534 | 56.96 |
Difference between revisions of "SCons"
Revision as of 15:35, 21 November 2018
SCons (from Software Construction) is a superior alternative to the classic make utility.
SCons is implemented as a Python script, its "configuration files" (SConstruct files) are also Python scripts. Madagascar uses SCons to compile software, to manage data processing flowing, and to assemble reproducible documents.
Contents
- 1 Useful SCons options
- 2 Compilation
- 3 Data processing flows with rsf.proj
- 4 Seismic Unix data processing flows with rsf.suproj
- 5 Document creation with rsf.tex
- 6 Book and report creation with rsf.book
- 7 Errors and Debugging
Useful SCons options
- scons -h (help) displays a help message.
- scons -Q (quiet) suppresses progress messages.
- scons -n (no exec) outputs the commands required for building the specified target (or the default targets if no target is specified) without actually executing them. It can be used to generate a shell script out of SConstruct script, as follows:
scons -nQ [target] > script.sh
Compilation
SCons was designed primarily for compiling software code. An SConstruct file for compilation may look like
env = Environment() env.Append(CPPFLAGS=['-Wall','-g']) env.Program('hello',['hello.c', 'main.c'])
and produce something like
bash$ scons -Q gcc -o hello.o -c -Wall -g hello.c gcc -o main.o -c -Wall -g main.c gcc -o hello hello.o main.o
to compile the hello program from the source files hello.c and main.c.
Madagascar uses SCons to compile its programs from the source. The more frequent usage, however, comes from adopting SCons to manage data processing flows.
Data processing flows with rsf.proj
The rsf.proj module provides SCons rules for Madagascar data processing workflows. An example SConstruct file is shown below and can be found in bei/sg/denmark
from rsf.proj import * Fetch('wz.35.H','wz') Flow('wind','wz.35.H','dd form=native | window n1=400 j1=2 | smooth rect1=3') Plot('wind','pow pow1=2 | grey') Flow('mute','wind','mutter v0=0.31 half=n') Plot('mute','pow pow1=2 | grey') Result('denmark','wind mute','SideBySideAniso') End()
Note that SConstruct by itself does not do any job other than setting rules for building different targets. The targets get built when one executes scons on the command line. Running scons produces
bash$ scons scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... retrieve(["wz.35.H"], []) < wz.35.H /RSF/bin/sfdd form=native | /RSF/bin/sfwindow n1=400 j1=2 | /RSF/bin/sfsmooth rect1=3 > wind.rsf < wind.rsf /RSF/bin/sfpow pow1=2 | /RSF/bin/sfgrey > wind.vpl < wind.rsf /RSF/bin/sfmutter v0=0.31 half=n > mute.rsf < mute.rsf /RSF/bin/sfpow pow1=2 | /RSF/bin/sfgrey > mute.vpl /RSF/bin/vppen yscale=2 vpstyle=n gridnum=2,1 wind.vpl mute.vpl > Fig/denmark.vpl scons: done building targets.
Obviously, one could also run similar commands with a shell script. What makes SCons convenient is the way it behaves when we make changes in the input files or in the script. Let us change, for example, the mute velocity parameter in the second Flow command. You can do that with an editor or on the command line as
sed -i s/v0=0.31/v0=0.32/ SConstruct
Now let us run scons again
bash$ scons -Q < wind.rsf /RSF/bin/sfmutter v0=0.32 half=n > mute.rsf < mute.rsf /RSF/bin/sfpow pow1=2 | /home/fomels/RSF/bin/sfgrey > mute.vpl /RSF/bin/vppen yscale=2 vpstyle=n gridnum=2,1 wind.vpl mute.vpl > Fig/denmark.vpl
We can see that scons executes only the parts of the data processing flow that were affected by the change. By keeping track of dependencies, SCons makes it easier to modify existing workflows without the need to rerun everything after each change.
SConstruct commands
Fetch(<file[s]>,<directory>,[options])
defines a rule for downloading data files from the specified directory on an external data server (by default) or from another directory on disk. The optional parameters that control its behavior are summarized below.
In the example above, Fetch specifies the rule for getting the file wz.35.H: connect to the default data sever and download the file from the data/wz directory.
An example to Fetch with more parameters is:
Fetch('KAHU-3D-PR3177-FM.3D.Final_Migration.sgy', dir='newzealand/Taranaiki_Basin/KAHU-3D', server='', top='open.source.geoscience/open_data', usedatapath=1)
Flow(<target[s]>,<source[s]>,<command>,[options])
defines a rule for creating targets from sources by running the specified command through Unix shell. The optional parameters that control its behavior are summarized below.
In the example above, there are two Flow commands. The first one involves a Unix pipe in the command definition.
On the use of parallel computing options, see Parallel Computing.
Plot(<target>,[<source[s]>],<command>,[options])
is similar to Flow but generates a graphics file (Vplot file) instead of an RSF file. If the source file is not specified, it is assumed that the name of the output file (without the .vpl suffix) is the same as the name of the input file (without the .rsf suffix).
In the example above, there are two Plot commands.
Result(<target>,[<source[s]>],<command>,[options])
is similar to Plot, only the output graphics file is put not in the current directory but in a separate directory (./Fig by default). The output is intended for inclusion in papers and reports.
In the example above, Result defines a rule that combines the results of two Plot rules into one plot by arranging them side by side. The rules for combining different figures together (which apply to both Plot and Result commands) include:
- SideBySideAniso
- OverUnderAniso
- SideBySideIso
- OverUnderIso
- TwoRows
- TwoColumns
- Overlay
- Movie
End()
takes no arguments and signals the end of data processing rules. It provides the following targets, which operate on all previously specified Result figures:
- scons view displays the resuts on the screen.
- scons print sends the results to the printer (specified with PSPRINTER environmental variable).
- scons lock copies the results to a location inside the DATAPATH tree.
- scons test compares the previously "locked" results with the current results and aborts with an error in case of mismatch.
The default target is set to be the collection of all Result figures.
Command-line options
Running the example above with TIMER=y produces
bash$ scons -Q TIMER=y /usr/bin/time < wind.rsf /RSF/bin/sfmutter v0=0.32 half=n > mute.rsf 0.09user 0.03system 0:00.13elapsed 94%CPU (0avgtext+0avgdata 383744maxresident)k 0inputs+0outputs (1513major+0minor)pagefaults 0swaps /usr/bin/time < mute.rsf /RSF/bin/sfpow pow1=2 | /RSF/bin/sfgrey > mute.vpl 0.10user 0.00system 0:00.18elapsed 59%CPU (0avgtext+0avgdata 384256maxresident)k 0inputs+0outputs (1515major+0minor)pagefaults 0swaps /usr/bin/time /RSF/bin/vppen yscale=2 vpstyle=n gridnum=2,1 wind.vpl mute.vpl > Fig/denmark.vpl 0.06user 0.03system 0:00.06elapsed 135%CPU (0avgtext+0avgdata 444416maxresident)k 0inputs+0outputs (1739major+0minor)pagefaults 0swaps
In other words, every shell command is preceded by the Unix time utility to measure the CPU time of the process.
Running the example above with CHECKPAR=y, we will not see any difference. Suppose, however, that we made a typo in specifying one of the parameters, for example, by using v1= instead of v0= in the arguments to sfmutter.
bash$ sed -i s/v0=0.31/v1=0.31/ SConstruct bash$ scons -Q CHECKPAR=y No parameter "v1" in sfmutter Failed on "mutter v1=0.31 half=n"
The parameter error gets detected by scons before anything is executed.
Seismic Unix data processing flows with rsf.suproj
If you process data with Seismic Unix instead of Madagascar, you can still take advantage of SCons-based processing flows by using the rsf.suproj module. See book/rsf/su for examples.
Note that, with rsf.suproj, the scons command generates hard copies (eps files) of Result figures, while the scons view command displays figures on the screen using the corresponding X-window plotting commands.
Document creation with rsf.tex
SConstruct commands
- Paper
- End([options]) signals the end of book processing rules. It provides the following targets:
- scons pdf (equivalent to scons paper.pdf)
- scons wiki (equivalent to scons paper.wiki)
- scons read (equivalent to scons paper.read)
- scons print (equivalent to scons paper.print)
- scons html (equivalent to scons paper.html)
- scons install (equivalent to scons paper.install)
- scons figs (equivalent to scons paper.figs)
The default target is set to pdf.
Book and report creation with rsf.book
SConstruct commands
- Book
- Papers
- End([options]) signals the end of book processing rules. It provides the following targets:
- scons pdf
- scons read
- scons print
- scons html
- scons www
The default targret is set to be scons pdf.
Errors and Debugging
DBPageNotFoundError
The scons database contains the information required so that scons executes only the parts of the data processing flow that were affected by changes in data, programs, and flows. Sometimes the database become corrupted. For example, when I ran out of disk space my scons stopped leaving a corrupted database. After deleting some files to create enough disk space running scons quickly fails with the message:
scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... scons: *** [31_81_IM.JPG] DBPageNotFoundError : (-30986, 'DB_PAGE_NOTFOUND: Requested page not found') scons: building terminated because of errors.
I was working in the directory $RSFSRC/book/data/alaska/line31-81. Scons keeps the database in the file $DATAPATH/data/alaska/line31-81/.sconsign.dbhash. It's a little tricky to find this file since it is hidden file in a directory below $DATAPATH. Removing $DATAPATH/data/alaska/line31-81/.sconsign.dbhash fixed this problem. | http://www.ahay.org/wiki2015/index.php?title=SCons&diff=prev&oldid=3744 | CC-MAIN-2019-22 | refinedweb | 1,621 | 51.44 |
CPANDB::Author - CPANDB class for the author table
TO BE COMPLETED
# Returns 'CPANDB' my $namespace = CPANDB: 'author' print CPANDB::Author->table;
While you should not need the name of table for any simple operations, from time to time you may need it programatically. If you do need it, you can use the
table method to get the table name.
my $object = CPANDB::Author->load( $author );
CPANDB::Author object, or throws an exception if the object does not exist.
# Get all objects in list context my @list = CPANDB::Author->select; # Get a subset of objects in scalar context my $array_ref = CPANDB::Author->select( 'where author > ? order by author', 1000, );
The
select method executes a typical SQL
SELECT a list of CPANDB::Author objects when called in list context, or a reference to an
ARRAY of CPANDB::Author objects when called in scalar context.
Throws an exception on error, typically directly from the DBI layer.
CPANDB::Author->iterate( sub { print $_->author . " ( CPANDB::Author->select ) { print $_->author . "\n"; }
You can filter the list via SQL in the same way you can with
CPANDB::Author->iterate( 'order by ?', 'author', sub { print $_->author . "\n"; } );
You can also use it in raw form from the root namespace for better control. Using this form also allows for the use of arbitrarily complex queries, including joins. Instead of being objects, rows are provided as
ARRAY references when used in this form.
CPANDB->iterate( 'select name from author order by author', sub { print $_->[0] . "\n"; } );
# How many objects are in the table my $rows = CPANDB::Author->count; # How many objects my $small = CPANDB::Author->count( 'where author > ?', 1000, );
The
count method executes a
SELECT COUNT(*)->author ) { print "Object has been inserted\n"; } else { print "Object has not been inserted\n"; }
Returns true, or throws an exception on error.
REMAINING ACCESSORS TO BE COMPLETED
The author table was originally created with the following SQL command.
CREATE TABLE author ( author TEXT NOT NULL PRIMARY KEY, name TEXT NOT NULL )
CPANDB::Author is part of the CPANDB API.
See the documentation for CPANDB. | http://search.cpan.org/~adamk/CPANDB-0.18/lib/CPANDB/Author.pod | CC-MAIN-2017-04 | refinedweb | 345 | 54.73 |
I m made Desktop App in netbeans platform in java.now wwhen i run my app it will open by default size of netbeans platform configuration.but i want full screen mode when i run or startup my app. so where and how to do that in my app?
If you want to open the JFrame maximized by default in swing you can use
JFrame. setExtendedState(), illusrated below:
public class MyFrame extends JFrame{ public MyFrame() { // Other codes // Set the JFrame to maximize by default on opening setExtendedState(JFrame.MAXIMIZED_BOTH); // Rest of the program } }
Also remember that you should not have
JFrame.pack() or
JFrame.setMaximimumSize() or
JFrame.setSize() in the same menthod (here constructor). | https://codedump.io/share/BxrDafi9UT5m/1/how-to-set-full-screen-mode-of-my-app-which-is-made-in-netbeans-platform | CC-MAIN-2018-22 | refinedweb | 114 | 69.07 |
Name | Synopsis | Description | Attributes | See Also | Notes
#include <euc.h> #include <getwidth.h> void getwidth(eucwidth_t *ptr);
The getwidth() function reads the character class table for the current locale to get information on the supplementary codesets. getwidth() sets this information into the struct eucwidth_t. This struct is defined in <euc.h> and has the following members:
short int _eucw1,_eucw2,_eucw3; short int _scrw1,_scrw2,_scrw3; short int _pcw; char _multibyte;
Codeset width values for supplementary codesets 1, 2, and 3 are set in _eucw1, _eucw2, and _eucw3, respectively. Screen width values for supplementary codesets 1, 2, and 3 are set in _scrw1, _scrw2, and _scrw3, respectively.
The width of Extended Unix Code (EUC) Process Code is set in _pcw. The _multibyte entry is set to 1 if multibyte characters are used, and set to 0 if only single-byte characters are used.
See attributes(5) for descriptions of the following attributes:
euclen(3C), setlocale(3C), attributes(5)
The getwidth() function can be used safely in a multithreaded application, as long as setlocale(3C) is not being called to change the locale.
The getwidth() function will only work with EUC locales.
Name | Synopsis | Description | Attributes | See Also | Notes | http://docs.oracle.com/cd/E19082-01/819-2243/6n4i09940/index.html | CC-MAIN-2015-22 | refinedweb | 198 | 64.41 |
I.
The reason is that the eye needs resting points. Whitespace makes it
easier to divide a line of text into chunks and read it. As well as in
natural languages as in code. $foo{bar}[17]{baz} is one big blob, and
it's hard to divide it into its 4 chunks, specially when the subscripts
are a bit more complex. We don't have the tendency to chain words together
in English (unlike in for instance German). Why should we with code?
Makeshifts last the longest.. ;-).
-derby
rdfield
"Perl doesn't compile check the types of or even the number of arguments"
Am I missing something, or isn't that what prototypes are for?
.02
cLive ;-)
-- Mike
--
just,my${.02}
# prototypes only check arg format
# i.e. (scalar/array/hash/code/glob and number of args)
package Sample;
sub stringify () {
return ($_[0]->{data}) x $_[1]
}
package main;
sub print_string($) {
print $_[0]->stringify(1)
}
my $obj = bless ( {data=>"foo\n"}, 'Sample');
print_string($obj); # fine
print_string(5); # uh oh... basic numbers don't have a stringify me
+thod,
# yet the arg passed the prototype since 5 is a sc
+alar.
[download]
# methods don't check prototypes... at all.
sub print_string_bad($) {
print $_[0]->stringify() # Will pass the prototype, yet break since $_
+[1] will be
# undef in stringify
}
print_string_bad($obj);
[download]
Hence the reason for the general distaste for prototypes among the perl community. For most cases, they're pointless. They're only there so that you can call your own subs like perl-builtins.
sub foo ($$) {....}
my @bar = (1, 2);
foo @bar;
[download]
The same goes for returning values from subs, returning a list just causes hassle of typing out assignments for each value in the list..)
--
I'm not belgian but I play one on TV.
Excellent. +. | http://www.perlmonks.org/index.pl?node_id=215675 | CC-MAIN-2017-39 | refinedweb | 302 | 73.27 |
Instances of this class represent a distinct font, with a built-in renderer.
More...
#include <font.h>
List of all members.
Instances of this class represent a distinct font, with a built-in renderer.
Definition at line 52 of file font.h.
[inline]
Definition at line 54 of file font.h.
[inline, virtual]
Definition at line 55 of file font.h.
[pure virtual]
Draw a character at a specific point on a surface.
Note that the point describes the top left edge point where to draw the character. This can be different from top left edge point of the character's bounding box! For example, TTF fonts sometimes move characters like 't' one (or more) pixels to the left to create better visual results. To query the actual bounding box of a character use getBoundingBox.
The Font implemenation should take care of not drawing outside of the specified surface.
Implemented in Graphics::BdfFont, and Graphics::WinFont.
Definition at line 290 of file font.cpp.
kTextAlignLeft
0
true
Definition at line 298 of file font.cpp.
Definition at line 303 of file font.cpp.
Definition at line 307 of file font.cpp.
Definition at line 314 of file font.cpp.
[virtual]
Calculate the bounding box of a character.
It is assumed that the character shall be drawn at position (0, 0).
The idea here is that the character might be drawn outside the rect (0, 0) to (getCharWidth(chr), getFontHeight()) for some fonts. This is common among TTF fonts.
The default implementation simply returns the rect with a width of getCharWidth(chr) and height of getFontHeight().
Definition at line 35 of file font.cpp.
false
Return the bounding box of a string drawn with drawString.
Definition at line 248 of file font.cpp.
Definition at line 268 of file font.cpp.
Query the width of a specific character.
Query the height of the font.
Query the kerning offset between two characters.
Definition at line 31 of file font.cpp.
Query the maximum width of the font.
Definition at line 286 of file font.cpp.
Compute and return the width the string str has when rendered using this font.
This describes the logical width of the string when drawn at (0, 0). This can be different from the actual bounding box of the string. Use getBoundingBox when you need the bounding box of a drawn string.
Definition at line 282 of file font.cpp.
[private]
Definition at line 329 of file font.cpp.
Definition at line 325 of file font.cpp.
Take a text (which may contain newline characters) and word wrap it so that no text line is wider than maxWidth pixels.
If necessary, additional line breaks are generated, preferably between words (i.e. where whitespaces are). The resulting lines are appended to the lines string list. It returns the maximal width of any of the new lines (i.e. a value which is less or equal to maxWidth).
Definition at line 321 of file font.cpp. | https://doxygen.residualvm.org/d7/d65/classGraphics_1_1Font.html | CC-MAIN-2019-47 | refinedweb | 496 | 78.25 |
Introduction to Amazon Kinesis
In this introductory course, you will learn to recognize and explain the core components of the Amazon Kinesis Service and where those services can be applied.
- Introduction to Amazon Kinesis Streams
- Introduction to Amazon Kinesis Firehose
- Introduction to Amazon Kinesis Analytics
Intended audience:
- People working with Big Data
- Business intelligence
- DevOps
- Development
Learning Objectives:
- Recognize and explain the core components of the Amazon Kinesis service (streams, firehose, analytics)
- Recognize and explain the common use cases telemetry data, and more into your databases, your. In. AWS Kinesis is primarily designed to deliver processing orientated around real-time streaming. One of the interesting things when we look at the storage patterns is that Amazon Kinesis does not store persistent data itself, unlike many of the other Amazon big data services. AWS Amazon Kinesis needs to be deployed as part of a larger solution where you define a target big data solution that will store the results of the streaming process. Note that each Amazon Kinesis Firehose delivery stream stores data records for up to 24 hours in case the delivery destination is unavailable, and the Kinesis Stream stores records from 24 hours by default but this can be extended to retain the data for up to seven days. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds or thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events. Amazon Kinesis provides three different solution capabilities. Amazon Kinesis Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Amazon Kinesis Firehose enables you to load streaming data into the Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and the Amazon Elasticsearch services. Amazon Kinesis Analytics. Unlike some of the other Amazon big data services which have a container that the service sits within, for an example a DB instance within Amazon RDS, Amazon Kinesis doesn't. The container is effectively the combination of the account and the region you are provisioning the Kinesis streams within. An Amazon Kinesis stream is an ordered sequence of data records. A record is the unit of data in an Amazon Kinesis stream. Each record in the stream is composed of a sequence number, a partition key, and a data blob. 
A data blob is the data of interest your data producer adds to a stream. The data records in the stream are distributed into shards. A shard is the base throughput unit of an Amazon Kinesis stream. One shard provides a capacity of one megabyte per second of data input and two megabytes per second of data output, and can support up to 1,000 put-records per second. You specify the number of shards needed when you create a stream. The data capacity of your stream is a function of the number of shards that you specify for that stream. The total capacity of the stream is the sum of the capacity of its shards. If your data rate increases, you can increase or decrease the number of shards allocated to your stream. The producers continuously push data to Kinesis Streams and the consumers process the data in real-time. For example a web service sending log data to a stream is a producer.
Consumers receive records from Amazon Kinesis Streams and process them. These consumers are known as Amazon Kinesis Streams applications. Consumers can store the result using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3. An Amazon Kinesis application is a data consumer that reads and processes data from an Amazon Kinesis Stream and typically runs on a fleet of EC2 instances. You need to build your applications using either the Amazon Kinesis API or the Amazon Kinesis Client Library or KCL. Before we go into each option in detail, let's have a quick look at how AWS makes things easier for you. One of the great things about AWS is they always try and make things easy for you. So when you go to create a new Amazon Kinesis Stream definition in the console, there are a couple of simple parameters we need to complete to create the stream. We just need to enter in a stream name and the number of shards and then we are ready to go. An Amazon Kinesis stream is an ordered sequence of data records. Each record in the stream has a sequence number that is assigned by Kinesis Streams. A record is the unit of data stored in the Amazon Kinesis stream. A record is composed of a sequence number, partition key, and data blob. A data blob is the data of interest your data producer adds to a stream. The maximum size of a data blob, the data payload before Base64 encoding, is one megabyte. A partition key is used to segregate and route records to different shards of a stream. The Kinesis Streams service segregates the data records belonging to a stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to. Partition keys are Unicode strings with a maximum length of 256 bytes. An MD5 hash function is used to map partition keys to a 128-bit integer value and to map associated data records to shards.
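The MD5-based mapping just described can be sketched in a few lines of Python. This is illustrative only: real shards own explicit hash-key ranges that you can inspect on the stream description, and equal-sized ranges are assumed here.

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Conceptual sketch of how Kinesis routes a record: MD5 the partition
    key into a 128-bit integer, then see which shard's hash-key range the
    value falls into (equal-sized ranges are assumed for illustration)."""
    hash_value = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // shard_count
    return min(hash_value // range_size, shard_count - 1)

# All records carrying the same partition key always land on the same shard,
# which is what preserves per-key ordering.
assert shard_for_key("user-42", 4) == shard_for_key("user-42", 4)
```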
A partition key is specified by your data producer while adding data to an Amazon Kinesis stream. For example, assuming you have a stream with two shards, shard one and shard two, you can configure your data producer to use two partition keys, key A and key B, so that all records with key A are added to shard one, and all records with key B are added to shard two. A sequence number is a unique identifier for each record. Sequence numbers are assigned by Amazon Kinesis when a data producer calls the PutRecord or PutRecords operation to add data to an Amazon Kinesis stream. Sequence numbers for the same partition key generally increase over time. The longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become. A shard is a group of data records in a stream. When you create a stream, you specify the number of shards for the stream. Each shard can support up to five transactions per second for reads, up to a maximum total data read rate of two megabytes per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of one megabyte per second including partition keys. The total capacity of a stream is the sum of the capacities of its shards. You can increase or decrease the number of shards in a stream as needed, however note that you are charged on a per-shard basis. Before you create a stream, you need to determine an initial size for the stream. After you create the stream, you can dynamically scale your shard capacity up or down using the AWS Management Console or the UpdateShardCount API. You can make updates while there is an Amazon Kinesis Streams application consuming data from the stream. You can calculate the initial number of shards you need to provision using the formula at the bottom of the screen. Kinesis Streams supports changes to the data record retention period for your stream. A Kinesis stream stores records for 24 hours by default, and this can be extended up to 168 hours.
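The initial shard-count formula mentioned above can be sketched as follows. It assumes the simple published rule of thumb — one shard ingests 1,000 KB per second and serves 2,000 KB per second — and the exact constants AWS uses may differ slightly.

```python
import math

def initial_shard_count(avg_record_kb, records_per_second, consumer_count):
    # Incoming write bandwidth is what the producers push in.
    incoming_write_kb = avg_record_kb * records_per_second
    # Outgoing read bandwidth multiplies by the number of consuming applications.
    outgoing_read_kb = incoming_write_kb * consumer_count
    # One shard: 1 MB/s (1,000 KB/s) in, 2 MB/s (2,000 KB/s) out.
    return max(math.ceil(incoming_write_kb / 1000),
               math.ceil(outgoing_read_kb / 2000))

# 3 KB records at 1,000 records/s, read by 2 applications:
print(initial_shard_count(3, 1000, 2))  # 3 shards
```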
Note, though, that additional charges apply for streams with a retention period set above 24 hours. Kinesis Streams supports re-sharding, which enables you to adjust the number of shards in your stream in order to adapt to changes in the rate of data flow through the stream. There are two types of re-sharding operations: a shard split and a shard merge. As the names suggest, in a shard split you divide a single shard into two shards; in a shard merge you combine two shards into a single shard. You cannot split into more than two shards in a single operation, and you cannot merge more than two shards in a single operation. The shard or pair of shards that the re-sharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the re-sharding operation are referred to as child shards. After you call a re-sharding operation, you need to wait for the stream to become active again. Remember Kinesis Streams is a real-time data streaming service, which is to say that your application should assume that data is continuously flowing through the shards in your stream.
When you re-shard, data records that were flowing to the parent shards are rerouted to flow to the child shards, based on the hash key values that the data record partition keys map to. However, any data records that were in the parent shards before the re-shard remain in those shards. In other words, the parent shards do not disappear when the re-shard occurs. They persist along with the data they contained prior to the re-shard. A producer puts data records into Kinesis streams. For example, a web server sending log data to a Kinesis stream is a producer. A consumer processes the data records from a stream. The number of partition keys should typically be much greater than the number of shards. This is because the partition key is used to determine how to map a data record to a particular shard. If you have enough partition keys, the data can be evenly distributed across the shards in a stream. A consumer gets records from the Kinesis stream. A consumer, known as an Amazon Kinesis Streams application, processes the data records from a stream. You need to create your own consumer applications. Each consumer must have a unique name that is scoped to the AWS account and region used by the application. This name is used as the name for the control table in Amazon DynamoDB and the namespace for Amazon CloudWatch metrics. When your application starts up, it creates an Amazon DynamoDB table to store the application state, connects to the specified stream, and then starts consuming data from the stream. You can view the Kinesis stream metrics using the CloudWatch console. You can deploy the consumer to an EC2 instance. You can use the Kinesis Client Library or KCL to simplify parallel processing of the stream by a fleet of workers running on a fleet of EC2 instances. The KCL simplifies writing code to read from the shards in the stream and it ensures that there is a worker allocated to every shard in the stream.
The KCL also provides help with fault tolerance by providing check-pointing capabilities. Each consumer reads from a particular shard using a shard iterator. A shard iterator represents the position in the stream from which the consumer will read. When they start reading from a stream, consumers get a shard iterator, which can be used to change where the consumers read from the stream. When the consumer performs a read operation, it receives a batch of data records based on the position specified by the shard iterator. There are a number of limits within the Amazon Kinesis Streams service you need to be aware of. Amazon Kinesis imposes limits on the resources that you can allocate and the rate at which you can allocate resources. The displayed limits apply to a single AWS account. If you require additional capacity, you can use the standard Amazon process to increase the limits of your account where the limit is flagged as adjustable. Note the maximum number of shards differs between US East, US West, and EU compared to all the other regions. While still under the Kinesis moniker, the Amazon Kinesis Firehose architecture is different to that of Amazon Kinesis Streams.
Amazon Kinesis Firehose is still based on a platform-as-a-service style architecture, where you determine the throughput capacity you require and the architectural components are automatically provisioned and configured for you. You have no need or ability to change the way these architectural components are deployed. Amazon Kinesis Firehose is a fully-managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service or S3, Amazon Redshift, or the Amazon Elasticsearch service. With Kinesis Firehose you do not need to write applications or manage resources. You configure your data producers to send data to Kinesis Firehose and it automatically delivers the data to the destination that you specify. You can also configure Amazon Kinesis Firehose to transform your data before data delivery. Unlike some of the other Amazon big data services which have a container that the service sits within, for example a DB instance within Amazon RDS, Amazon Kinesis Firehose doesn't. The container is effectively the combination of the account and the region you provision the Kinesis delivery streams within. The delivery stream is the underlying entity of Kinesis Firehose. You use Kinesis Firehose by creating a Kinesis Firehose delivery stream and then sending data to it, which means each delivery stream is effectively defined by the target system that receives the restreamed data. Data producers send records to Kinesis Firehose delivery streams. For example, a web service sending log data to a Kinesis Firehose delivery stream is a data producer. Each delivery stream stores data records for up to 24 hours in case the delivery destination is unavailable. The Kinesis Firehose destination is the data store where the data will be delivered. Amazon Kinesis Firehose currently supports Amazon S3, Amazon Redshift, and the Amazon Elasticsearch service as destinations.
Within the delivery stream is a data flow, which is effectively the transfer and load process. The data flow is predetermined based on what target data source you configure your delivery stream to load data into. So, for example, if you are loading into Amazon Redshift, the data flow defines the process of landing the data into an S3 bucket and then invoking the copy command to load the Redshift table. Kinesis Firehose can also invoke an AWS Lambda function to transform incoming data before delivering it to the selected destination. You can configure a new Lambda function using one of the Lambda blueprints AWS provides or choose an existing Lambda function. Before we go into each of the options in detail, let's have a quick look at how AWS makes things easier for you. One of the great things about AWS is that they always try and make things easy for you. So when you go to create a new Amazon Kinesis Firehose definition in the console, there are a number of predefined destinations that will help you with streaming data into an AWS big data storage solution. As you can see, you can select one of the three data services currently available as a target: S3, Redshift, or Elasticsearch. Selecting one of these destinations will create additional parameter options for you to complete to assist in creating the data flow. If we choose Amazon S3 as a destination, then the relevant parameters are displayed to be completed. If we choose Amazon Redshift as a destination target, you can see we get a different set of parameters, as you would expect. Note that we are required to define both an S3 bucket and a Redshift target database in this scenario, as Amazon Kinesis Firehose is leveraging the Amazon Redshift copy capability to load the data. Going back to the Amazon S3 scenario, we also have the ability to invoke an AWS Lambda function as part of the loading process to transform the data on the way through.
AWS makes Kinesis Firehose simple to use by predefining the data flows that are required to load data into the destinations. For Amazon S3 destinations, streaming data is delivered to your S3 bucket. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket. For Amazon Redshift destinations, streaming data is delivered to your S3 bucket first. Kinesis Firehose then issues an Amazon Redshift copy command to load data from your S3 bucket to your Amazon Redshift cluster. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket. Note that you need to configure your Amazon Redshift cluster to be publicly accessible and unblock the Kinesis Firehose IP addresses. Also note that Kinesis Firehose doesn't delete the data from your S3 bucket after loading it to your Amazon Redshift cluster. For Amazon Elasticsearch destinations, streaming data is delivered to your Amazon Elasticsearch cluster and can optionally be backed up to your S3 bucket concurrently. There are a number of limits within the Amazon Kinesis Firehose service you need to be aware of. Amazon Kinesis imposes limits on the resources that you can allocate and the rate at which you can allocate resources. The displayed limits apply to a single AWS account. If you require additional capacity, you can use the standard Amazon process to increase the limits for your account where the limit is flagged as adjustable. Kinesis Firehose gives you the ability to use existing analytical tools based on S3, Amazon Redshift, and Amazon Elasticsearch, with data latency of 60 seconds or higher. You use Firehose by creating a delivery stream to a specified destination and sending data to it; you do not have to create a stream or provision shards, you do not have to create a custom application as the destination, and you do not have to specify partition keys, unlike Streams. But Firehose is limited to S3, Redshift, and Elasticsearch as the data destinations.
Okay, so as we come to the end of this module on AWS Kinesis, let's have a quick look at a customer example from AWS where Amazon Kinesis has been used. Sushiro uses Amazon Kinesis to stream data from sensors attached to plates in its 380 stores. This is used to monitor the sushi conveyor belt in each store and helps decide in real time which plates chefs should be making next. It's one of my favorite examples. So that brings us to the end of the Amazon Kinesis module.
Type inference in C++
In this tutorial, we are going to learn about type inference in C++. Type inference means automatic deduction of the type of a variable. In C++, we have ways to deduce the type of data automatically, and this feature makes the language more convenient to use.
Type inference in C++ can be useful in many situations. However, it does increase the compile time, as types are deduced while the code is being compiled. The methods of type inference are given below.
auto keyword
We can use the auto keyword in our program to deduce the data type of a variable. We can also use it as the return type of a function. In that case, the return type of the function is deduced at compile time from the function's return statements.
See the below code to grasp it well.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    auto a = 5;
    auto b = 6.0;
    auto c = 7.2f;
    cout << "Type of " << a << " is " << typeid(a).name() << "." << endl;
    cout << "Type of " << b << " is " << typeid(b).name() << "." << endl;
    cout << "Type of " << c << " is " << typeid(c).name() << "." << endl;
    return 0;
}
The above program gives the output as:
Type of 5 is i.
Type of 6 is d.
Type of 7.2 is f.
As you can see in the output, the auto keyword successfully deduces the types of the a, b and c variables. (Note that the names i, d and f come from typeid(...).name(), whose output is implementation-defined; these short mangled names are what GCC prints for int, double and float.)
A more common use of auto can be seen while creating iterators. See here.
#include <bits/stdc++.h>
#include <vector>
using namespace std;

int main()
{
    vector<int> vec = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    for (auto it = vec.begin(); it != vec.end(); it++)
    {
        cout << *it << " ";
    }
    return 0;
}
Output:
1 2 3 4 5 6 7 8 9 10
decltype keyword
This keyword evaluates an expression and finds out the type of the result obtained. This can also be very useful in type inference. See the below code for a better understanding.
#include <bits/stdc++.h>
using namespace std;

float fun()
{
    return 5.9;
}

int main()
{
    int i;
    decltype(9.5) a;
    decltype(i) b;
    decltype(fun()) c;
    cout << typeid(a).name() << endl;
    cout << typeid(b).name() << endl;
    cout << typeid(c).name() << endl;
    return 0;
}
As you can see, the type of a is d (double), the type of b is i (int) and the type of c is f (float). We can also use the decltype keyword together with the auto keyword. That can make our code more flexible and expressive.
Thank you.
Before starting, I must say that this is an advanced topic. I tried to make it simple and correct what, in my opinion, are common errors when explaining the process of compilation.
But, either way, the virtual machine has assembler(-like) instructions. So, being assembler, it is a hard topic. Knowing any other assembler programming language will really help understand this article.
So, if you feel comfortable with assembler, or want to see if my explanation is good, continue reading.
In this article, I will try to explain step-by-step about how to build a virtual machine and a C# like compiler for such a virtual machine.
The name (POLAR) was chosen because it is easy to remember, because I like freezing temperatures and because I managed to create a meaning for it.
Note: At the end of the article, there is a How to Use POLAR in Your Own Projects. So, if you are not interested in how it works but want to add scripting support to your projects, take a look.
I like to recreate things. Many say that it is useless, but it helps me understand how things work and possibly why they are how they are. This also helps me solve problems to which I can't find built-in solutions or put in practice some of my theories. I do that so often that many of my friends say that I should create a new language or a new operating system.
Well... I don't expect to create a very popular language, but why not give it a try?
I already helped some friends that needed to create "compilers" for educational purposes. But something I didn't like in the normal approach is the flow of things.
In general, people start by the text, something like this:
public static long Test(long x, long y)
{
    return (x+x)/y;
}
Then they must transform such text into "tokens", composed of type and possibly a content, something like:
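For the Test method above, a minimal token representation could look like this sketch. The Token class and the token type names here are illustrative, not the article's actual ones:

```csharp
using System;
using System.Collections.Generic;

public enum TokenType
{
    Keyword, Identifier, OpenParens, CloseParens, Operator, Semicolon
}

public sealed class Token
{
    public TokenType Type { get; set; }
    public string Content { get; set; }
}

public static class TokensSample
{
    // The body of Test - return (x+x)/y; - expressed as tokens.
    public static List<Token> Create()
    {
        return new List<Token>
        {
            new Token { Type = TokenType.Keyword, Content = "return" },
            new Token { Type = TokenType.OpenParens },
            new Token { Type = TokenType.Identifier, Content = "x" },
            new Token { Type = TokenType.Operator, Content = "+" },
            new Token { Type = TokenType.Identifier, Content = "x" },
            new Token { Type = TokenType.CloseParens },
            new Token { Type = TokenType.Operator, Content = "/" },
            new Token { Type = TokenType.Identifier, Content = "y" },
            new Token { Type = TokenType.Semicolon }
        };
    }
}
```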
And finally, they interpret or further compile such code.
Those steps are necessary, but my problem is the order in which they are done.
First, many people don't understand why it is important to transform text into tokens. At first look, the tokens are harder to understand than the text representation (at least for most humans).
Second, we don't have a "target" defined. Will we start interpreting the tokens directly? Will we create something else with those tokens that we will call as "compiled code"?
Third, and this is the worst: After converting text into tokens, we can't test anything as those tokens are a mid-step. Until we create some kind of interpreter, the tokens look like a complete waste of time.
If my site is working, you probably saw the very final result. But let's explain it:
My personal approach was to create the virtual machine before everything else. I really did that. But in the article, I decided to go one step further. What I will present to you is a language with the following features:

- Type inference in both directions: var item = new Class(); and Class item = new();
- An actualexception keyword, so the current exception can be consulted at any moment
- yield return
- void methods
- finally blocks
- required parameters, avoiding the manual null checks that throw ArgumentNullException
What's your name?
<<You write your name here>>
Your name is: <<The name you wrote>>
...was made from instructions like this:
.localcount 2
PutValueInLocal "What's your name?", 0
PrepareDotNetCall Script:WriteLine
SetParameter 0, 0 // sets the parameter 0 with the value at local 0.
CallMethodWithoutReturn
PrepareDotNetCall Script:ReadLine
CallMethodWithReturn 1 // put the result at local 1.
PrepareDotNetCall Script:Write
PutValueInLocal "Your name is: ", 0
SetParameter 0, 0
CallMethodWithoutReturn
PrepareDotNetCall Script:WriteLine
SetParameter 0, 1 // set the parameter 0 to the local 1 (the name)
CallMethodWithoutReturn
ReturnNoResult
Which started as code like this:
public static void Main()
{
Script.WriteLine("What's your name?");
string name = Script.ReadLine();
Script.Write("Your name is: ");
Script.WriteLine(name);
}
And, after making that simple example work, adding new functionalities and testing, it is possible to create a full game.
When I first thought about the Virtual Machine, I thought about many instructions using a byte representation, so I could easily interpret it using a switch statement or by having a list of "actions" and using the byte value as the index of the instruction to run.
I certainly prefer the object oriented approach, so a list of Actions (or an object of some interface) will do the job.
So, in a very summarized way, I could have something like this:
Byte Sequence: 0 0 1
The first byte will mean: Put Value. Such instruction should know how to read the next two bytes.
The second byte means: At register 0.
The third byte means the value. 1. So, put the value 1 at register 0.
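That idea, sketched in C# (the opcode values and register count here are made up for illustration, not POLAR's real format):

```csharp
using System;

public static class ByteInterpreterSketch
{
    public static void Main()
    {
        long[] registers = new long[256];
        byte[] code = { 0, 0, 1 }; // PutValue, register 0, value 1

        int index = 0;
        while (index < code.Length)
        {
            switch (code[index])
            {
                case 0:
                    // PutValue knows it must read the next two bytes:
                    // the target register and the value.
                    registers[code[index + 1]] = code[index + 2];
                    index += 3;
                    break;

                default:
                    throw new InvalidOperationException("Unknown instruction.");
            }
        }

        Console.WriteLine(registers[0]); // Prints: 1
    }
}
```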
But even before writing a single line of code, I already found new problems:
Simple - The binary representation is not the runnable representation.
In memory, instructions will be full objects with all parameters they need. Surely when writing "executables" we can use binary representations, but in memory a method is composed of a list of instructions with all they need (information on the method to call, values to set, or more commonly locals to read from and write to).
An instruction, then, is any class that implements a "Run" method.
Finally (at that moment, it was finally... but far from final), I decided to write some code.
I created the PolarVirtualMachine class that received a list of IPolarInstructions and had 256 registers (a long array with 256 positions) and, of course, I created the IPolarInstruction interface, with a Run method that received a reference to the virtual machine, so it could access the registers.
The body of the virtual machine was something like this:
public sealed class PolarVirtualMachine
{
public int InstructionIndex { get; internal set; }
internal long[] _registers = new long[256];
public void Run(List<IPolarInstruction> instructions)
{
int count = instructions.Count;
while(InstructionIndex < count)
{
IPolarInstruction instruction = instructions[InstructionIndex];
InstructionIndex++;
instruction.Run(this);
}
}
}
I needed some instruction to test, so I created the PutValueInLocal and the Add instructions. I needed to look at the _registers to see if that worked, but that's enough for the first test.
Then, next, I created the GotoDirect and the conditional Gotos, already preparing to support ifs, whiles and fors.
Here they are:
public sealed class Add:
IPolarInstruction
{
public byte Register1 { get; set; }
public byte Register2 { get; set; }
public byte ResultRegister { get; set; }
public void Run(PolarVirtualMachine virtualMachine)
{
long[] registers = virtualMachine._registers;
        registers[ResultRegister] = registers[Register1] + registers[Register2];
}
}
public sealed class GotoDirect:
IPolarInstruction
{
public int InstructionIndex { get; set; }
public void Run(PolarVirtualMachine virtualMachine)
{
virtualMachine.InstructionIndex = InstructionIndex;
}
}
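One of the conditional gotos could be shaped like this. The name and the "zero means false" convention are my illustration here, not necessarily the exact instruction from the source:

```csharp
public sealed class GotoIfZero:
    IPolarInstruction
{
    public byte Register { get; set; }
    public int InstructionIndex { get; set; }

    public void Run(PolarVirtualMachine virtualMachine)
    {
        // Jumps only when the register holds zero (the "false" of an if or
        // while test); otherwise execution continues at the next instruction.
        if (virtualMachine._registers[Register] == 0)
            virtualMachine.InstructionIndex = InstructionIndex;
    }
}
```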
Maybe you are already thinking this is useless. I am writing a lot of code only to add some values.
But, hey, that's what happens with any interpreted language. A lot of instructions are needed to interpret a single virtual instruction and then execute it. Our advantage, at this moment, is that we can already see some results. We didn't do a text parser and we can already see the instructions working (well... thanks to the debugger... but...). We are going in the right direction.
At this moment, there's a lot of things to do. We need a lot of instructions to make the virtual machine usable.
We need some way of presenting things. I can dedicate a "virtual address" as its screen, or I can create instructions to interact with .NET already existing classes (well, by seeing the sample, you can imagine what I decided to do).
And at that moment, I wanted so much to see some results that I even forgot to think about the "Virtual Machine architecture".
Did you see that I used 256 registers? Well, .NET IL and Java don't have registers. They are completely based on the stack. For a simple task like add, they stack values and call Add. Then, the values are unstacked and the result is stacked.
The actual Virtual Machine doesn't have any kind of stack. It is simply incapable of calling another method and, even if I add such instruction, how will I save the local variables of the actual method?
So, even if I had a lot of instructions to create yet, I decided it was time to chose an architecture.
Seeing that .NET and Java are stack-based made me think about that... but I really like registers. They were my initial thinking about local variables.
Maybe that was not the best decision (should I say... again?), but I decided to use stack-frames and locals without making the machine stack-based.
The basic decision is:
.NET already uses the concept of locals, but it needs to load locals (push them to the stack) and store them (pop the value from the stack to a local). Also, in .NET, a method argument is not a local (it requires another instruction to load it [stack it]).
But, when I program in C#, C, Pascal and others, a method parameter is a local and I can even change its value. So I think my decision is more similar to how people usually see the variables. If they come from a parameter or if it is declared inside the method body, it is simply a "local" variable.
And, by preparing a method call before calling it, I have the option to fill all default values. So, default values will be supported at the "compiled" level. This contrasts with .NET, which supports default values at compile time, but at run-time they must be set.
So, the new Virtual Machine, Add and GotoDirect classes look like this:
public sealed class PolarVirtualMachine
{
internal _PolarStackFrame _actualFrame;
public void Run(PolarMethod method)
{
if (method == null)
throw new ArgumentNullException("method");
_actualFrame = new _PolarStackFrame();
_actualFrame._method = method;
if (method.LocalCount > 0)
_actualFrame._locals = new object[method.LocalCount];
while(true)
{
var instructions = method.Body;
int instructionIndex = _actualFrame._instructionIndex;
_actualFrame._instructionIndex++;
var instruction = instructions[instructionIndex];
instruction.Run(this);
if (_actualFrame == null)
return;
method = _actualFrame._method;
}
}
}
public sealed class Add:
    IPolarInstruction
{
public int FirstLocalIndex { get; set; }
public int SecondLocalIndex { get; set; }
public int ResultLocalIndex { get; set; }
public void Run(PolarVirtualMachine virtualMachine)
{
var actualFrame = virtualMachine._actualFrame;
var locals = actualFrame._locals;
long value1 = (long)locals[FirstLocalIndex];
long value2 = (long)locals[SecondLocalIndex];
long result = value1 + value2;
locals[ResultLocalIndex] = result;
}
}
public sealed class GotoDirect:
    IPolarInstruction
{
public int InstructionIndex { get; set; }
public void Run(PolarVirtualMachine virtualMachine)
{
virtualMachine._actualFrame._instructionIndex = InstructionIndex;
}
}
And we also have the PrepareCall, SetParameter and CallWithoutResult. I will not present all of them, as looking at the source code will probably be enough. The most important thing is to know that PrepareCall creates the next stack frame, but does not make it the active one. The SetParameter copies a value from the actual stackframe to the next one and the CallMethod (be it with or without result) will make the next frame the active one.
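A possible shape for SetParameter, assuming the virtual machine keeps a reference to the prepared (but not yet active) frame in a field like _nextFrame — that field name is a guess for illustration:

```csharp
public sealed class SetParameter:
    IPolarInstruction
{
    public int SourceLocalIndex { get; set; }
    public int TargetParameterIndex { get; set; }

    public void Run(PolarVirtualMachine virtualMachine)
    {
        var actualFrame = virtualMachine._actualFrame;

        // Copies a value from the caller's locals into the locals of the
        // frame created by PrepareCall. In this design, a parameter is
        // simply one of the first locals of the called method.
        actualFrame._nextFrame._locals[TargetParameterIndex] =
            actualFrame._locals[SourceLocalIndex];
    }
}
```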
To test it, I was creating two lists of instructions as the methods, and one of them was calling the other. The PrepareCall, at that moment, received a List<IPolarInstruction> as the method. Later, I changed it to the actual one. I really believe that if you want to look at it, you should load the appropriate sample in the download section, as showing everything here would occupy too much space in the article.
Now we can call other methods. The default values are loaded by default, we can still set the non-default values and then call the method.
But we don't have the concept of classes in our language. In fact, our methods are simple functions. I am sure I need to add support for classes, methods, fields, virtual methods and so on.
We also don't have exception handling; we don't have stack manipulation (well, except for real assembler, I think no language has it, but I wrote an article about it and wanted to prove the concept).
In fact, there is not a single way to follow. I can finish all of that, without making a compiler, or I can create the compiler, even if the Virtual Machine is not complete yet.
Guess what I am going to do.
I thought about writing the compiler... but... I like to prove things. I published the article Yield Return Could Be Better in which I say that yield return could be simpler to implement and more versatile if stack manipulation was available.
As stack manipulation is similar to Exception Handling, but more complete, I decided to start by exception handling.
Exception handling is, well, an exception in the natural control flow. When an exception is thrown, the machine "returns" until the next catch or finally block. It does not care if it will simple jump to the appropriate instruction or if it will need to discard many stackframes.
In my implementation, the "try" is the PushCatchFinally. To make it work, I needed to add the notion of the "CatchFinallyHandler" in the stackframe. Considering a method can have a try inside a try, the tries are "pushed" or "stacked" (but it uses references to the old one, not a real Stack object).
After finishing a try block, a PopCatchFinally should be called and, if there is a finally block, it gets unstacked and is then made the active instruction. The PushCatchFinally already told which instruction to call in case of an error, a PopCatchFinally or a return.
In fact, there is only a finally block. But inside such finally block, the exception can be checked and, at the end, the exception can be killed or rethrown. So, it is possible to simulate catch and finally with it.
The StackFrame also has the "actual exception" there. At any moment, the actualexception can be checked, even in methods that do not receive the exception as a parameter.
The instructions to deal with the exceptions are:
- Rethrow: rethrows the actual exception; it only makes sense inside a catch/finally handler.
- PutActualExceptionInLocal: copies the actual exception into a local (the local receives null when there is no exception).
- FinishCatchFinally: ends the handler, either killing the exception or letting it continue to an outer handler.
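As an example, PutActualExceptionInLocal could be implemented like this — the _actualException field is my assumption about where the stackframe keeps the exception:

```csharp
public sealed class PutActualExceptionInLocal:
    IPolarInstruction
{
    public int LocalIndex { get; set; }

    public void Run(PolarVirtualMachine virtualMachine)
    {
        var actualFrame = virtualMachine._actualFrame;

        // When there is no exception in flight, this stores null, which is
        // exactly what lets handler code distinguish a normal completion
        // from a throw.
        actualFrame._locals[LocalIndex] = actualFrame._actualException;
    }
}
```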
My idea on the stack manipulation is to "call a method creating a save point". So, at any moment the called method can return to its caller, independent on how many stack levels it needs to go back.
This is similar to an exception. But when an exception happens, we lose all the stack frames from which we've returned from. With the stacksaver, we keep such stack frames saved, so we can continue on such method later.
To make that possible, there is a StackSaver class, which knows where the method started (the first stackframe for a given method), where it is now (as we can return from the middle of an inner method) and then the return should be changed to check if it is inside a stacksaver (to consider it as ending). To change the return, I also changed the stackframe to know if it is the first stackframe of a stacksaver.
Even if it looks hard by the explanation, it is something like this:
Imagine that method A calls method B. Method B then creates a stacksaver for method C.
It then calls method C, which calls method D, which in turn yield returns.
When yield returning, we return to method B, skipping the rest of methods D and C, but such a return saves the stack frame of D and records that there is more to execute. Method B can then run naturally and, when it calls the stacksaver again, method D will continue. If method D returns (not yield returns), it will return to C. If C returns, it will mark the stacksaver as finished and the call will return false, telling the caller that it has finished.
One of the possible problems of yield returning is that we are returning a method before it is really finished, and are ignoring any finally blocks when doing so. This can lead to resource leaks.
Well... in my actual implementation, B could return the stacksaver to A (the original method), so I should not check for "lost" stacksavers at the end of B. But when finishing the first method (the Main method) of the application, I check for such stacksavers.
My solution was to simply run them until they finish. Maybe a better approach would be to throw something like a ThreadAbortException while executing them, so they run their finally blocks without executing everything else; after all, they were lost. Well... I didn't do that, but maybe if I get enough requests, I will change the approach.
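The stack-saver idea can be sketched with a couple of small classes. The names mirror the text (StackSaver, the "first stackframe" flag), but this is my illustration of the data involved, not the real implementation:

```csharp
// Sketch: frames form a linked list; a stack saver marks the first frame it
// created, and a yield return pauses there instead of discarding frames.
class StackFrame
{
    public StackFrame Caller;          // the frame that called this one
    public bool IsStackSaverStart;     // true for the first frame of a stack saver
}

class StackSaver
{
    public StackFrame Current;         // where execution stopped (may be an inner method)
    public bool Finished { get; private set; }

    public StackSaver(StackFrame firstFrame)
    {
        firstFrame.IsStackSaverStart = true;
        Current = firstFrame;
    }

    // Called by the VM on a yield return: remember the paused frame so a later
    // call can continue from it; control goes back to the saver's caller.
    public void PauseAt(StackFrame frame) => Current = frame;

    // Called by the VM when a plain return unwinds past the first frame,
    // so the next call can report false (finished) to the caller.
    public void MarkFinished() => Finished = true;
}
```

The key point is that pausing keeps the frame chain alive, while a normal exception discards it.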
Well... before compiling, I personally added a lot of other instructions and tested them directly. But... I think there are too many instructions to show them individually, so I will continue with the compilation process, as it is the one that makes the virtual machine really usable.
The process of compiling is the process of making text become instructions. Note that saving such instructions to disk is not part of the compilation.
Usually people compile to a binary representation and, especially when generating native code, that's completely correct.
But I am not generating native code, as the "list of instructions" in memory is the most "natural" representation that the Polar Virtual Machine understands.
Transforming such instructions to bytes and vice-versa is a process of serialization. Surely I could've made my compiler compile directly to bytes... but I haven't even decided what the "executables" will look like. So I prefer to compile to instructions and, at another time, decide how the binary will look, how to convert from instructions to bytes and from bytes back to instructions.
The Compilation is divided into two big phases: token recognition and instruction generation. Why? To simplify things. When we see "long someVariable = 10;" we see the entire "long" as one entity, then "someVariable" as another entity, the "=" as another one, the "10" as another one and the ";" (semicolon) as the last one.
Well, that's what the token recognition does for the computer. Whether the word or value takes 1 or 100 characters, it becomes a single Token.
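In code, a token can be as simple as a type tag plus the recognized text and the line it came from. This is just a sketch consistent with the tokenizer code shown later; the real field names may differ:

```csharp
// A possible Token shape: the kind, the raw text and where it was found.
enum _TokenType
{
    LowerName, UpperName, Number,
    OpenBlock, CloseBlock, SemiColon, Add /* ...and many more */
}

sealed class Token
{
    public _TokenType Type;    // what kind of token this is
    public string Text;        // the recognized characters, e.g. "someVariable"
    public int LineNumber;     // used for error messages

    public Token(_TokenType type, string text, int lineNumber)
    {
        Type = type;
        Text = text;
        LineNumber = lineNumber;
    }
}
```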
For effective compilation, if we are expecting a name, we don't need to read many characters: we simply read the next token. If it is not a name, we generate an error; if it is, everything is OK. That's only possible because the tokens were discovered by another "step" that usually comes earlier. I say usually because it is possible to make the two steps work together, as when generating enumerators, but that complicates things again and I wanted to keep it simple.
So, what should the tokenizer identify?
Keywords, arithmetic operators, "block" constructs and so on.
I decided to make my programming language similar to C#, so I already had an idea of the keywords, symbol constructs and so on. For personal reasons (that I will explain later), I decided to divide names that start with lowercase and uppercase into two different types of tokens.
Recognizing tokens is like this:
Read a character.
While it is a space or tab, continue reading.
When it is an enter, increase the line number.
As soon as we see a different thing, we try to analyze it further, with a switch over the known symbols and range checks for the rest. A character like i, for example, may start a keyword like int or an ordinary LowerName token.
Even though the text-to-token process is not hard, I consider such work very, very boring.
Probably you want to see some code. Well, I will not show the entire code here, but to give you an idea, here is a small part of it:
// This is the main parser method
private void _CreateTokens()
{
_lineNumber = 1;
for(_position=0; _position<_length; _position++)
{
char c = _unit[_position];
switch(c)
{
case '\n':
_lineNumber++;
continue;
case '\r':
case ' ':
case '\t':
continue;
case '{': _AddToken(_TokenType.OpenBlock, "{"); continue;
case '}': _AddToken(_TokenType.CloseBlock, "}"); continue;
case '(': _AddToken(_TokenType.OpenParenthesis, "("); continue;
case ')': _AddToken(_TokenType.CloseParenthesis, ")"); continue;
case '[': _AddToken(_TokenType.OpenIndexer, "["); continue;
case ']': _AddToken(_TokenType.CloseIndexer, "]"); continue;
case '.': _AddToken(_TokenType.Dot, "."); continue;
case ',': _AddToken(_TokenType.Comma, ","); continue;
case ';': _AddToken(_TokenType.SemiColon, ";"); continue;
case ':': _AddToken(_TokenType.Colon, ":"); continue;
case '~': _AddToken(_TokenType.BitwiseNot, "~"); continue;
case '+': _Add(); continue;
case '-': _Subtract(); continue;
case '*': _Multiply(); continue;
case '/': _Divide(); continue;
case '^': _Xor(); continue;
case '>': _Greater(); continue;
case '<': _Less(); continue;
case '&': _And(); continue;
case '|': _Or(); continue;
case '%': _Mod(); continue;
case '=': _Equals(); continue;
case '!': _Not(); continue;
case '\"': _String(); continue;
case '\'': _Char(); continue;
case '0': _FormattedNumber(); continue;
}
if (c >= 'a' && c <= 'z')
{
_Name(_TokenType.LowerName);
continue;
}
if (c >= 'A' && c <= 'Z')
{
_Name(_TokenType.UpperName);
continue;
}
if (c >= '1' && c <= '9')
{
_Number();
continue;
}
throw new PolarCompilerException
("Parser exception. Invalid character: " + c + ", at line: " + _lineNumber);
}
}
// As you can see, many tokens (like the comma, semicolon etc)
// are added to the list of tokens directly, while others call methods.
// The _Add method looks like this (and many methods are extremely similar to it):
private void _Add()
{
int nextPosition = _position + 1;
if (nextPosition == _unit.Length)
{
_AddToken(_TokenType.Add, "+");
return;
}
char c = _unit[nextPosition];
switch(c)
{
case '+':
_position++;
_AddToken(_TokenType.Increment, "++");
return;
case '=':
_position++;
_AddToken(_TokenType.Add_Here, "+=");
return;
}
_AddToken(_TokenType.Add, "+");
}
Fortunately, we have already tested our instructions, because at this point we only have the list of tokens; we can't show anything with them without further processing to create the instructions.
So, we should start creating instructions, but we start to have problems again. A normal unit starts with the using clauses, then it has the namespace declarations, the type declarations and finally the members, which include the methods.
If I parse all of them, I need to implement the full parser. If I don't, I will need to do it later anyway. Well, I did a little of each, but I consider it best to start with the compilation of a method. The parsing of a method body is inside the method _ParseSingleOrMultipleInstructions.
So, let's try to compile this:
Console.WriteLine("Test");
I am sure that Console is a class. But what if it was a property? Or a field? Or a namespace name?
In fact, we have two problems here: we don't know what kind of member the first token names (a class? a property? a field? a namespace?), and we don't know whether the access is static or goes through an instance.
Well, to keep things simple, I wanted to avoid any kind of ambiguity. I wanted to read a token and know immediately what it represented. In fact, to do that, I would end up creating a very different language. I would probably have a language like this:
callstaticmethod Console.ReadLine();
setlocal x = 10;
Or anything similar to that, having to start every line with a keyword that tells the compiler what the line should be doing. But I didn't want to change the language completely.
So, even if I can't discover everything from the very first token, I wanted to reduce ambiguity to make things easier and more organized. So I decided to use some good practices as rules:
Instance fields are always accessed through this. (so it is always this.field, never a bare field).
Namespaces are always referenced explicitly, so instead of System.Console the code reads namespace.System:Console.
Static members are accessed through a static. prefix, mirroring this. for instance members.
Casts are always explicit: cast to works like a normal cast and throws an InvalidCastException when it fails, while cast as works like the C# as (a dynamic_cast), returning null on failure.
Having all those ideas in mind (and writing some code, then revisiting it and changing things that are not quite right), _ParseSingleOrMultipleInstructions works like this: if the next token is an open-block ({), it calls _ParseMultipleInstructions, which parses instructions until the matching close-block (}); otherwise a single instruction is parsed. A type name like object or string (an UpperName token) starts a declaration, this.Name starts a member access, and so on. For a void method, a ReturnNoResult instruction is added at the end if needed.
The other method that I consider very problematic, big and hard to understand is the parsing of expressions. What should happen with this:
return 1 + 2 * 3;
After reading the 1 we can't simply generate a return instruction, as the expression continues. The _ParseExpression method must deal with operator precedence, so 2 * 3 is evaluated before the addition, and only then is the result added to 1 and returned.
OK, explaining each one of the possible keywords is too much for an article. But then, not explaining any is too little. So, I will try to explain how the if works.
There are two main points in how the if works. First, we need to know exactly what it does, that is, what it will need to look like in "assembler" instructions. Then, we need to know how to parse the tokens.
So, let's see a simple example:
// value is a boolean variable with true or false.
if (value)
{
// Do something.
}
// End instructions
In assembler instructions, let's consider that value is the local at index 1. It will look like:
GotoIfFalse 1, endInstructionIndex // checks the value at local 1,
// then jump if it is false.
// Do something instruction(s) here
// End instructions here.
The GotoIfFalse must know which instruction index to jump to, and that depends on the number of assembler instructions in "Do something". So, as soon as we identify the if, we add the GotoIfFalse instruction but don't fill its InstructionIndex property; after parsing the if content, we set the index.
Notice that I used the GotoIfFalse instruction. For an if, we in fact do a goto when the condition is false. An instruction like if (!value) could be optimized into a GotoIfTrue, but I don't optimize at this moment: I get the value, invert it and then use GotoIfFalse.
The method to do that simple if could be something like this:
void _ParseIf()
{
// Note that another method already read the "if" token and redirected to this method.
// So, after the if, we expect an open-parenthesis, like this:
var token = NextToken();
if (token.Type != TokenType.OpenParenthesis)
throw new CompilerException("( expected.");
// Here an expression should be parsed. Again, it is possible to optimize.
// If it is if(true) we can ignore
// the if and if(false) we can skip the entire if block.
// But, I am not optimizing and I am considering
// that a variable name will be read (not a boolean expression, to simplify things)
token = NextToken();
if (token.Type != TokenType.LowerName)
throw new CompilerException("A variable name is expected.");
// Here I am getting the local variable definition by its name.
// If the local does not exist, such
// method throws an exception.
var localDefinition = GetLocal((string)token.value);
// Here I will create the GotoIfFalse instruction, that will read the
// local variable (by index).
// But at this moment I don't tell where to go to.
var instruction = new GotoIfFalse { LocalIndex = localDefinition.Index };
_instructions.Add(instruction);
// Then I parse one or more instructions, which can have inner ifs, loops and so on.
_ParseSingleOrMultipleInstructions();
// And after parsing all instructions, I discover where to jump to.
// We will jump to the next instruction after all that exists in this method.
// So, we use the Count at the instructions list.
instruction.InstructionIndex = _instructions.Count;
}
Well, that is similar to, but not, the real code. In the real code, I check for an else after finishing the if. If an else exists, the GotoIfFalse will jump to the else's content, and at the end of the if's content a goto to the end of the else is added.
I don't consider this to be really hard, but it is not easy either. The hardest part is debugging this code if something goes wrong; after all, the instructions will be generated, and we will notice the error only when running.
I really hope the explanation of the if is enough to get the idea. The while is very similar to the if, but at the end of the block it does a goto to the beginning, re-evaluating the condition.
I started by the method body. But when parsing a unit, the compiler should start by the outer declarations and then find the inner declarations.
I can't simply identify a method and start parsing it directly. I must first identify all the methods, so when parsing method A, if it calls method B, I should know that it exists (even if it was not parsed yet).
C, C++ and Pascal solve such problem by forward declarations. If method A comes before B and nothing told the compiler that it should exist, then it is an error.
But forward declarations are ugly. They increase the amount of code the programmer should write, they increase the number of validations done, as the forward declaration can have different parameters than the effective implementation. So, to make that work, as soon as I identify that I am processing a method, I "store" all their tokens without validating them.
Identifying which tokens belong to the block is very simple: if I find a semicolon (;) before finding an open-bracket ({), then the block ended. If I find an open-bracket, I start counting the open-brackets and close-brackets; when the count reaches 0, the block ended. At that moment, it is not necessary to know what the inner tokens are or whether they are valid.
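A sketch of that token-storing step might look like this (the Kind enum and method name are assumed shapes for the illustration, not the real parser):

```csharp
using System;
using System.Collections.Generic;

// Assumed minimal token shape for this sketch.
enum Kind { OpenBlock, CloseBlock, SemiColon, Other }

// Sketch: collect a member's tokens without validating them. A semicolon
// found before any open-bracket ends the member; otherwise open- and
// close-brackets are counted until the count returns to 0.
static class MemberSkipper
{
    public static List<Kind> Collect(IList<Kind> tokens, ref int position)
    {
        var result = new List<Kind>();
        int openBlocks = 0;
        while (position < tokens.Count)
        {
            var token = tokens[position++];
            result.Add(token);

            if (openBlocks == 0 && token == Kind.SemiColon)
                return result;                  // a declaration without a body
            if (token == Kind.OpenBlock)
                openBlocks++;
            else if (token == Kind.CloseBlock && --openBlocks == 0)
                return result;                  // the block ended
        }
        throw new Exception("Unterminated member.");
    }
}
```

Because nothing inside the brackets is interpreted, later members become visible before any method body is actually parsed.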
And, in fact, the method that identifies the beginning and ending of blocks is not limited to methods; it identifies any block, be it a type declaration, an inner namespace and so on.
The real unit parsing follows this same idea: identify the declarations and store their tokens first, then parse each stored block.
To make the code debuggable, the compiler must generate some extra information. Such extra information may be a separate list binding instructions to line and column indexes, or the instructions themselves may be changed to carry that information.
I didn't want to change the IPolarInstruction interface, because that would force me to change all already created instructions and would add unnecessary information to non-debuggable code. I also didn't want to create a dictionary binding instructions to line numbers. My solution was to create the _DebugableInstruction. Such an instruction holds the debug information and a reference to the real instruction to run. This way, when compiling debuggable code we have that extra information and the instruction calls the Debugging event; when compiling non-debuggable code, we simply avoid it completely.
Then I changed all _instructions.Add lines in _MethodParser to call _AddInstruction, which checks whether it is generating debuggable code (creating the _DebugableInstruction) or not.
Well, with that we can already create breakpoints, but that's not completely useful, as we can't check the call stack of the running application or see its local variables.
So, I made the actual stack frame and the local values visible. We still don't have variable names, call-stack navigation or member evaluation, but it's a start.
Maybe I am concluding this too soon. But I really believe the big point is the explanations. To see the code, it is better to download and analyze it. There are still many things missing in the language, there are probably better ways to do the parsing (I am not really used to creating parsers) but I did add some interesting things.
For example, the break allows a count to be passed, like break(2), so you can break out of many fors/whiles and similar at once. When parsing, I stack/unstack a list of actions that add the right instructions for break and continue. So I don't have a different parser for a for (which allows break) and for a normal method body: the same method (ParseSingleOrMultipleInstructions), when it finds a break, calls _ParseBreak. If there are no breaks stacked, it throws a PolarCompilerException.
Method parameters can start with the required keyword. With that, if the parameter comes in null, an ArgumentNullException is thrown with the right parameter name. I already asked Microsoft to add such a feature, but they told me it was impossible, comparing it to nullable types and saying that a new type with fewer values would not know how to auto-initialize itself... well, I am sure they understood my request wrong, because I was asking for a parameter modifier, as I did here.
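As an illustration, a parameter declared with required could compile as if the method body began with a null check. In C#, the equivalent hand-written code would be (sketch; the Register method and Player type are hypothetical):

```csharp
using System;

class Player { }

class Game
{
    // What "void Register(required Player player)" could expand to:
    public void Register(Player player)
    {
        if (player == null)
            throw new ArgumentNullException("player"); // the right parameter name
        // ... method body ...
    }
}
```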
Maybe another time, I explain how those items work in more detail.
POLAR is already made as a library. The Polarlight sample simply uses that library (Pfz.Polar, but Pfz is also required) and makes some classes accessible to the PolarCompiler. If you look at the code, you will see this:
var compiler = new PolarCompiler(unit, isDebugable);
compiler.AddDotNetType(typeof(Exception));
compiler.AddDotNetType(typeof(NullReferenceException));
compiler.AddDotNetType(typeof(Script));
compiler.AddDotNetType(typeof(Player));
compiler.AddDotNetType(typeof(Shot));
compiler.AddDotNetType(typeof(Asteroid));
compiler.AddDotNetType(typeof(Message));
compiler.AddDotNetType(typeof(AnimationList));
compiler.AddDotNetType(typeof(AsteroidList));
compiler.AddDotNetType(typeof(GameComponent));
compiler.Compile();
var method = compiler.GetCompiledMethod("Polarlight", "Program", "Main");
_virtualMachine.Run(method);
In fact, the PolarCompiler and PolarVirtualMachine classes are all you need. You create a PolarCompiler with the source code, telling it whether the result should be debuggable, add the .NET classes it will have access to and then compile.
After it is compiled, you can get a compiled method using GetCompiledMethod and, having a PolarVirtualMachine already created, you can run that method one or more times. Such a method must be static, return void and must not receive any parameters.
And the Pfz and Pfz.Polar libraries already have the normal and Silverlight versions. I didn't do it, but I am sure they could be compiled for Windows Phone 7 by simply creating the appropriate project and adding all the source files.
If you liked the yield return that can be used at any point, the required modifier for parameters or the using keyword applied to methods without creating IDisposable objects, you may want to ask Microsoft to add such features.
I got the images used in POLAR from the internet.
The Polar-Bear image was from:.
And the Aurora Borealis was. | http://www.codeproject.com/Articles/280361/POLAR-Creating-a-Virtual-Machine-in-NET-2 | CC-MAIN-2015-32 | refinedweb | 5,572 | 55.74 |
2021 Retrospective and 2022 Plans
24 Dec 2021
Version 1.0 Release Delayed
This is obvious by now, but The Force Engine version 1.0 will not hit this year. There has been a lot of progress towards completing the iMuse reverse-engineering work for version 0.9, but that will spill into early January.
2022 Plans
The iMuse work is roughly 70% complete, which means that version 0.9 is expected to land in the second week of January. At that point, Dark Forces support in TFE will be feature complete and the reverse-engineering process for Dark Forces will be finished. The following few weeks will be dedicated to bug and inaccuracy fixing, with version 1.0 planned for late January or early February.
February will be spent finishing the GPU Renderer, which will handle looking up and down with proper perspective by default, though the shearing effect will still be available. The initial release of the renderer will only allow for palette emulation, with true-color options and other effects coming later. I will talk more about the GPU renderer in a future post. In short, it will allow for much better performance when running at high resolutions and refresh rates while maintaining the proper look, including the way objects sort with the floor and ceiling, the way they sort with walls, and the way portals enable "non-euclidean" geometry in some cases. At this point, the voxel code will also make its way into the master branch, finally adding proper voxel support.
March will see the release of an early version of the built-in level editor and other asset tools, including some initial basic support for voxel replacements. These tools will be expanded even further when working on Outlaws, including support for Outlaws engine enhancements and for non-vanilla Dark Forces mods using those enhancements. Finally, there will be smaller quality-of-life enhancements and bug fixes. Early March will also be spent working towards the Mac and Linux release, with the help of gilmorem560 (Matthew Gilmore) and others.
Finally, once the Mac and Linux ports are working and the initial tools have been released, it will be time to focus on adding Outlaws support to TFE. Like I mentioned previously, this will include adding support for Outlaws Jedi enhancements to the level editor and support for Outlaws formats in the asset tools.
2021 Retrospective
I thought it would be interesting to look back at the 2021, in terms of TFE, and see how far we have come.
Early 2021 saw just a few commits to master. There were some improvements to the perspective correct 3DO texturing code. This feature, while it looked great, was not moved into the final code for performance reasons. There were also a few experiments with scripting, though mainly for future work. The main focus at this point, however, was the reverse-engineering work. At this point I was working in two locations - a branch of the main TFE code base, and the “code document” where the raw reverse-engineered code lived before being refactored and cleaned up for TFE.
Breaking Everything
TFE had existed in this strange state for some time where things seemed to be working fairly well, but most of the code was placeholder. I had originally written a sector renderer based on what was known about the Dark Forces formats, and then added the reverse-engineered classic renderer in late 2020 - but you had to use a console command to enable it. I had an INF system built based on my existing understanding. But none of it was "real". It was there so things could be tested, and initial tools could be built.
At this point, it was time for it to become “real” and, so, in early February 2021 I ripped out the renderer, the INF system, the previous object system and initial scripting support - breaking everything.
The INF System
With the old code gone, I had to spend some time getting things compiling again. During this period there was no rendering, but I could test things through the debugger. In mid to late February, I stubbed out the Dark Forces sound system and started integrating the INF code I had previously reverse-engineered. This process was not yet complete, in terms of INF, but the larger structures were there and I could finally compile and begin testing the code.
During this process, I found that the INF system also touched a lot of level data, so I began stubbing out those interactions. Late February saw those sector functions getting integrated and becoming "real."
Gap
Between late February and late March, there was a gap of about 1 month. Here I was focused purely on reverse-engineering the code, filling in missing pieces of the INF system, level loading, and more.
Level Loading
The last few days of March were spent porting all of the reverse-engineered Dark Forces level loading code to TFE. This meant moving code to the correct locations, such as moving code out of the INF system involving sector functions. During this phase I split out the “level data” from the renderer, INF system, and collision system. I had previously reverse-engineered the sector renderer, and it had been accessible in TFE using the console, but making it “real” - hooking it up correctly - meant finding code I missed and correcting past mistakes.
Reverse-engineering the INF system was a massive undertaking. And I wasn’t done yet. Early April would show how much work was left digging through the INF code in Dark Forces, merging the new code into TFE and fixing a seemingly never ending stream of INF bugs and issues.
Gameplay
By mid-April I had moved on to the game code. Mid-April saw the integration of the player inventory, which originally had pieces of its structures in the INF system since that system needed them stubbed out (keys and the like). April would see the introduction of the logic system, though there was still a lot more to figure out here. The player finally got its own file and the game code started to take shape. At this point the way that Dark Forces handled timing became much clearer, and now the game ran at the correct speed. Mid-April to mid-May were dedicated to reverse-engineering the gameplay code in preparation for what was to come. But there was little activity in the TFE branch.
Collision Detection
Mid to late May was spent integrating the reverse-engineered collision detection code from Dark Forces. So far I have spoken about "moving code" and integrating as if it is a one-way process. It is not. As reverse-engineered code is integrated, it needs a place to live. Because I don't have access to the original files, function names, variable names, structure or member names during this process, it all lives together in my "code document" as a mass of code. As I integrate it into TFE, I have to figure out how to organize the files and integrate the code with already existing code. Then I see what I missed, what parts I forgot to reverse-engineer, or mistakes I made. Then I would go back to the "code document" and the original game and work through my mistakes or missing code.
Hit Effects and Projectiles
During this period the Hit Effects system was also integrated - this is the system that allows projectiles and other systems to spawn animated effects on hit. It handles explosions, "puffs" as projectiles hit, and splashes when objects hit water. The other side of this was the projectile system, which is responsible for spawning projectiles, updating them, handling collision detection, and then spawning hit effects. Projectiles use an update callback which gets assigned when they are created. This allows thermal detonators to arc, mines to fall, and blaster bolts to be updated as they move. In late May the Sound System was finally fully stubbed out and the API took shape.
Logics and Pickups
In April there was some initial work with object logics and this work continued into June. During this period the animation logic was added, which allows objects like the shield pickups to animate. The projectile logic function fully formed, connecting projectiles to the logic system. In mid-June I finally factored the object/INF messaging system out of the INF code. Originally I thought it was an INF feature, since a lot of INF interactions are done by passing messages to sectors, lines, and triggers. But it turns out the system is also used to pass messages to objects.
Late June saw the integration of the “pickup” update function, which meant it was now possible for the player to pick up objects, such as ammo and shields.
The Task System and Game Loop
During the previous few months of work, it was becoming increasingly clear that game behavior was too dependent on the original “task system” for me to ignore it. It was, at this point - now July - that I began the very painful task of reverse-engineering and integrating the task system. Late July saw the introduction of the main game entry point - “darkForcesMain”. This was using the new “game system” that will allow TFE to run different games. This month saw a massive refactoring to use the proper reverse-engineered game loop. By the end of July the core game loop was taking shape.
The first two thirds of August was spent reverse-engineering more game code, but also saw the file searching abstracted to make file handling simpler and to handle mods. Towards the end, a massive amount of code was integrated into TFE. This included a lot more refactoring, moving Jedi-related code to TFE_Jedi/, converting the various engine-level namespaces to TFE_Jedi, and cleaning up the Jedi Renderer.
Towards the end of August there was a lot of work integrating HUD code, including off-screen buffers that Dark Forces uses while updating the HUD to avoid redrawing all of the HUD elements every frame, moving over more Jedi memory management code to make porting reverse-engineered code easier, starting to properly load data and startup various systems and finishing up the Dark Forces game startup.
The end of August saw the player controller integrated, as well as initial weapon code. The automap was also integrated. I also spent time converting many systems back to tasks, which continued to have issues. At this point, level loading was finished, object-in-sector assignment issues were fixed, and the level loading screen was displayed. Some of the AI code was integrated, though there was still a lot of work to do here.
Core Game Loop
During September the core game loop started really coming together. The code was switched to using the original sin/cos tables, which fixed various rendering issues with the Automap, palette based effects were integrated, and the HUD code was fully integrated and displayed properly. The classic renderer, reverse-engineered many months prior, was finally properly hooked up. It was finally “real.” I could see again - after almost 7 months of most of the game not displaying properly because the data was not in place and the renderer not hooked up.
With so much reverse-engineering work already done and all of the main systems coming online, things started to move quickly from here on out.
In early September the cheats were mostly finished, and the general “mission controls” were working - meaning the automap could be properly toggled, the headlamp worked, and various other features were accessible. Weapon drawing and animation were integrated. Player controls were then integrated, and then player physics and collision. At this point, I was finally able to run around the levels again with proper controls and collision. Finally the rest of the Player controller was finished. On September 12th, I posted the “TFE Core Game Loop Release Preview” video - just days after hooking up the renderer again.
The player weapon system was integrated at this point, but the individual weapon fire functions still needed to be reverse-engineered and brought over. On the 14th the Fist was integrated, which led to fixing various bugs. At this point, I moved everything to using the TFE allocator system, so that levels could be flushed and reloaded. On the 15th the Mortar was integrated. The 16th and 17th saw the other weapons integrated as well, as the general patterns became clearer.
On the 17th, after getting through most of the player weapon handling code, I posted the “TFE: Dark Forces Weapons” video.
AI
At this point I started to focus on the AI, splitting off the basic actor code I already integrated - knowing that the AI code would soon get much bigger. Initial AI work revolved around the mines - which are in essence both an AI actor and a projectile. Once mines worked, it was time to move on to exploding barrels and then generalize to exploders. Next up was scenery, which is also considered AI because it can animate and reacts to damage, causing it to change states. The mouse bot was partially completed, and then I made a slight detour to prepare for version 0.7.
Input and UI
TFE needed a system to remap keys to actions, which had been implemented previously. What hadn’t been implemented yet is the UI. So the UI was created, though not fully hooked up yet.
More AI
Late September saw a lot more AI work, with more reverse-engineering time required as gaps became evident during the integration process. Along with the AI, the Task system was being cleaned up and simplified. Finally the mouse bot was completed, but the AI journey was just getting started.
In Dark Forces, AI agents are split up into a number of actors, which all have little bits of functionality. With the introduction of the “troopers” - more of this functionality needed to be integrated.
On September 30th I posted the “TFE: AI System” video. By this point the “trooper” AI was complete, as well as the mouse bots, land mines, exploding barrels, and scenery (like the red lights in Secbase).
Flyers and Bosses
In early October I started work on “fliers” - which have yet more “actor” structures and code. At this point the AI code was complete enough that I was able to add several more enemies that had very little custom code. Then came the Sewer creature, which doesn’t use completely unique code but shared less code that any other enemy so far.
Next was the Kell Dragon, which was the first “boss” enemy to be integrated. These enemies are different than any so far in that most of the code is custom. Most of the regular enemies share code, with their initial settings determining their behaviors. But the bosses change that.
Turrets, Generators, and Vues
The Welder and Turrets were integrated, which also use mostly custom code. Fortunately they use fewer states and less code than the bosses. I also fixed some latent rendering bugs in this period and removed a lot of no longer used code. Finally VUE animations and Generator logic were integrated.
On October 14th, I posted the “TFE: VUE Animations, Enemy Generators, and Fixes” video.
Level Reloading
So far, you could only load a single level and then restart the program and load another. Mid-October saw that finally fixed with level skip cheats and by fixing level reloading issues. It also saw the ability to add new agents using the in-game UI. In late October player death and respawning was integrated, making the game loop feel more real. Jabba’s ship was now properly handled, the code for it had been previously reverse-engineered but never integrated until now.
At this point I started uploading “Pre-Core Game Loop” releases, with the idea of updating them regularly for testing until the Core Game Loop release was finally finished. When I had previously ripped out all of the old code, including the reverse-engineered classic renderer (until it could be hooked up properly), various problems were fixed and corrections made to the classic renderer. As a result, it was parred back to the original fixed-point renderer - meaning builds were in 320x200 only.
People quickly found numerous bugs, some of which are still waiting to be fixed. Work on the boss AI continued, with new pre-release builds often coinciding with a new enemy being integrated. The Input Remapping was also finished during this period.
By early November, all of the enemies were finally integrated.
Towards Version 0.7
With the enemies all in place and the core game loop complete, it was time to re-implement the floating-point version of the Jedi renderer. I used the code from my original version of the floating-point classic renderer as reference, but I re-implemented it directly from the fixed-point renderer in order to capture all of the fixes and changes in 2021.
On November 14th, I posted the “TFE: Widescreen & High Resolution Rendering” video.
Once the floating-point renderer was complete, it was time to finally prepare for the Core Game Loop release. This involved fixing more menu code, fixing crashes due to resolutions not divisible by 4, fixing various 3DO model rendering bugs, and many other issues. But the biggest new feature was the mod loader - it was finally possible to play mods using TFE.
On the November 18th, version 0.7 was released and the core game loop was complete.
Version 0.8
With the Core Game Loop complete, it was time to tackle the cutscene system. During this period I also fixed many bugs, and cleaned up the renderer code. But most of the time was working through the “Landru” system. The Landru system uses its own “actor” model for handling images and sounds. It also has its own sound and music management code. Even the display code is different then the rest of the game. There were numerous systems, such as the fading system, that needed to be converted from DOS-style while loops to state machines.
And, finally, on December 5th, I posted a video and posted the official 0.8 release.
Today
That brings us to today. I have been spending the last few weeks reverse-engineering the iMuse system, and prior to that had successfully integrated the game music module that interfaces with iMuse.
It has been a long road and a wild ride. We didn’t quite make it to version 1.0, but we came really close. The renderer, AI, INF system, in-game UI, cutscenes, game systems - all of it derived directly from the original executable using reverse-engineering, which is almost complete.
You can now watch the cutscenes, though the music still has issues, create a new agent and play the game from beginning to end. You have all of the relevant in-game UI, the mission briefings, the gameplay. Within mere weeks we will finally reach version 1.0. | https://theforceengine.github.io/blog.html | CC-MAIN-2022-05 | refinedweb | 3,177 | 61.87 |
Support for units and quantities¶
Note
The functionality presented here was recently added. If you run into any issues, please don’t hesitate to open an issue in the issue tracker.
The
astropy.modeling package includes partial support for the use of units and
quantities in model parameters, models, and during fitting. At this time, only
some of the built-in models (such as
Gaussian1D) support units, but this
will be extended in future to all models where this is appropriate.
Setting parameters to quantities¶
Models can take
Quantity objects as parameters:
>>> from astropy import units as u >>> from astropy.modeling.models import Gaussian1D >>> g1 = Gaussian1D(mean=3 * u.m, stddev=2 * u.cm, amplitude=3 * u.Jy)
Accessing the parameter then returns a Parameter object that contains the value and the unit:
>>> g1.mean Parameter('mean', value=3.0, unit=m)
It is then possible to access the individual properties of the parameter:
>>> g1.mean.name 'mean' >>> g1.mean.value 3.0 >>> g1.mean.unit Unit("m")
If a parameter has been initialized as a Quantity, it should always be set to a quantity, but the units don’t have to be compatible with the initial ones:
>>> g1.mean = 3 * u.s >>> g1 <Gaussian1D(amplitude=3. Jy, mean=3. s, stddev=2. cm)>
To change the value of a parameter and not the unit, simply set the value property:
>>> g1.mean.value = 2 >>> g1 <Gaussian1D(amplitude=3. Jy, mean=2. s, stddev=2. cm)>
Setting a parameter which was originally set to a quantity to a scalar doesn’t work because it’s ambiguous whether the user means to change just the value and preserve the unit, or get rid of the unit:
>>> g1.mean = 2 Traceback (most recent call last): ... UnitsError : The 'mean' parameter should be given as a Quantity because it was originally initialized as a Quantity
On the other hand, if a parameter previously defined without units is given a Quantity with a unit, this works because it is unambiguous:
>>> g2 = Gaussian1D(mean=3) >>> g2.mean = 3 * u.m
In other words, once units are attached to a parameter, they can’t be removed due to ambiguous meaning.
Evaluating models with quantities¶
Quantities can be passed to model during evaluation:
>>> g3 = Gaussian1D(mean=3 * u.m, stddev=5 * u.cm) >>> g3(2.9 * u.m) <Quantity 0.1353352832366122> >>> g3(2.9 * u.s) Traceback (most recent call last): ... UnitsError : Units of input 'x', s (time), could not be converted to required input units of m (length)
In this case, since the mean and standard deviation have units, the value passed during evaluation also needs units:
>>> g3(3) Traceback (most recent call last): ... UnitsError : Units of input 'x', (dimensionless), could not be converted to required input units of m (length)
Equivalencies¶
Equivalencies require special care - a Gaussian defined in frequency space is not a Gaussian in wavelength space for example. For this reason, we don’t allow equivalencies to be attached to the parameters themselves. Instead, we take the approach of converting the input data to the parameter space, and any equivalencies should be applied at evaluation time to the data (not the parameters).
Let’s consider a model that is Gaussian in wavelength space:
>>> g4 = Gaussian1D(mean=3 * u.micron, stddev=1 * u.micron, amplitude=3 * u.Jy)
By default, passing a frequency will not work:
>>> g4(1e2 * u.THz) Traceback (most recent call last): ... UnitsError : Units of input 'x', THz (frequency), could not be converted to required input units of micron (length)
But you can pass a dictionary of equivalencies to the equivalencies argument (this needs to be a dictionary since some models can contain multiple inputs):
>>> g4(110 * u.THz, equivalencies={'x': u.spectral()}) <Quantity 2.888986819525229 Jy>
The key of the dictionary should be the name of the inputs according to:
>>> g4.inputs ('x',)
It is also possible to set default equivalencies for the input parameters using the input_units_equivalencies property:
>>> g4.input_units_equivalencies = {'x': u.spectral()} >>> g4(110 * u.THz) <Quantity 2.888986819525229 Jy>
Fitting models with units to data¶
Fitting models with units to data with units should be seamless provided that the model supports fitting with units. To demonstrate this, we start off by generating synthetic data:
import numpy as np from astropy import units as u import matplotlib.pyplot as plt x = np.linspace(1, 5, 30) * u.micron y = np.exp(-0.5 * (x - 2.5 * u.micron)**2 / (200 * u.nm)**2) * u.mJy plt.plot(x, y, 'ko') plt.xlabel('Wavelength (microns)') plt.ylabel('Flux density (mJy)')
and we then define the initial guess for the fitting and we carry out the fit as we would without any units:
from astropy.modeling import models, fitting g5 = models.Gaussian1D(mean=3 * u.micron, stddev=1 * u.micron, amplitude=1 * u.Jy) fitter = fitting.LevMarLSQFitter() g5_fit = fitter(g5, x, y) plt.plot(x, y, 'ko') plt.plot(x, g5_fit(x), 'r-') plt.xlabel('Wavelength (microns)') plt.ylabel('Flux density (mJy)')
Fitting with equivalencies¶
Let’s now consider the case where the data is not equivalent to those of the parameters, but they are convertible via equivalencies. In this case, the equivalencies can either be passed via a dictionary as shown higher up for the evaluation examples:
g6 = models.Gaussian1D(mean=110 * u.THz, stddev=10 * u.THz, amplitude=1 * u.Jy) g6_fit = fitter(g6, x, y, equivalencies={'x': u.spectral()}) plt.plot(x, g6_fit(x, equivalencies={'x': u.spectral()}), 'b-') plt.xlabel('Wavelength (microns)') plt.ylabel('Flux density (mJy)')
In this case, the fit (in blue) is slightly worse, because a Gaussian in frequency space (blue) is not a Gaussian in wavelength space (red). As mentioned previously, you can also set input_units_equivalencies on the model itself to avoid having to pass extra arguments to the fitter:
g6.input_units_equivalencies = {'x': u.spectral()} g6_fit = fitter(g6, x, y) | http://docs.astropy.org/en/latest/modeling/units.html | CC-MAIN-2019-51 | refinedweb | 976 | 51.44 |
Virtually all web applications use some form of user analytics to determine which aspects of an application are popular and which are causing issues for users. Probably the most well known is Google Analytics but there are other similar services that offer additional options and features. One such service is Segment which can act as a funnel into other analytics engines such as Google Analytics, Mixpanel, or Salesforce.
In this post I show how you can add the Segment analytics.js library to your ASP.NET Core application, to provide analytics for your application.
I'm only looking at how to add client-side analytics to a server-side rendered ASP.NET Core application i.e. an MVC application using Razor. If you want to add analytics to an SPA app that uses Angular for example, see the Segment documentation.
Client-side vs. Server-side tracking
Segment supports two types of tracking: client-side and server-side. The difference should be fairly obvious:
- Client-side tracking uses JavaScript to make calls to the Segment API, to track page views, sign-ins, page clicks etc.
- Server-side tracking happens on the server. That means you can send data that's only available on the server, or that you wouldn't want to send to a client.
Whether you want server-side tracking, client-side tracking, or both, depends on your requirements. Segment has a good breakdown of the pros and cons of both approaches on their docs.
In this post I'm going to add client-side tracking using Segment to an ASP.NET Core application.
Fetching an API key
I'll assume you already have a Segment account - if not, head to and signup.
Once you have configured your account, you'll need to obtain an API key for your app. If you haven't already, create a new source by clicking Add Source on the Home screen. Select the JavaScript source, and enter all the required fields.
Once the source is configured, view the API keys for the source, and make a note of the Write key. This is the API key you will provide when calling the Segment API.
With the Segment side complete, we can move on to your application. Even though we're doing client-side tracking here, we need to do some work on the server.
Configuring the server-side components
Given I said I'm only looking at client-side tracking, you might be surprised to know you need any server-side components. However, if you're rendering your pages server side using Razor, you need a way of passing the API keys, and the user's ID to the JavaScript code. The easiest way is to write the values directly in the JavaScript rendered in your layout.
Adding the configuration
First thing's first, you need somewhere to store the API key. The simplest place would be to dump it in appsettings.json, but you shouldn't put values like API keys in there. The Segment key isn't really that sensitive (we'll be exposing it in JavaScript anyway) but out of principle, it just shouldn't be there.
Never store API keys in appsettings.json - store them in User Secrets, environment variables, or a password vault like Azure Key Vault.
Store the API key in the User Secrets JSON file for now, using a suitably descriptive key name:
{ "Segment": { "ApiKey": "56f7fggjsGGyishfuknvyfGFDfg3643" } }
Assuming you're using the default web host builder (or similar) this value will be added to your
IConfiguration object. Create a strongly-typed settings object for good measure:
public class SegmentSettings { public string ApiKey { get; set; } }
And bind it to your configuration in
Startup.ConfigureServices:
public class Startup { public Startup(IConfiguration configuration) { Configuration = configuration; } public IConfiguration Configuration { get; set; } public void ConfigureServices(IServiceCollection services) { services.Configure<SegmentSettings>(Configuration.GetSection("Segment")); } }
Now you've got the Segment API key available in your application, you can look at rendering the analytics.js JavaScript code.
Rendering the analytics code in Razor
The Segment JavaScript API is exposed as the analytics.js library. This library lets you send all sorts of analytics to Segment from a client, but at it's simplest you just need to do three things:
- Load the analytics.js library
- Initialise the library with your API key
- Call
page()to track a page view.
You can read about this and all the other options available in the quickstart guide in Segment's documentation. I'm going to create a partial view called
_SegmentPartial.cshtml, for rendering the JavaScript snippet. You can add this partial to your application by adding the following to your
_Layout.cshtml.
@await Html.PartialAsync("_SegmentPartial");
The Razor partial itself consists almost entirely of the JavaScript snippet provided by Segment:
@inject IOptions<SegmentSettings> Settings @{ var apiKey = Settings.Value.ApiKey } "); analytics.page(); }}(); </script>
There's a couple of things to note here. We're injecting the API key using the strongly-typed
SegmentSettings options object directly into the view, and then writing the key out using
@apiKey. This will HTML encode the output, but given we know the
apiKey is alphanumeric, this shouldn't be an issue.
This is a special case as we know the key is not coming form user input and contains a known set of safe values, but it's bad practice really. Generally speaking you should use one of the techniques discussed in the docs to inject values into JavaScript code.
If you reload your website, you should see the JavaScript snippet rendered to the page, and if you look in the debugger of your Segment Source you should see a tracking event for the page:
Associating page data with a user
You've now got basic page analytics, but what if you want to send more information. The analytics.js library lets you track a variety of different properties and events, but often one of the most important is tracking individual users. This is extremely powerful as it lets you track a user's flow through your application, and where they hit stumbling blocks for example.
User tracking and privacy is obviously a hot-topic at the moment, but I'm going to just avoid that for now. You should always take into consideration your user's expectation of privacy, especially with the recent GDPR legislation.
To associate multiple
analytics.page() and
analytics.track() calls with a specific user, you must first call
analytics.identify() in your page. You should add this call just after the
analytics.load() call and just before
analytics.page() in our JavaScript snippet.
In order to track a user, you need a unique identifier. If a user is browsing anonymously, then Segment will assign an anonymous ID automatically; you don't need to do anything. However, if a user has logged in to your app, you can associate your Segment data with them by providing a unique ID.
In this example, I'm going to assume you're using a default ASP.NET Core Identity setup, so that when a user logs in to your app, a
ClaimsPrincipal is set which contains two claims:
ClaimTypes.NameIdentifier: the unique identifier for the user
ClaimTypes.Name: the name of the user (often an email address)
For privacy/security reasons, you may not want to expose the unique id of your users to a third-party API (and the client browser). You can work around this by creating an additional unique GUID for each user, and adding an additional
Claimto the
ClaimsPrincipalon login. That's beyond the scope of this post, so I'll just use the two main claims for now.
The following Razor uses the
User property on the page to check if the current user is authenticated. If they are, it extracts the
id and
analytics.identify(), passing in the user
id, and the serialized
traits object.
@inject IOptions<SegmentSettings> Settings @using System.Security.Claims @using System.Text.Encodings.Web @using System.Security.Claims @{ var apiKey = Settings.Value.ApiKey var isAuthenticated = User?.Identity?.IsAuthenticated ?? false; if (isAuthenticated) { var id = User.Claims.First(x => x.Type == ClaimTypes.NameIdentifier).Value; var name = User.Claims.First(x => x.Type == ClaimTypes.Name).Value; var traits = new {username = name, email = name}; } } "); @if (isAuthenticated) { @:analytics.identify('@id', @Json.Serialize(traits)); } analytics.page(); }}(); </script>
Now if you login to your application, you should see an additional
identify call in the Segment Debugger, containing the
id and the additional
traits. Actions taken by that user will be associated together, so you can easily follow the steps a user took before they ran into an issue for, example.
There's rather more logic in this partial than I like to see in a view so I suggest encapsulating this logic somewhere else, perhaps by converting it to a ViewComponent.
There's many more things you can do to provide analytics for your application, but I'll leave you to check out the excellent Segment documentation if you want to do more.
Summary
In this post I showed how you can use a Segment's analytics.js library to add client-side analytics to your ASP.NET Core application. Adding analytics is as simple as including a JavaScript snippet and providing an API key. I also showed how you can associate page actions with users by reading
Claims from the
ClaimsPrincipal and calling
analytics.identify(). | https://andrewlock.net/adding-segment-client-side-analytics-to-an-asp-net-core-application/ | CC-MAIN-2019-47 | refinedweb | 1,559 | 56.05 |
If you finished the first part of this series, then you may already have a lot of ideas for your own Django applications. At some point, you might decide to extend them with user accounts. In this step-by-step tutorial, you’ll learn how to work with Django user management and add it to your program.
By the end of this tutorial, you’ll be able to:
- Create an application where users can register, log in, and reset and change passwords on their own
- Edit the default Django templates responsible for user management
- Send password reset emails to actual email addresses
- Authenticate using an external service
Let’s get started!
Free Bonus: Click here to get access to a free Django Learning Resources Guide (PDF) that shows you tips and tricks as well as common pitfalls to avoid when building Python + Django web applications.
Set Up a Django Project
This tutorial uses Django 3.0 and Python 3.6. It focuses on user management, so you won’t use any advanced or responsive styling. It also doesn’t deal with groups and permissions, only with creating and managing user accounts.
It’s a good idea to use a virtual environment when working with Python projects. That way, you can always be sure that the
python command points to the right version of Python and that the modules required by your project have correct versions. To read more about it, check out Python Virtual Environments: A Primer.
To set up a virtual environment on Linux and macOS, run these commands:
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install --upgrade pip
(venv) $ python -m pip install django
To activate a virtual environment on Windows, run this command:
C:\> venv\Scripts\activate
Now that the environment is ready, you can create a new project and an application to store all your user management code:
(venv) $ django-admin startproject awesome_website
(venv) $ cd awesome_website
(venv) $ python manage.py startapp users
In this example, your application is called
users. Keep in mind that you need to install it by adding it to
INSTALLED_APPS:
# awesome_website/settings.py

INSTALLED_APPS = [
    "users",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]
Next, apply the migrations and run the server:
(venv) $ python manage.py migrate
(venv) $ python manage.py runserver
This will create all user-related models in the database and start your application at http://localhost:8000/.
Note: In this tutorial, you’ll be using Django’s built-in user model. In practice, you would more likely create a custom user model, extending the functionality offered by Django. You can read more about customizing the default user model in Django’s documentation.
There’s one more thing you should do for this setup. By default, Django enforces strong passwords to make user accounts less prone to attacks. But you’re going to change passwords very often during the course of this tutorial, and figuring out a strong password each time would be very inconvenient.
You can solve this issue by disabling password validators in settings. Just comment them out, leaving an empty list:
# awesome_website/settings.py", # }, ]
Now Django will allow you to set passwords like
password or even
pass, making your work with the user management system much easier. Just remember to enable the validators in your actual application!
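To see what those validators were doing for you, here's a standalone sketch of how a Django-style password validator is shaped. A validator is just a class with `validate()` and `get_help_text()` methods; the real ones raise `django.core.exceptions.ValidationError`, but this illustration uses the built-in `ValueError` so it runs without Django installed:

```python
# Sketch of a Django-style password validator (not Django's actual class).
# Real validators raise django.core.exceptions.ValidationError; this
# standalone version uses ValueError so it runs without Django.

class MinimumLengthValidator:
    def __init__(self, min_length=8):
        self.min_length = min_length

    def validate(self, password, user=None):
        # Django calls validate() for each entry in AUTH_PASSWORD_VALIDATORS
        if len(password) < self.min_length:
            raise ValueError(
                f"This password is too short. It must contain "
                f"at least {self.min_length} characters."
            )

    def get_help_text(self):
        return f"Your password must contain at least {self.min_length} characters."

validator = MinimumLengthValidator(min_length=8)
validator.validate("a-much-longer-password")  # passes silently
try:
    validator.validate("pass")
except ValueError as error:
    print(error)
```

With the validators commented out, nothing runs this check, which is why short passwords like `pass` are now accepted.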
For this tutorial, it would also be useful to have access to the admin panel so you can track newly created users and their passwords. Go ahead and create an admin user:
(venv) $ python manage.py createsuperuser
Username (leave blank to use 'pawel'): admin
Email address: admin@example.com
Password:
Password (again):
Superuser created successfully.
With the password validators disabled, you can use any password you like.
Create a Dashboard View
Most user management systems have some sort of main page, usually known as a dashboard. You’ll create a dashboard in this section, but because it won’t be the only page in your application, you’ll also create a base template to keep the looks of the website consistent.
You won’t use any of Django’s advanced template features, but if you need a refresher on the template syntax, then you might want to check out Django’s template documentation.
Note: All templates used in this tutorial should be placed in the
users/templates directory. If the tutorial mentions a template file
users/dashboard.html, then the actual file path is
users/templates/users/dashboard.html. For
base.html, the actual path is
users/templates/base.html, and so on.
The
users/templates directory doesn’t exist by default, so you’ll have to create it first. The structure of your project will look like this:
awesome_website/
│
├── awesome_website/
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
│
├── users/
│   │
│   ├── migrations/
│   │   └── __init__.py
│   │
│   ├── templates/
│   │   │
│   │   ├── registration/  ← Templates used by Django user management
│   │   │
│   │   ├── users/  ← Other templates of your application
│   │   │
│   │   └── base.html  ← The base template of your application
│   │
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
│
├── db.sqlite3
└── manage.py
Create a base template called
base.html with the following content:
<!--users/templates/base.html-->
<h1>Welcome to Awesome Website</h1>

{% block content %}
{% endblock %}
The base template doesn’t do much. It shows the message
Welcome to Awesome Website and defines a block called
content. The block is empty for now, and other templates are going to use it to include their own content.
Now you can create a template for the dashboard. It should be called
users/dashboard.html and should look like this:
<!--users/templates/users/dashboard.html-->
{% extends 'base.html' %}

{% block content %}
    Hello, {{ user.username|default:'Guest' }}!
{% endblock %}
This doesn’t add a lot to the base template yet. It just shows the welcome message with the current user’s username. If the user isn’t logged in, then Django will still set the
user variable using an
AnonymousUser object. An anonymous user always has an empty username, so the dashboard will show
Hello, Guest!
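The `default` template filter substitutes its argument whenever the value is falsy, which is exactly what happens with an anonymous user's empty username. Here's a pure-Python illustration of that logic; the `User` and `AnonymousUser` classes below are simplified stand-ins, not Django's actual classes:

```python
# Standalone illustration of {{ user.username|default:'Guest' }}.
# These classes are simplified stand-ins for Django's user objects.

class AnonymousUser:
    username = ""  # anonymous users always have an empty username

class User:
    def __init__(self, username):
        self.username = username

def default(value, fallback):
    # Django's `default` filter substitutes the fallback for falsy values
    return value if value else fallback

print(default(User("admin").username, "Guest"))    # admin
print(default(AnonymousUser().username, "Guest"))  # Guest
```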
For your template to work, you need to create a view that renders it and a URL that uses the view:
# users/views.py

from django.shortcuts import render

def dashboard(request):
    return render(request, "users/dashboard.html")
Now create a
users/urls.py file and add a path for the
dashboard view:
# users/urls.py

from django.conf.urls import url

from users.views import dashboard

urlpatterns = [
    url(r"^dashboard/", dashboard, name="dashboard"),
]
Don’t forget to add your application’s URLs to your project’s URLs:
# awesome_website/urls.py

from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    url(r"^", include("users.urls")),
    url(r"^admin/", admin.site.urls),
]
You can now test the dashboard view. Open http://localhost:8000/dashboard/ in your browser. You should see a screen similar to this one:
Now open the admin panel at http://localhost:8000/admin/ and log in as the admin user. Your dashboard should now look a bit different:
As you can see, your new template correctly displays the name of the currently logged-in user.
Work With Django User Management
A complete website needs a bit more than just a dashboard. Luckily, Django has a lot of user management–related resources that’ll take care of almost everything, including login, logout, password change, and password reset. Templates aren’t part of those resources, though. You have to create them on your own.
Start by adding the URLs provided by the Django authentication system into your application:
# users/urls.py

from django.conf.urls import include, url

from users.views import dashboard

urlpatterns = [
    url(r"^accounts/", include("django.contrib.auth.urls")),
    url(r"^dashboard/", dashboard, name="dashboard"),
]
That will give you access to all of the following URLs:
- accounts/login/ is used to log a user into your application. Refer to it by the name "login".
- accounts/logout/ is used to log a user out of your application. Refer to it by the name "logout".
- accounts/password_change/ is used to change a password. Refer to it by the name "password_change".
- accounts/password_change/done/ is used to show a confirmation that a password was changed. Refer to it by the name "password_change_done".
- accounts/password_reset/ is used to request an email with a password reset link. Refer to it by the name "password_reset".
- accounts/password_reset/done/ is used to show a confirmation that a password reset email was sent. Refer to it by the name "password_reset_done".
- accounts/reset/<uidb64>/<token>/ is used to set a new password using a password reset link. Refer to it by the name "password_reset_confirm".
- accounts/reset/done/ is used to show a confirmation that a password was reset. Refer to it by the name "password_reset_complete".
This might seem a bit overwhelming, but don’t worry. In the next sections, you’ll learn what each of these URLs does and how to add them to your application.
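One detail worth demystifying is the `<uidb64>` part of the password reset URL: it's the user's primary key encoded with URL-safe Base64. Django does this with `django.utils.http.urlsafe_base64_encode(force_bytes(user.pk))`; the stdlib-only sketch below mirrors the idea (padding handling is simplified for illustration):

```python
import base64

# Sketch of how Django builds the <uidb64> part of a password reset link.
# Django uses django.utils.http.urlsafe_base64_encode(force_bytes(user.pk));
# this stdlib-only version mirrors the idea.

def encode_uid(pk):
    # Encode the primary key and strip the "=" padding, like Django does
    return base64.urlsafe_b64encode(str(pk).encode()).decode().rstrip("=")

def decode_uid(uidb64):
    padding = "=" * (-len(uidb64) % 4)  # restore the stripped padding
    return base64.urlsafe_b64decode(uidb64 + padding).decode()

uid = encode_uid(42)
print(uid)              # NDI
print(decode_uid(uid))  # 42
```

The `<token>` part is a separate, time-limited signed value that proves the reset link came from your server.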
Create a Login Page
For the login page, Django will try to use a template called
registration/login.html. Go ahead and create it:
<!--users/templates/registration/login.html-->
{% extends 'base.html' %}

{% block content %}
    <h2>Login</h2>

    <form method="post">
        {% csrf_token %}
        {{ form.as_p }}
        <input type="submit" value="Login">
    </form>

    <a href="{% url 'dashboard' %}">Back to dashboard</a>
{% endblock %}
This will display a
Login heading, followed by a login form. Django uses a dictionary, also known as a context, to pass data to a template while rendering it. In this case, a variable called
form will already be included in the context—all you need to do is display it. Using
{{ form.as_p }} will render the form as a series of HTML paragraphs, making it look a bit nicer than just
{{ form }}.
The
{% csrf_token %} line inserts a cross-site request forgery (CSRF) token, which is required by every Django form. There’s also a button for submitting the form and, at the end of the template, a link that will take your users back to the dashboard.
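The core idea behind `{% csrf_token %}` can be sketched in a few lines of plain Python: the server plants a random token in the form and later checks that the submitted copy matches. Django's real implementation (token masking, cookies, sessions) is more involved, but the sketch below captures the concept:

```python
import hmac
import secrets

# Simplified sketch of the idea behind {% csrf_token %}: the server plants
# a random token in the rendered form and later verifies the submitted copy.
# Django's real implementation (masking, cookies, sessions) is more involved.

def issue_token():
    return secrets.token_urlsafe(32)

def is_valid_submission(session_token, submitted_token):
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(session_token, submitted_token)

token = issue_token()
print(is_valid_submission(token, token))          # True
print(is_valid_submission(token, issue_token()))  # False
```

A forged cross-site request can't know the token, so its submission fails the check.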
You can further improve the looks of the form by adding a small CSS script to the base template:
<!--users/templates/base.html-->
<style>
    label, input {
        display: block;
    }
    span.helptext {
        display: none;
    }
</style>

<h1>Welcome to Awesome Website</h1>

{% block content %}
{% endblock %}
By adding the above code to the base template, you’ll improve the looks of all of your forms, not just the one in the dashboard.
You can now open http://localhost:8000/accounts/login/ in your browser, and you should see something like this:
Use the credentials of your admin user and press Login. Don’t be alarmed if you see an error screen:
According to the error message, Django can’t find a path for
accounts/profile/, which is the default destination for your users after a successful login. Instead of creating a new view, it would make more sense to reuse the dashboard view here.
Luckily, Django makes it easy to change the default redirection. All you need to do is add one line at the end of the settings file:
# awesome_website/settings.py

LOGIN_REDIRECT_URL = "dashboard"
Try to log in again. This time you should be redirected to the dashboard without any errors.
Create a Logout Page
Now your users can log in, but they should also be able to log out. This process is more straightforward because there’s no form—they just need to click a link. After that, Django will redirect your users to
accounts/logout and will try to use a template called
registration/logged_out.html.
However, just like before, you can change the redirection. For example, it would make sense to redirect your users back to the dashboard. To do so, you need to add one line at the end of the settings file:
# awesome_website/settings.py

LOGOUT_REDIRECT_URL = "dashboard"
Now that both login and logout are working, it would be a good idea to add proper links to your dashboard:
<!--users/templates/users/dashboard.html-->
{% extends 'base.html' %}

{% block content %}
Hello, {{ user.username|default:'Guest' }}!

<div>
    {% if user.is_authenticated %}
        <a href="{% url 'logout' %}">Logout</a>
    {% else %}
        <a href="{% url 'login' %}">Login</a>
    {% endif %}
</div>
{% endblock %}
If a user is logged in, then
user.is_authenticated will return
True and the dashboard will show the
Logout link. If a user is not logged in, then
the user variable will be set to
AnonymousUser, and
user.is_authenticated will return
False. In that case, the
Login link will be displayed.
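The template logic can be sketched in plain Python to make the two user states concrete (simplified stand-ins, not Django’s actual classes):

```python
class AnonymousUser:
    """Stand-in for Django's AnonymousUser (simplified)."""
    is_authenticated = False
    username = ""

class User:
    """Stand-in for a logged-in user (simplified)."""
    is_authenticated = True

    def __init__(self, username):
        self.username = username

def greeting(user):
    # Mirrors: Hello, {{ user.username|default:'Guest' }}!
    return f"Hello, {user.username or 'Guest'}!"

def nav_link(user):
    # Mirrors the {% if user.is_authenticated %} branch.
    return "Logout" if user.is_authenticated else "Login"

print(greeting(AnonymousUser()), "/", nav_link(AnonymousUser()))  # Hello, Guest! / Login
print(greeting(User("admin")), "/", nav_link(User("admin")))      # Hello, admin! / Logout
```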
The updated dashboard should look like this for nonauthenticated users:
If you log in, then you should see this screen:
Congratulations! You just completed the most important part of the user management system: logging users in and out of the application. But there are a couple more steps ahead of you.
Change Passwords
At some point, your users might want to change their passwords. Instead of making them ask the admin to do it for them, you can add a password change form to your application. Django needs two templates to make this work:
- registration/password_change_form.html to display the password change form
- registration/password_change_done.html to show a confirmation that the password was successfully changed
Start with
registration/password_change_form.html:
<!--users/templates/registration/password_change_form.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Change password</h2>

<form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Change">
</form>

<a href="{% url 'dashboard' %}">Back to dashboard</a>
{% endblock %}
This template looks almost the same as the login template you created earlier. But this time, Django will put a password change form here, not a login form, so the browser will display it differently.
The other template you need to create is
registration/password_change_done.html:
<!--users/templates/registration/password_change_done.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Password changed</h2>

<a href="{% url 'dashboard' %}">Back to dashboard</a>
{% endblock %}
This will reassure your users that the password change was successful and let them go back to the dashboard.
The dashboard would be a perfect place to include a link to your newly created password change form. You just have to make sure it’s shown to logged-in users only:
<!--users/templates/users/dashboard.html-->
{% extends 'base.html' %}

{% block content %}
Hello, {{ user.username|default:'Guest' }}!

<div>
    {% if user.is_authenticated %}
        <a href="{% url 'logout' %}">Logout</a>
        <a href="{% url 'password_change' %}">Change password</a>
    {% else %}
        <a href="{% url 'login' %}">Login</a>
    {% endif %}
</div>
{% endblock %}
If you follow the link in your browser, then you should see the following form:
Go ahead and test it. Change the password, log out, and log in again. You can also try to access the password change page without logging in by accessing the URL directly in your browser. Django is clever enough to detect that you should log in first and will automatically redirect you to the login page.
Send Password Reset Links
Mistakes happen to the best of us, and every now and then someone might forget a password. Your Django user management system should handle that situation, too. This functionality is a bit more complicated because, in order to deliver password reset links, your application has to send emails.
Don’t worry—you don’t have to configure your own email server. For this tutorial, you just need a local test server to confirm that emails are sent. Open your terminal and run this command:
(venv) $ python -m smtpd -n -c DebuggingServer localhost:1025
This will start a simple SMTP server at localhost:1025. It won’t send any emails to actual email addresses. Instead, it’ll show the content of the messages in the command line. (Note that the smtpd module was removed in Python 3.12, so on newer versions you can use the third-party aiosmtpd package as an equivalent debugging server.)
All you need to do now is let Django know that you are going to use it. Add these two lines at the end of the settings file:
# awesome_website/settings.py

EMAIL_HOST = "localhost"
EMAIL_PORT = 1025
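Behind these two settings, Django builds a plain MIME message and hands it to an SMTP backend. A stdlib sketch of roughly what gets sent (header values are illustrative, not Django’s exact output):

```python
from email.message import EmailMessage

# Roughly the message Django's SMTP email backend assembles
# (header values are illustrative).
msg = EmailMessage()
msg["Subject"] = "Password reset on localhost:8000"
msg["From"] = "webmaster@localhost"
msg["To"] = "admin@example.com"
msg.set_content("Please go to the following page and choose a new password:")

print(msg["To"])  # admin@example.com
```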
Django needs two templates for sending password reset links:
- registration/password_reset_form.html to display the form used to request a password reset email
- registration/password_reset_done.html to show a confirmation that a password reset email was sent
They will be very similar to the password change templates you created earlier. Start with the form:
<!--users/templates/registration/password_reset_form.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Send password reset link</h2>

<form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Reset">
</form>

<a href="{% url 'dashboard' %}">Back to dashboard</a>
{% endblock %}
Now add a confirmation template:
<!--users/templates/registration/password_reset_done.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Password reset done</h2>

<a href="{% url 'login' %}">Back to login</a>
{% endblock %}
It would also be a good idea to include a link to the password reset form on the login page, so that users who forget their password can reach it. You can do this by adding <a href="{% url 'password_reset' %}">Reset password</a> just before {% endblock %} in the login template.
Your newly created password reset form should look like this:
Type in your admin’s email address (
admin@example.com) and press Reset. You should see the following message in the terminal running the email server:
---------- MESSAGE FOLLOWS ----------
b'Content-Type: text/plain; charset="utf-8"'
b'MIME-Version: 1.0'
b'Content-Transfer-Encoding: 7bit'
b'Subject: Password reset on localhost:8000'
b'From: webmaster@localhost'
b'To: admin@example.com'
b'Date: Wed, 22 Apr 2020 20:32:39 -0000'
b'Message-ID: <20200422203239.28625.15187@pawel-laptop>'
b'X-Peer: 127.0.0.1'
b''
b''
b"You're receiving this email because you requested a password reset for your user account at localhost:8000."
b''
b'Please go to the following page and choose a new password:'
b''
b''
b''
b"Your username, in case you've forgotten: admin"
b''
b'Thanks for using our site!'
b''
b'The localhost:8000 team'
b''
------------ END MESSAGE ------------
This is the content of an email that would be sent to your admin. It contains information about the application that sent it plus a password reset link.
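The link works because it encodes the user’s primary key as URL-safe base64 (the uidb64 part) alongside a one-time token. Here is a stdlib sketch of just the encoding step, loosely modeled on django.utils.http.urlsafe_base64_encode (not Django’s exact code):

```python
import base64

def uid_encode(pk: int) -> str:
    # URL-safe base64 of the primary key, with padding stripped.
    return base64.urlsafe_b64encode(str(pk).encode()).rstrip(b"=").decode()

def uid_decode(uidb64: str) -> int:
    # Restore the padding before decoding.
    padding = "=" * (-len(uidb64) % 4)
    return int(base64.urlsafe_b64decode(uidb64 + padding).decode())

print(uid_encode(1))              # MQ
print(uid_decode(uid_encode(1)))  # 1
```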
Reset Passwords
Each password reset email sent by Django contains a link that can be used to reset the password. To handle that link correctly, Django needs two more templates:
- registration/password_reset_confirm.html to display the actual password reset form
- registration/password_reset_complete.html to show a confirmation that a password was reset
These will look very similar to the password change templates. Start with the form:
<!--users/templates/registration/password_reset_confirm.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Confirm password reset</h2>

<form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Confirm">
</form>
{% endblock %}
Just like before, Django will automatically provide a form, but this time it’ll be a password reset form. You also need to add a confirmation template:
<!--users/templates/registration/password_reset_complete.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Password reset complete</h2>

<a href="{% url 'login' %}">Back to login</a>
{% endblock %}
Now, if you follow the password reset link from one of the emails, then you should see a form like this in your browser:
You can now check if it works. Insert a new password into the form, click Confirm, log out, and log in using the new password.
Change Email Templates
You can change the default templates for Django emails just like any other user management–related templates:
- registration/password_reset_email.html determines the body of the email
- registration/password_reset_subject.txt determines the subject of the email
Django provides a lot of variables in the email template context that you can use to compose your own messages:
<!--users/templates/registration/password_reset_email.html-->
Someone asked for password reset for email {{ email }}. Follow the link below:
{{ protocol }}://{{ domain }}{% url 'password_reset_confirm' uidb64=uid token=token %}
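To make the rendered result concrete, here is the same URL assembled with str.format; the context values are illustrative stand-ins for what the password reset view provides:

```python
# Illustrative context values; in Django these are filled in by the
# password reset view when it renders the email template.
context = {
    "protocol": "http",
    "domain": "localhost:8000",
    "uid": "MQ",
    "token": "set-password",  # placeholder, not a real token
}

url = "{protocol}://{domain}/accounts/reset/{uid}/{token}/".format(**context)
print(url)  # http://localhost:8000/accounts/reset/MQ/set-password/
```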
You can also change the subject to something shorter, like
Reset password. If you implement these changes and send a password reset email again, then you should see this message:
---------- MESSAGE FOLLOWS ----------
b'Content-Type: text/plain; charset="utf-8"'
b'MIME-Version: 1.0'
b'Content-Transfer-Encoding: 7bit'
b'Subject: Reset password'
b'From: webmaster@localhost'
b'To: admin@example.com'
b'Date: Wed, 22 Apr 2020 20:36:36 -0000'
b'Message-ID: <20200422203636.28625.36970@pawel-laptop>'
b'X-Peer: 127.0.0.1'
b''
b'Someone asked for password reset for email admin@example.com.'
b'Follow the link below:'
b''
------------ END MESSAGE ------------
As you can see, both the subject and the content of the email have changed.
Register New Users
Your application can now handle all URLs related to Django user management. But one feature isn’t working yet.
You may have noticed that there’s no option to create a new user. Unfortunately, Django doesn’t provide user registration out of the box. You can, however, add it on your own.
Django offers a lot of forms that you can use in your projects. One of them is
UserCreationForm. It contains all the necessary fields to create a new user. However, it doesn’t include an email field.
In many applications, this might not be a problem, but you’ve already implemented a password reset feature. Your users need to configure an email address or else they won’t be able to receive password reset emails.
To fix that, you need to add your own user creation form. Don’t worry—you can reuse almost the entire
UserCreationForm. You just need to add the email field.
Create a new Python file called
users/forms.py and add a custom form there:
# users/forms.py

from django.contrib.auth.forms import UserCreationForm

class CustomUserCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        fields = UserCreationForm.Meta.fields + ("email",)
As you can see, your
CustomUserCreationForm extends Django’s
UserCreationForm. The inner class
Meta keeps additional information about the form and in this case extends
UserCreationForm.Meta, so almost everything from Django’s form will be reused.
The key difference is the
fields attribute, which determines the fields that’ll be included in the form. Your custom form will use all the fields from
UserCreationForm and will add the email field at the end.
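The Meta trick is ordinary Python inheritance plus tuple concatenation. A stripped-down sketch with illustrative field names (Django’s real UserCreationForm.Meta also sets the model and other options):

```python
class BaseMeta:
    # Stand-in for UserCreationForm.Meta (field names are illustrative).
    fields = ("username", "password1", "password2")

class Meta(BaseMeta):
    # Reuse everything from the base class, appending one field.
    fields = BaseMeta.fields + ("email",)

print(Meta.fields)  # ('username', 'password1', 'password2', 'email')
```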
Now that the form is ready, create a new view called register():
 1 # users/views.py
 2
 3 from django.contrib.auth import login
 4 from django.shortcuts import redirect, render
 5 from django.urls import reverse
 6 from users.forms import CustomUserCreationForm
 7
 8 def dashboard(request):
 9     return render(request, "users/dashboard.html")
10
11 def register(request):
12     if request.method == "GET":
13         return render(
14             request, "users/register.html",
15             {"form": CustomUserCreationForm}
16         )
17     elif request.method == "POST":
18         form = CustomUserCreationForm(request.POST)
19         if form.is_valid():
20             user = form.save()
21             login(request, user)
22             return redirect(reverse("dashboard"))
23         # If the form is invalid, re-display it with validation errors.
24         return render(
25             request, "users/register.html", {"form": form}
26         )
Here’s a breakdown of the
register() view:
Lines 12 to 16: If the view is displayed by a browser, then it will be accessed by a
GETmethod. In that case, a template called
users/register.htmlwill be rendered. The last argument of
.render()is a context, which in this case contains your custom user creation form.
Lines 17 to 18: If the form is submitted, then the view will be accessed by a
POSTmethod. In that case, Django will attempt to create a user. A new
CustomUserCreationFormis created using the values submitted to the form, which are contained in the
request.POSTobject.
Lines 19 to 22: If the form is valid, then a new user is created on line 20 using
form.save(). Then the user is logged in on line 21 using
login(). Finally, line 22 redirects the user to the dashboard.
The template itself should look like this:
<!--users/templates/users/register.html-->
{% extends 'base.html' %}

{% block content %}
<h2>Register</h2>

<form method="post">
    {% csrf_token %}
    {{ form }}
    <input type="submit" value="Register">
</form>

<a href="{% url 'login' %}">Back to login</a>
{% endblock %}
This is very similar to the previous templates. Just like before, it takes the form from the context and renders it. The only difference is that this time you had to add the form to the context on your own instead of letting Django do it.
Remember to add a URL for the registration view:
# users/urls.py

from django.conf.urls import include, url
from users.views import dashboard, register

urlpatterns = [
    url(r"^accounts/", include("django.contrib.auth.urls")),
    url(r"^dashboard/", dashboard, name="dashboard"),
    url(r"^register/", register, name="register"),
]
It’s also a good idea to add a link to the registration form on the login page. You can do this by adding <a href="{% url 'register' %}">Register</a> just before {% endblock %} in the login template.
Your newly created form should look like this:
Please keep in mind that this is just an example of a registration form. In the real world, you would probably send emails with confirmation links after someone creates a user account, and you would also display proper error messages if someone tried to register an account that already exists.
Send Emails to the Outside World
At the moment, your application can send emails to the local SMTP server so you can read them in the command line. It would be far more useful to send emails to actual email addresses. One way to do this is by using Mailgun.
For this step, you’ll need a Mailgun account. The basic version is free and will let you send emails from a rather obscure domain, but it will work for the purpose of this tutorial.
After you create your account, go to the Dashboard page and scroll down until you reach “Sending domains.” There you will find your sandbox domain:
Click on the domain. You’ll be redirected to a page where you can select the way you want to send emails. Choose SMTP:
Scroll down until you reach the credentials for your account:
You should find the following values:
- SMTP hostname
- Port
- Username
- Default password
All you need to do is add these values to the settings file. Keep in mind that you should never include any credentials directly in your code. Instead, add them as environmental variables and read their values in Python.
On Linux and macOS, you can add an environmental variable in the terminal like this:
(venv) $ export EMAIL_HOST_USER=your_email_host_user
On Windows, you can run this command in Command Prompt:
C:\> set EMAIL_HOST_USER=your_email_host_user
Repeat the same process for
EMAIL_HOST_PASSWORD, and remember to export the variables in the same terminal window where you run the Django server. After both variables are added, update the settings:
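Reading the variables back is a plain os.environ.get call, which returns None for anything that was never exported. A quick sketch (the value below is a stand-in, not a real credential):

```python
import os

# Stand-in for the shell `export`/`set` commands above.
os.environ["EMAIL_HOST_USER"] = "postmaster@sandbox.example.org"

user = os.environ.get("EMAIL_HOST_USER")
missing = os.environ.get("SOME_VARIABLE_NOBODY_SET")

print(user)     # postmaster@sandbox.example.org
print(missing)  # None
```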
# awesome_website/settings.py

EMAIL_HOST = "smtp.mailgun.org"
EMAIL_PORT = 587
EMAIL_HOST_USER = os.environ.get("EMAIL_HOST_USER")
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_HOST_PASSWORD")
EMAIL_USE_TLS = True
The values of EMAIL_HOST and EMAIL_PORT should be the same for all sandbox domains, but you have to use your own username and password. There’s also one additional value called EMAIL_USE_TLS, which is set to True so that the connection to the mail server is encrypted.
To check if this works, you have to create a new user with your own email address. Go to the admin panel and log in as the admin user. Go to Users and click ADD USER. Select any username and password you like and click Save and continue editing. Then insert the same email address you used for your Mailgun account into the email address field and save the user.
After creating the user, navigate to the password reset page. Enter your email address and press Send. The process of sending an email will take a bit longer than with the local server. After a few moments, the password reset email should arrive in your inbox. It may also be in your spam folder, so don’t forget to check there too.
The sandbox domain will only work with the email address that you used to create your Mailgun account. To send emails to other recipients, you’ll have to add a custom domain.
Log in With GitHub
Many modern websites offer an option to authenticate using social media accounts. One such example is Google login, but in this tutorial you’ll learn how to integrate with GitHub.
Luckily, there’s a very useful Python module that takes care of this task. It’s called
social-auth-app-django. This tutorial shows just the basic configuration of the module, but you can learn more about it from its documentation, especially the part dedicated to Django configuration.
Create a GitHub Application
To use GitHub authentication with Django user management, you first have to create an application. Log in to your GitHub account, go to GitHub’s page for registering a new OAuth application, and fill in the form:
The most important part is the Authorization callback URL. It has to point back to your application.
After you register the application, you’ll be redirected to a page with credentials:
Add the values of Client ID and Client Secret to settings the same way you added Mailgun email credentials:
# awesome_website/settings.py

SOCIAL_AUTH_GITHUB_KEY = os.environ.get("SOCIAL_AUTH_GITHUB_KEY")
SOCIAL_AUTH_GITHUB_SECRET = os.environ.get("SOCIAL_AUTH_GITHUB_SECRET")
You can now check if this works. Go to your application’s login page and select the option to log in with GitHub. Assuming you’ve logged out after creating the application in the previous step, you should be redirected to GitHub’s login page:
In the next step you’ll be asked to authorize the GitHub application:
If you confirm, then you’ll be redirected back to your application. You can now find a new user in the admin panel:
The newly created user has the same username as your GitHub handle and doesn’t have a password.
Select Authentication Backend
There’s one problem with the example above: by enabling GitHub login, you accidentally broke the normal user creation process.
That happened because Django previously had only one authentication backend to choose from, and now it has two. Django doesn’t know which one to use when creating new users, so you’ll have to help it decide. To do that, replace the line
user = form.save() in your registration view:
# users/views.py

from django.contrib.auth import login
from django.shortcuts import redirect, render
from django.urls import reverse
from users.forms import CustomUserCreationForm

def dashboard(request):
    return render(request, "users/dashboard.html")

def register(request):
    if request.method == "GET":
        return render(
            request, "users/register.html",
            {"form": CustomUserCreationForm}
        )
    elif request.method == "POST":
        form = CustomUserCreationForm(request.POST)
        if form.is_valid():
            user = form.save(commit=False)
            user.backend = "django.contrib.auth.backends.ModelBackend"
            user.save()
            login(request, user)
            return redirect(reverse("dashboard"))
        # If the form is invalid, re-display it with validation errors.
        return render(
            request, "users/register.html", {"form": form}
        )
A user is created from the form like before, but this time it’s not immediately saved thanks to
commit=False. In the next line, a backend is associated with the user, and only then is the user saved to the database. This way, you can use both normal user creation and social media authentication in the same Django user management system.
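The rule Django applies here can be sketched in a few lines (a simplification of the backend resolution inside django.contrib.auth.login(), not its actual code):

```python
# Two configured backends, as in this tutorial.
AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",
    "social_core.backends.github.GithubOAuth2",
]

def resolve_backend(user_backend, configured):
    # An explicit user.backend attribute always wins; with exactly one
    # configured backend it can be inferred; otherwise it's an error.
    if user_backend:
        return user_backend
    if len(configured) == 1:
        return configured[0]
    raise ValueError(
        "Multiple authentication backends configured; "
        "you must specify which one to use."
    )

print(resolve_backend("django.contrib.auth.backends.ModelBackend",
                      AUTHENTICATION_BACKENDS))
```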
Conclusion
Django is a versatile framework, and it does its best to help you with every possible task, including user management. It provides a lot of ready-to-use resources, but sometimes you need to extend them just a bit.
In this tutorial, you’ve learned how to:
- Extend your Django application with a full set of user management features
- Adjust the default Django templates to better suit your needs
- Use Mailgun to send password reset emails
- Add an option to log in with GitHub
This should provide you with a good starting point for your own amazing ideas. If you think that something’s missing, or if you have any questions, then please don’t hesitate to let me know! | https://realpython.com/django-user-management/ | CC-MAIN-2021-25 | refinedweb | 5,124 | 57.67 |
I’ve got a question about building a gui in javafx.
When I build a program with many pages – for example, the start page contains a couple of buttons ("Add user", "Drop user", "Change user data") and each of these buttons draws a different panel – how should I organise them? When I put all panels in one class it quickly becomes huge and generates many, many bugs. It's also hard to find the part of the code which is broken. Should every panel be defined in a different class? But if so, there is a problem with returning from submenus to the main menu.
Or another example. First panel of program is login page and the next page depends on the inserted login.
How should I organize a lot of panels? Any help would be appreciated.
If your project is not too big, then I would suggest making a Presenter class, which controls the stage and the program flow and shows one of many View classes.
This is an example of a presenter class:
class Presenter {

    private final Stage mainStage;

    Presenter(Stage mainStage) {
        // Keep the stage as a field so the show methods can switch scenes.
        this.mainStage = mainStage;
    }

    public void showA() {
        ViewA a = new ViewA();
        a.setOnBackButton(new ViewCallback() {
            public void call() {
                showB();
            }
        });
        mainStage.setScene(new Scene(a));
    }

    public void showB() {
        ViewB b = new ViewB();
        b.setOnBackButton(new ViewCallback() {
            public void call() {
                showA();
            }
        });
        mainStage.setScene(new Scene(b));
    }
}
This is an example of a view body:
public class ViewA extends VBox {  // extends a Parent so it can go into a Scene

    private ViewCallback onBackButton = null;

    public ViewA() {
        Button b = new Button("shoot me!");
        b.setOnAction(new EventHandler<ActionEvent>() {
            public void handle(ActionEvent event) {
                callBack();
            }
        });
        getChildren().add(b);
    }

    public void setOnBackButton(ViewCallback callback) {
        onBackButton = callback;
    }

    private void callBack() {
        if (onBackButton != null) {
            onBackButton.call();
        }
    }
}
This is the ViewCallback interface:
public interface ViewCallback {
    void call();
}
You can use this simple callback interface or JavaFX's generic Callback interface.