Relay Interfacing with PIC Microcontroller
In this project we will interface a relay with the PIC16F877A microcontroller. A relay is an electromechanical device that switches high-voltage, high-current appliances ON or OFF from a lower voltage level. A relay provides isolation between the two voltage levels and is generally used to control AC appliances. Various types of relays are available, from mechanical to solid-state; in this project we will use a mechanical relay.
In this project we will do the following things-
- We will interface a switch for input from the user.
- We will control a 220V AC bulb with a 5V relay.
- To drive the relay we will use a BC547 NPN transistor, which will in turn be controlled by the PIC16F877A. An LED will indicate whether the relay is ON or OFF.
Component Required:
- PIC16F877A
- 20 MHz crystal
- 2 pcs 33pF ceramic capacitors
- 3 pcs 4.7k resistors
- 1k resistor
- 1 LED
- BC547 Transistor
- 1N4007 Diode
- 5V cubic relay
- AC bulb
- Breadboard
- Wires for connecting the parts.
- 5V Adapter or any 5V power source with at least 200mA current capabilities.
Relay and its Working:
A relay works like a typical switch. Mechanical relays use a temporary magnet formed by an electromagnetic coil. When enough current flows through this coil, it becomes energized and pulls an arm; as a result, the circuit connected across the relay's contacts is closed or opened. The input and output have no electrical connection, so the relay isolates them. Learn more about relays and their construction here.
Relays are available for different coil voltages, such as 5V, 6V, 12V and 18V. In this project we will use a 5V relay, since our working voltage is 5 volts. This 5V cubic relay can switch a 7A load at 240VAC or a 10A load at 120VAC. Instead of such a heavy load, however, we will switch a 220VAC bulb with it.
This is the 5V Relay we are using in this project. The current rating is clearly specified for two voltage levels, 10A at 120VAC and 7A at 240VAC. We need to connect load across the relay less than the specified rating.
This relay has 5 pins. Looking at the pinout we can see:
L1 and L2 are the pins of the internal electromagnetic coil. We control these two pins to turn the relay ON or OFF. The other three pins are POLE, NO, and NC. The pole is connected to the internal metal plate that changes its connection when the relay turns on. In the normal condition, POLE is shorted with NC, which stands for Normally Closed. When the relay turns on, the pole changes its position and connects with NO, which stands for Normally Open.
In our circuit, the relay is wired with a transistor and a diode. A relay packaged together with its transistor and diode is sold as a Relay Module, so if you use a Relay Module you don't need to build the driver circuit (transistor and diode) yourself.
Relay is used in all the Home Automation Projects to control the AC Home Appliances.
Circuit Diagram:
Complete circuit for connecting Relay with PIC Microcontroller is given below:
In the above schematic the PIC16F877A is used; the LED and transistor are connected on port B and are controlled via the tactile switch at RB0. R1 provides the base bias current for the transistor. R2 is a pull-down resistor across the tactile switch; it provides logic 0 when the switch is not pressed. The 1N4007 is a clamp (flyback) diode across the relay's electromagnetic coil: when the relay turns off, high-voltage spikes can occur, and the diode suppresses them. The transistor is required to drive the relay because the coil needs more than 50mA, which the microcontroller cannot provide. We could also use a ULN2003 instead of the transistor - a wiser choice if the application needs more than two or three relays; check the relay module circuit. The LED on RB2 indicates that the relay is on.
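As a quick sanity check on the driver stage, we can estimate the base current that R1 (4.7k) supplies and the collector current the BC547 can then sink. The coil current (~70 mA) and the minimum gain (~100) used below are typical assumed values, not figures taken from this particular relay's datasheet:

```python
# Back-of-the-envelope check for the BC547 relay driver.
# Assumed values (not from the schematic): relay coil ~70 mA,
# conservative BC547 current gain ~100, V_BE drop ~0.7 V.
V_CC = 5.0        # supply voltage (V)
R_BASE = 4700.0   # base resistor R1 (ohms)
V_BE = 0.7        # base-emitter drop (V)
HFE_MIN = 100.0   # conservative BC547 current gain
I_COIL = 0.070    # assumed relay coil current (A)

i_base = (V_CC - V_BE) / R_BASE      # current R1 pushes into the base
i_collector_max = i_base * HFE_MIN   # collector current this can support

print(f"base current: {i_base * 1000:.2f} mA")
print(f"max collector current: {i_collector_max * 1000:.0f} mA")
print("saturated drive OK" if i_collector_max > I_COIL else "increase base drive")
```

With these assumptions the base current is about 0.91 mA, supporting roughly 91 mA of collector current - comfortably above the assumed coil current, which is why a single 4.7k base resistor is enough here.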
The final circuit will look like this-
Code Explanation:
At the beginning of the main.c file, we added the configuration lines for pic16F877A and also defined the pin names across PORTB.
As always, we first need to set the configuration bits of the PIC microcontroller, define some macros, include the libraries, and set the crystal frequency. You can find all of this in the complete code given at the end. We set RB0 as input; the switch is connected to this pin.
#include <xc.h> /* Hardware related definition */
#define _XTAL_FREQ 20000000 // Crystal frequency (20 MHz), used in delay
#define SW PORTBbits.RB0
#define RELAY PORTBbits.RB1
#define LED PORTBbits.RB2
After that, we called system_init() function where we initialized pin direction, and also configured the default state of the pins.
In the system_init() function we will see
void system_init(void){
    TRISBbits.TRISB0 = 1; // setting SW as input
    TRISBbits.TRISB1 = 0; // setting relay pin as output
    TRISBbits.TRISB2 = 0; // setting LED as output
    LED = 0;
    RELAY = 0;
}
In the main function we constantly check the switch press, if we detect the switch press by sensing logic high across RB0; we wait for some time and see whether the switch is still pressed or not, if the switch is still pressed then we will invert the RELAY and LED pin’s state.
void main(void) {
    system_init(); // System getting ready
    while(1){
        if(SW == 1){ // switch is pressed
            __delay_ms(50); // debounce delay
            if (SW == 1){ // switch is still pressed
                LED = !LED; // inverting the pin status
                RELAY = !RELAY;
            }
        }
    }
    return;
}
Complete code and Demo Video for this Relay interfacing is given below.
/*
 * File: main.c
 * Author: Sourav Gupta
 * By:- circuitdigest.com
 * Created on May 30
 */
#include <xc.h>
/*
Hardware related definition
*/
#define _XTAL_FREQ 20000000 // Crystal frequency (20 MHz), used in delay
#define SW PORTBbits.RB0
#define RELAY PORTBbits.RB1
#define LED PORTBbits.RB2
/*
Other Specific definition
*/
void system_init(void);
void main(void) {
system_init(); // System getting ready
while(1){
if(SW == 1){ //switch is pressed
__delay_ms(50); // debounce delay
if (SW == 1){ // switch is still pressed
LED = !LED; // inverting the pin status.
RELAY = !RELAY;
}
}
}
return;
}
/*
This Function is for system initialisations.
*/
void system_init(void){
TRISBbits.TRISB0 = 1; // setting SW as input
TRISBbits.TRISB1 = 0; // setting relay pin as output
TRISBbits.TRISB2 = 0; // setting LED as output
LED = 0;
RELAY = 0;
}
Read More Detail:Relay Interfacing with PIC Microcontroller
Access Control Lists (ACLs)¶
Normally to create, read and modify containers and objects, you must have the appropriate roles on the project associated with the account, i.e., you must be the owner of the account. However, an owner can grant access to other users by using an Access Control List (ACL).
There are two types of ACLs:
- Container ACLs. These are specified on a container and apply to that container only and the objects in the container.
- Account ACLs. These are specified at the account level and apply to all containers and objects in the account.
Container ACLs¶
Container ACLs are stored in the X-Container-Write and X-Container-Read metadata. The scope of the ACL is limited to the container where the metadata is set and the objects in the container. In addition:
- X-Container-Write grants the ability to perform PUT, POST and DELETE operations on objects within a container. It does not grant the ability to perform POST or DELETE operations on the container itself. Some ACL elements also grant the ability to perform HEAD or GET operations on the container.
- X-Container-Read grants the ability to perform GET and HEAD operations on objects within a container. Some of the ACL elements also grant the ability to perform HEAD or GET operations on the container itself. However, a container ACL does not allow access to privileged metadata (such as X-Container-Sync-Key).
Container ACLs use the “V1” ACL syntax which is a comma separated string of elements as shown in the following example:
.r:*,.rlistings,7ec59e87c6584c348b563254aae4c221:*
Spaces may occur between elements as shown in the following example:
.r : *, .rlistings, 7ec59e87c6584c348b563254aae4c221:*
However, these spaces are removed from the value stored in the X-Container-Write and X-Container-Read metadata. In addition, the .r: string can be written as .referrer:, but is stored as .r:.
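The normalization described above (spaces stripped, .referrer: stored as .r:) can be sketched in a few lines of Python. This is only an illustration; the real parsing and cleaning is done by swift.common.middleware.acl:

```python
# Illustrative sketch of the V1 ACL normalization described above:
# spaces between elements are dropped and ".referrer:" is stored as ".r:".
# (The real implementation lives in swift.common.middleware.acl.)
def normalize_v1_acl(acl: str) -> str:
    elements = []
    for element in acl.split(','):
        element = element.replace(' ', '')  # strip spaces around elements and ':'
        if element.startswith('.referrer:'):
            element = '.r:' + element[len('.referrer:'):]
        if element:
            elements.append(element)
    return ','.join(elements)

print(normalize_v1_acl('.r : *, .rlistings, 7ec59e87c6584c348b563254aae4c221:*'))
# .r:*,.rlistings,7ec59e87c6584c348b563254aae4c221:*
```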
While all auth systems use the same syntax, the meaning of some elements is different because of the different concepts used by different auth systems as explained in the following sections:
Common ACL Elements¶
The following table describes elements of an ACL that are supported by both Keystone auth and TempAuth. These elements should only be used with X-Container-Read (with the exception of .rlistings; an error will occur if used with X-Container-Write):
Keystone Auth ACL Elements¶
The following table describes elements of an ACL that are supported only by Keystone auth. Keystone auth also supports the elements described in Common ACL Elements.
A token must be included in the request for any of these ACL elements to take effect.
Note
Keystone project (tenant) or user names (i.e., <project-name>:<user-name>) must no longer be used, because with the introduction of domains in Keystone, names are not globally unique. You should use user and project IDs instead.
For backwards compatibility, ACLs using names will be granted by
keystoneauth when it can be established that
the grantee project, the grantee user and the project being
accessed are either not yet in a domain (e.g. the
X-Auth-Token has
been obtained via the Keystone V2 API) or are all in the default domain
to which legacy accounts would have been migrated.
TempAuth ACL Elements¶
The following table describes elements of an ACL that are supported only by TempAuth. TempAuth auth also supports the elements described in Common ACL Elements.
Container ACL Examples¶
Container ACLs may be set by including X-Container-Write and/or X-Container-Read headers with a PUT or a POST request to the container URL. The following examples use the swift command line client, which supports these headers being set via its --write-acl and --read-acl options.
Example: Public Container¶
The following allows anybody to list objects in the
www container and
download objects. The users do not need to include a token in
their request. This ACL is commonly referred to as making the
container “public”. It is useful when used with StaticWeb:
swift post www --read-acl ".r:*,.rlistings"
Example: Sharing a Container with Project Members¶
The following allows any member of the
77b8f82565f14814bece56e50c4c240f
project to upload and download objects or to list the contents
of the
www container. A token scoped to the
77b8f82565f14814bece56e50c4c240f
project must be included in the request:
swift post www --read-acl "77b8f82565f14814bece56e50c4c240f:*" \ --write-acl "77b8f82565f14814bece56e50c4c240f:*"
Example: Allowing a Referrer Domain to Download Objects¶
The following allows any request from
the
example.com domain to access an object in the container:
swift post www --read-acl ".r:.example.com"
However, the request from the user must contain the appropriate Referer header as shown in this example request:
curl -i $publicURL/www/document --head -H "Referer:"
Note
The Referer header is included in requests by many browsers. However, since it is easy to create a request with any desired value in the Referer header, the referrer ACL has very weak security.
Account ACLs¶
Note
Account ACLs are not currently supported by Keystone auth
The
X-Account-Access-Control header is used to specify
account-level ACLs in a format specific to the auth system.
These headers are visible and settable only by account owners (those for whom
swift_owner is true).
Behavior of account ACLs is auth-system-dependent. In the case of TempAuth,
if an authenticated user has membership in a group which is listed in the
ACL, then the user is allowed the access level of that ACL.
Account ACLs use the “V2” ACL syntax, which is a JSON dictionary with keys
named “admin”, “read-write”, and “read-only”. (Note the case sensitivity.)
An example value for the X-Account-Access-Control header looks like this, where a, b and c are user names:
{"admin":["a","b"],"read-only":["c"]}
Keys may be absent (as shown in above example).
The recommended way to generate ACL strings is as follows:
from swift.common.middleware.acl import format_acl

acl_data = {
    'admin': ['alice'],
    'read-write': ['bob', 'carol'],
}
acl_string = format_acl(version=2, acl_dict=acl_data)
Using the format_acl() method will ensure that the JSON is encoded as ASCII (using e.g. '\u1234' for Unicode). While it's permissible to manually send curl commands containing X-Account-Access-Control headers, you should exercise caution when doing so, due to the potential for human error.
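For comparison, the shape of the JSON that format_acl() emits can be reproduced with only the standard library. The helper itself remains the recommended route, since it also takes care of ASCII-encoding non-ASCII names:

```python
# Stdlib-only sketch producing a V2 account ACL header value.
# format_acl() from swift.common.middleware.acl is still the recommended
# approach; this just shows the shape of the JSON it emits.
import json

acl_data = {'admin': ['alice'], 'read-write': ['bob', 'carol']}
acl_string = json.dumps(acl_data, ensure_ascii=True, separators=(',', ':'))
print(acl_string)
# {"admin":["alice"],"read-write":["bob","carol"]}
```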
Within the JSON dictionary stored in
X-Account-Access-Control, the keys
have the following meanings:
For more details, see
swift.common.middleware.tempauth. For details
on the ACL format, see
swift.common.middleware.acl. | http://docs.openstack.org/developer/swift/overview_acl.html | CC-MAIN-2017-04 | refinedweb | 1,077 | 53.41 |
Introduction
The master page created by Visual Studio includes two ContentPlaceHolder controls: one in the body for the page's main content, and one in the <head> section that enables pages to add custom content to the <head> section.
<head> section. (Of course, these two ContentPlaceHolders can be modified or removed, and additional ContentPlaceHolder may be added to the master page. Our master page,
Site.master, currently has four ContentPlaceHolder controls.)
The HTML
<head> element serves as a repository for information about the web page document that is not part of the document itself. This includes information such as the web page's title, meta-information used by search engines or internal crawlers, and links to external resources, such as RSS feeds, JavaScript, and CSS files. Some of this information may be pertinent to all pages in the website. For example, you might want to globally import the same CSS rules and JavaScript files for every ASP.NET page. However, there are portions of the
<head> element that are page-specific. The page title is a prime example.
In this tutorial we examine how to define global and page-specific
<head> section markup in the master page and in its content pages.
Examining the Master Page's
<head> Section
The default master page file created by Visual Studio 2008 contains the following markup in its
<head> section:
<head runat="server"> <title>Untitled Page</title> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> </head>
Notice that the
<head> element contains a
runat="server" attribute, which indicates that it is a server control (rather than static HTML). All ASP.NET pages derive from the
Page class, which is located in the
System.Web.UI namespace. This class contains a
Header property that provides access to the page's
<head> region. Using the
Header property we can set an ASP.NET page's title or add additional markup to the rendered
<head> section. It is possible, then, to customize a content page's
<head> element by writing a bit of code in the page's
Page_Load event handler. We examine how to programmatically set the page's title in Step 1.
The markup shown in the
<head> element above also includes a ContentPlaceHolder control named
head. This ContentPlaceHolder control is not necessary, as content pages can add custom content to the
page's current
<head> markup is shown below.
<head runat="server"> <title>Untitled Page</title> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> <link href="Styles.css" rel="stylesheet" type="text/css" /> </head>
Step 1: Setting a Content Page's Title
The web page's title is specified via the
<title> element. It is important to set each page's title to an appropriate value. When visiting a page, its title is displayed in the browser's Title bar. Additionally, when bookmarking a page, browsers use the page's title as the suggested name for the bookmark. Also, many search engines show the page's title when displaying search results.
Note: By default, Visual Studio sets the
<title> element in the master page to "Untitled Page". Similarly, new ASP.NET pages have their
<title> set to "Untitled Page", too. Because it can be easy to forget to set the page's title to an appropriate value, there are many pages on the Internet with the title "Untitled Page". Searching Google for web pages with this title returns roughly 2,460,000 results. Even Microsoft is susceptible to publishing web pages with the title "Untitled Page". At the time of this writing, a Google search reported 236 such web pages in the Microsoft.com domain.
A page's title can be set declaratively through the Title attribute of the <%@ Page %> directive. This property can be set by directly modifying the <%@ Page %> directive or through the Properties window. Let's look at both approaches.
From the Source view, locate the
<%@ Page %> directive, which is at the top of the page's declarative markup. The
<%@ Page %> directive for
Default.aspx follows:
<%@ Page Language="VB" MasterPageFile="~/Site.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" Title="Untitled Page" %>
The
<%@ Page %> directive specifies page-specific attributes used by the ASP.NET engine when parsing and compiling the page. This includes its master page file, the location of its code file, and its title, among other information.
By default, when creating a new content page, Visual Studio sets the Title attribute to "Untitled Page". The page also exposes a Title property whose value is reflected in the rendered <title> element. This property is accessible from an ASP.NET page's code-behind class via Page.Header.Title; this same property can also be accessed via Page.Title.
To practice setting the page's title programmatically, navigate to the About.aspx page's code-behind class and add the following code:

Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Page.Title = String.Format("Master Page Tutorials :: About :: {0:d}", DateTime.Now)
End Sub
Figure 3 shows the browser's title bar when visiting the
About.aspx page.
Figure 03: The Page's Title is Programmatically Set and Includes the Current Date
Step 2: Automatically Assigning a Page Title
As we saw in Step 1, a page's title can be set declaratively or programmatically. If you forget to explicitly change the title to something more descriptive, however, your page will have the default title, "Untitled Page". Ideally, the page's title would be set automatically for us in the event that we don't explicitly specify its value. For example, if at runtime the page's title is "Untitled Page", we might want to have the title automatically updated to be the same as the ASP.NET page's filename. The good news is that with a little bit of upfront work it is possible to have the title automatically assigned.
All ASP.NET web pages derive from the
Page class in the System.Web.UI namespace. The
Page class defines the minimal functionality needed by an ASP.NET page and exposes essential properties like
IsPostBack,
IsValid,
Request, and
Response, among many others. Oftentimes, every page in a web application requires additional features or functionality. A common way of providing this is to create a custom base page class. A custom base page class is a class you create that derives from the
Page class and includes additional functionality. Once this base class has been created, you can have your ASP.NET pages derive from it (rather than the
Page class), thereby offering the extended functionality to your ASP.NET pages.
In this step we create a base page that automatically sets the page's title to the ASP.NET page's filename if the title has not otherwise been explicitly set. Step 3 looks at setting the page's title based on the site map.
Note: A thorough examination of creating and using custom base page classes is beyond the scope of this tutorial series. For more information, read Using a Custom Base Class for Your ASP.NET Pages' Code-Behind Classes.
Creating the BasePage Class
Start by adding the custom base page class, BasePage.vb, to the App_Code folder. Figure 4 shows the Solution Explorer after the App_Code folder and BasePage.vb have been added.
The BasePage class derives from System.Web.UI.Page and overrides the OnLoadComplete method:

Public Class BasePage
    Inherits System.Web.UI.Page

    Protected Overrides Sub OnLoadComplete(ByVal e As EventArgs)
        ' Set the page's title, if necessary
        If String.IsNullOrEmpty(Page.Title) OrElse Page.Title = "Untitled Page" Then
            ' Determine the filename for this page
            Dim fileName As String = System.IO.Path.GetFileNameWithoutExtension(Request.PhysicalPath)
            Page.Title = fileName
        End If

        MyBase.OnLoadComplete(e)
    End Sub
End Class
The
OnLoadComplete method starts by determining if the
Title property has not yet been explicitly set. If the
Title property is
Nothing, an empty string, or has the value "Untitled Page", it is assigned to the filename of the requested ASP.NET page. The physical path to the requested ASP.NET page -
C:\MySites\Tutorial03\Login.aspx, for example - is accessible via the
Request.PhysicalPath property. The
Path.GetFileNameWithoutExtension method is used to pull out just the filename portion, and this filename is then assigned to the
Page.Title property.
Note: I invite you to enhance this logic to improve the format of the title. For example, if the page's filename is Company-Products.aspx, the above code will produce the title "Company-Products", but ideally the dash would be replaced with a space, as in "Company Products". Also, consider adding a space whenever there's a case change. That is, consider adding code that transforms the filename OurBusinessHours.aspx into the title "Our Business Hours".
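The clean-up suggested in the note could look like the following sketch (shown in Python for brevity rather than the tutorial's VB; the function name is ours):

```python
# Sketch of the suggested title clean-up: replace dashes with spaces and
# insert a space at each lower-to-upper case change in the filename.
import os
import re

def title_from_filename(path: str) -> str:
    name = os.path.splitext(os.path.basename(path))[0]  # drop folder and extension
    name = name.replace('-', ' ')                       # "Company-Products" -> "Company Products"
    # insert a space between a lowercase letter and the uppercase letter after it
    return re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', name)

print(title_from_filename('Company-Products.aspx'))   # Company Products
print(title_from_filename('OurBusinessHours.aspx'))   # Our Business Hours
```

The same logic translates directly to VB with String.Replace and Regex.Replace.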
To have your ASP.NET pages use this custom base page class, update each page's code-behind class declaration from:

Partial Class About
    Inherits System.Web.UI.Page
    ...
End Class

To:

Partial Class About
    Inherits BasePage
    ...
End Class

Figure 5 shows the MultipleContentPlaceHolders.aspx page when viewed through a browser. Note that the title is precisely the page's filename (less the extension), "MultipleContentPlaceHolders".
Figure 05: If a Title is Not Explicitly Specified, the Page's Filename is Automatically Used (Click to view full-size image)
Step 3: Basing the Page Title on the Site Map
ASP.NET offers a robust site map framework that enables page developers to define a hierarchical site map in an external resource (such as an XML file or database table) along with Web controls for displaying information about the site map (such as the SiteMapPath, Menu, and TreeView controls).
The site map structure can also be accessed programmatically from an ASP.NET page's code-behind class. In this manner we can automatically set a page's title to the title of its corresponding node in the site map. Let's enhance the
BasePage class created in Step 2 so that it offers this functionality. But first we need to create a site map for our site.
Note: This tutorial assumes the reader already is familiar with ASP.NET's site map features. For more information on using the site map, consult my multi-part article series, Examining ASP.NET's Site Navigation.
Creating the Site Map
ASP.NET's default site map provider uses an XML file as its site map store. Let's use this provider for defining our site map.
Start by creating a site map file in the website's root folder named
Web.sitemap. To accomplish this, right-click on the website name in Solution Explorer, choose Add New Item, and select the Site Map template. Ensure that the file is named
Web.sitemap, and define a node for the Home page with a child node for each lesson. Now that we have a site map defined, let's update the master page to include navigation Web controls. Specifically, let's add a ListView control to the left column in the Lessons section that renders an unordered list with a list item for each node defined in the site map.
Note: The ListView control is new to ASP.NET version 3.5. If you are using a prior version of ASP.NET, use the Repeater control instead. For more information on the ListView control, see Using ASP.NET 3.5's ListView and DataPager Controls.
Start by removing the existing unordered list markup from the Lessons section. Next, drag a ListView control from the Toolbox and drop it beneath the Lessons heading. The ListView is located in the Data section of the Toolbox, alongside the other view controls: the GridView, DetailsView, and FormView. Set the ListView's
ID property to
LessonsList.
From the Data Source Configuration Wizard choose to bind the ListView to a new SiteMapDataSource control named
LessonsDataSource. The SiteMapDataSource control returns the hierarchical structure from the site map system.
Figure 08: Bind a SiteMapDataSource Control to the LessonsList ListView
Next, configure the ListView's templates to render each item returned by the SiteMapDataSource as a list item (<li>) that contains a link to the particular lesson.
<li>) that contains a link to the particular lesson.
After configuring the ListView's templates, visit the website. As Figure 9 shows, the Lessons section contains a single bulleted item, Home. Where are the About and Using Multiple ContentPlaceHolder Controls lessons? The SiteMapDataSource is designed to return a hierarchical set of data, but the ListView control can only display a single level of the hierarchy. Consequently, only the first level of site map nodes returned by the SiteMapDataSource is displayed.
Figure 09: The Lessons Section Contains a Single List Item (Click to view full-size image)
To display multiple levels we could nest multiple ListViews within the
ItemTemplate. This technique was examined in the Master Pages and Site Navigation tutorial of my Working with Data tutorial series. However, for this tutorial series our site map will contain just a two levels: Home (the top level); and each lesson as a child of Home. Rather than crafting a nested ListView, we can instead instruct the SiteMapDataSource to not return the starting node by setting its
ShowStartingNode property to
False. The net effect is that the SiteMapDataSource starts by returning the second tier of site map nodes.
With this change, the ListView displays bullet items for the About and Using Multiple ContentPlaceHolder Controls lessons, but omits a bullet item for Home. To remedy this, we can explicitly add a bullet item for Home in the ListView's LayoutTemplate.
Next, let's update the BasePage class to use the title specified in the site map. As we did in Step 2, we only want to use the site map node's title if the page's title has not been explicitly set by the page developer. If the page being requested does not have an explicitly set page title and is not found in the site map, then we'll fall back to using the requested page's filename (less the extension), as we did in Step 2. Figure 11 illustrates this decision process.
Figure 11: In the Absence of an Explicitly Set Page Title, the Corresponding Site Map Node's Title is Used
Update the
BasePage class's
OnLoadComplete method to include the following code:
Protected Overrides Sub OnLoadComplete(ByVal e As EventArgs)
    ' Set the page's title, if necessary
    If String.IsNullOrEmpty(Page.Title) OrElse Page.Title = "Untitled Page" Then
        ' Is this page defined in the site map?
        Dim newTitle As String = Nothing
        Dim current As SiteMapNode = SiteMap.CurrentNode

        If current IsNot Nothing Then
            newTitle = current.Title
        Else
            ' Determine the filename for this page
            newTitle = System.IO.Path.GetFileNameWithoutExtension(Request.PhysicalPath)
        End If

        Page.Title = newTitle
    End If

    MyBase.OnLoadComplete(e)
End Sub
As before, the OnLoadComplete method starts by determining whether the page's title has been explicitly set. If Page.Title is Nothing, an empty string, or "Untitled Page", the title of the corresponding site map node is assigned; if the requested page is not found in the site map, the filename (less the extension) is used instead, as in Step 2.
Step 4: Adding Page-Specific <head> Markup
The easiest way to add page-specific content to the
<head> section is by creating a ContentPlaceHolder control in the master page. We already have such a ContentPlaceHolder (named
head). Therefore, to add custom
<head> markup, create a corresponding Content control in the page and place the markup there.
To illustrate adding custom
<head> markup to a page, let's include a
<meta> description element to our current set of content pages. The
<meta> description element provides a brief description about the web page; most search engines incorporate this information in some form when displaying search results.
A
<meta> description element has the following form:
<meta name="description" content="description of the web page" />
To add this markup to a content page, add the above text to the Content control that maps to the master page's
head ContentPlaceHolder. For example, to define a
<meta> description element for
Default.aspx, add the following markup:
<asp:Content> control that maps to the head ContentPlaceHolder. Then visit Default.aspx through a browser. After the page has been loaded, view the source and note that the <head> section includes the markup specified in the Content control.
<head> section includes the markup specified in the Content control.
Take a moment to add
<meta> description elements to
About.aspx,
MultipleContentPlaceHolders.aspx, and
Login.aspx.
Programmatically Adding Markup to the <head> Region
The Page class's Header property returns the page's HtmlHead instance, so markup can also be added to the <head> region from code.
Being able to programmatically add content to the
<head> region is useful when the content to add is dynamic. Perhaps it's based on the user visiting the page; maybe it's being pulled from a database. Regardless of the reason, you can add content to the
HtmlHead by adding controls to its
Controls collection like so:
' Programmatically add a <meta> element to the Header
Dim keywords As New HtmlMeta()
keywords.Name = "keywords"
keywords.Content = "master pages, ASP.NET" ' (example values)
Page.Header.Controls.Add(keywords)

Banerjee. Interested in reviewing my upcoming MSDN articles? If so, drop me a line at mitchell@4GuysFromRolla.com.
Request For Commits – Episode #1
Open Source, Then and Now (Part 1)
with Karl Fogel, author of Producing Open Source Software
Nadia Eghbal and Mikeal Rogers kick off Season 1 of Request For Commits with a two-part conversation with Karl Fogel — a software developer who has been active in open source since its inception.
Karl served on the board of the Open Source Initiative, which coined the term “open source”, and helped write Subversion, a popular version control system that predates Git. Karl also wrote a popular book on managing open source projects called Producing Open Source Software. He’s currently a partner at Open Tech Strategies, a firm that helps major organizations use open source to achieve their goals.
- Read Karl Fogel’s book — Producing Open Source Software
- Listen to part 2 with Karl Fogel as we continue this conversation.
Transcript
I’m Nadia Eghbal.
And I’m Mikeal Rogers.
On today’s show, Mikeal and I talked with Karl Fogel, author of Producing Open Source Software. Karl served on the board of the Open Source Initiative, which coined the term ‘open source’ and helped write Subversion. He’s currently a partner at Open Tech Strategies, helping major organizations use open source to achieve their goals.
Our focus on today’s episode with Karl was about what has changed in open source since he first published his book ten years ago. We talked about the influence of Git and GitHub, and how they’ve changed both development workflows and our culture.
We also talked about changes in the wider perception of open source, whether open source has truly won, and the challenges that still remain.
So back in 2006 I started working at the Open Source Applications foundation on the Chandler Project, and I remember we had to kind of put together a governance policy and how do we manage an open source project, how do we do it openly, and basically your book kind of got slapped on everybody’s desk. [laughter] The Producing Open Source Software first edition, and it was like “This is how you run open source projects.”
Wow, that’s really nice to hear, thank you.
And it was… Especially at that time it was an amazing guide, and I know from talking with Jacob Kaplan-Moss that the Django project did something similar, as well. I’m very curious how you got to write that book and what preceded it. It’s produced by O’Reilly, right?
Yes.
I’m curious why O’Reilly wanted to do something… It’s very deep and very nerdy, so…
Yeah, actually I wanna take a quick second to give a shout out to O’Reilly because… I mean, that was never a book that was gonna be a bestseller, and they sort of knew that from the beginning, and they not only decided to produce it anyway, they gave me a very good editor, Andy Oram, who made a lot of contributions to the book in terms of shaping it and giving good feedback. And they let me publish it under a free license, which to a publisher, that’s a pretty big move, and it’s not something that they do with all their books. So I really appreciated the support I got from them.
So the answer to your main question there I’m afraid is pure luck. I really think that in the early 2000s, 2005-2006, the time was ripe for some kind of long-form guide to the social and community management aspects of open source to come out, and my book just happened to come out. If someone else had written a long-form guide, then… You know, it’s like in the early days of physics - if you just happen to be the first person to think of calculus, you’ll get all this credit; but there were probably ten people who thought of it, it’s just that someone published it first.
So yeah, I just got really lucky with the timing. And the way I was motivated to write it was that O’Reilly had contacted me about doing a Subversion book… I was coming off five or six years as a founding developer in the Subversion project and it had been my full-time job, and I’d gone from being mostly a programmer and sort of project technical - not necessarily technical lead, but technical arbiter or technical cheerleader in some sense - to more and more community manager. I mean, I was still doing coding, but a lot of my time was spent on just organizing and coordinating the work of others and interjecting what I felt were the appropriate noises in certain contentious discussion threads and things like that.
[00:04:09.07] So when it came time to write a Subversion book, I had already written a book, I knew folks at O’Reilly, and they said “Would you like to be one of the authors?” There were a couple other Subversion developers that I worked with who were also interested in writing, and we had all agreed that we would co-author it.
Then as I started to write, I really let down my co-authors. I said, “Hey, folks, I’m really sorry. I don’t wanna write another technical manual. I’ve already done that once. You folks go do it, it’s gonna be great.” And I wrote the introduction and they wrote a wonderful book that became one of O’Reilly’s better sellers and is still quite popular.
So I thought, “Well, what was it that I wanted to write if that wasn’t the book?” and I realized the book I wanted to write was not about Subversion the software, it was about the running of the Subversion project, and about open source projects in general - Subversion wasn’t the only one that I was involved in. So I went back to O’Reilly and I said very meekly, “Could I write this other book instead? What do you think of that?” and they said yes. So I sort of backed into it… I was forced into the realization that I wanted to write this book through trying to write another book and failing.
Was that a popular view back then? Like, when you said that you wanted to write this non-technical, more management-focused book around open source, were people like “Why?”
Let me cast back my memory… No - but then again, the people that I talked with are a very biased sample, right? Most people were encouraging, and if they were mystified as to why I wanted to write this, they hid it very well and were nothing but encouraging. Then it took a little bit longer to write than I thought, and people were asking “How’s it going?”, and I’d always give the same answer: “Never ask an author how their book is going. Thank you.” [laughter] No one ever listened to that, they would just ask the next time. But eventually it got done.
I think there was an awareness of it, though, among people involved in open source. For example, the role of community manager was already a title you started to see people having. You started to see a phenomenon where the coordinating people, the people doing that community management in projects, were no longer also the most technically sharp people. I was definitely not the best programmer in the Subversion project; I could think of a lot of names - I’ve probably even forgotten some names of people who I just think are better coders than I am, who were working on that project.
And that was true across a lot of open source projects. I could see that the people who were doing technical and community work together were not the Linus Torvalds model - and Linus Torvalds isn’t by any means a typical example… The Linux kernel in general is not a typical example of how open source projects have ever operated. It’s been its own kind of weird, unique thing for a long time. But one thing you can say about it is that the leader of the project is also one of the best programmers in the project. Linus is a very technically sharp person. But that was not the case in a lot of open source projects, and that to me seemed like a clue that, “Okay, something’s happening here where open source is maturing to the point where different sets of skills are needed”, and you’ve got these crossover people who are often founding members of a project and active in coding and other technical tasks, but their main focus, their main energy is going to the social health and the overall community health of the project.
[00:07:45.19] I wasn’t the only person sensing that. A lot of people seemed to already understand the topic of the book before I explained it to them.
For that first book, I mean, you came up through the ’90s open source scene and were clearly doing a lot of community work on the Subversion project - did you write it mostly just from your own experiences and memory, or did you go through a phase of research and reaching out to other projects?
That’s a really good question. Yeah, I researched other projects. I did rely a lot on my own experiences, which were somewhat broad; I had worked on a lot of projects by that point. But I was worried that I would be biased, particularly towards backend systems projects, because I was a C programmer; I didn’t do a huge amount of graphical user interface programming or stuff like that. Web programming was kind of new then, but I still hadn’t done a lot of it. So I deliberately sought out some other projects to talk to, and people were very generous with their time. I think I listed them all, either in the acknowledgments or in the running text or a footnote. So not all of those were projects that I worked in; some were just places where people were willing to be informants.
Interesting. You mentioned that people were starting to come around and you were starting to see community manager as a title, but I do feel like the book addressed something and reset people’s expectations about how open source projects run. It did bring a lot of this community stuff - the fact that not everything is purely technical - to the forefront. If there was one presumption that projects had at the time that the book was meant to address, is there one that you can point at? Or any kind of general stories that you might have heard about shifts in people’s… What I really wanna get at is, people’s conception of open source had been this pure meritocracy, pure technical side of things, right? Not a lot had been done in a formal way to address the role of people and people management, and processes and barriers to entry, until your book, as far as I know.
I think I get the question you’re asking, and it’s a good one. I’ve never really thought of the book as addressing a sort of as yet unacknowledged need, but I guess in a way it was. The observation I had at the time in Subversion - and then as I started to talk to people in other projects I realized it was just as true for them as it was for Subversion - was that there’s no such thing as a structureless meritocracy, and there’s no such thing as a structureless community. We’ve all heard of the famous essay The Tyranny Of Structurelessness, in which the author points out that if you think you have a structureless organization, what you really have is an organization where the rules are not clear and people with certain kinds of personalities end up dominating by sometimes vicious or deceptive means. And that has certainly been the case in some open source projects. I don’t wanna name names, but we could probably all think of some.
What I saw on Subversion was that managing a bunch of people who were not all under one management hierarchy - they were coming from different companies, and some of them were true volunteers in the sense that there was no way in which they were being paid for their time, or only very indirectly, but a lot of them were being paid for it and they had their own priorities - to make that scene work and to have the project make identifiable progress, you had to broker compromise; you had to convince people like, “Okay, this feature that you want needs some more design work, and the community has to accept it. That means it’s not gonna be done in time for this upcoming release, but we don’t wanna delay the release, because there’s another thing that this other programmer or company wants that needs to be in that release, and they’re depending on it. And by the way, if you get them on your side by cooperating now, they’ll be much more likely to review your changes and design when your stuff is ready.” Things like that. Making sure that the right people meet or talk at in-person events. Occasional backchannel communications to ask someone to be a little bit less harsh toward a new developer who is showing promise, or is perhaps representing an important organization that is going to bring more development energy to the project - but we need to not offend the first person who comes in, who is maybe not leading with their best code; it sometimes happens.
[00:12:17.09] There was all sorts of stuff that had to be done that wasn’t necessarily visible from just watching the public mailing list. So the book was basically - I realize I’m giving a long answer; you should feel free to edit this down, by the way… Now I’m trying to be a little less verbose…
No, this is perfect.
Okay, I’m glad. [laughs] I guess the thing the book was meant to address was that you get a lot of programmers who land in open source somehow and find themselves running projects or occupying positions of influence - and because no one has ever said it, because it’s not visible from the public activity on the project - or not entirely visible - and because there is a predisposition among programmers to be less aware of social things… Statistically speaking, programmers I think are somewhat less socially adept than most people. Obviously, there are exceptions to that, but I think it’s a broad categorization that is statistically true. So for all of those reasons, I wanted there to be a document that said, “Hey, you need to start thinking about this as a system. You need to start thinking about the project in the same way you think about the codebase. Parts need to work together, and you need to pay attention to how people feel and to their long-term interests, and you’ve gotta put yourself in their shoes. Here’s a rough guide to doing that.”
That’s what I was thinking when I was writing the book, and I never really articulated that until you asked the question, but I’m pretty sure that’s more or less what I was thinking.
Yeah, I mean, we’re still struggling with that today. [laughs] We’re talking in the past tense because the book came out ten years ago, but I’m still struggling to get people to recognize that today…
Well, let’s go right to the controversial stuff. The Linux kernel project is famous for kind of having a toxic atmosphere, right? And Linus has basically said that he equates the thing that most of us call toxicity with meritocracy. In other words, the kinds of people who write the kinds of code that he wants to end up in the Linux kernel are the kinds of people who flourish in the atmosphere he has set up.
Maybe that’s actually true, but I just don’t think the Linux kernel project has run the experiment of trying to… Forking the project and running a nice version, where everyone is welcomed warmly and not insulted personally by a charismatic leader, in which they can see whether that theory is actually true.
Right. I was actually not even thinking about projects that are more than ten years old, but even projects that start today struggle with this. Just acknowledging that soft skills matter and that somebody needs to pick up this community work.
I think it’s interesting that you said that you wrote the book in 2005, around this time when you felt like people were starting to notice and care about the need for skills beyond coding, but I feel like that’s almost what people would say about right now too, so I wonder if anything’s even changed in ten years or not.
Well, just imagine how much worse things would be if we hadn’t all been through that. [laughter]
Yeah.
You never have an alternate universe in which to run the experiment, unfortunately. But I think it will always be true, because the startup costs in open source are so low - although that’s changing a little bit, and we can talk about that later - that the people who start projects just land in it coming from a technical interest. They’re not starting out by thinking of soft skills, so the projects are always launched in a way that’s sort of biased towards a certain kind of culture, and then they end up having to correct toward a more socially functioning culture, even though that imposes a small amount of overhead on the technical running of the project.
[00:16:17.13] And if it’s a useful project and people are like “Well, I’m gonna use it”… Or even if it’s not useful, but it’s just kind of legacy that’s still being used - it’s like, what incentive is there, really? I think it’s still very hard to tie together. In some cases you can tie the health of a project to its popularity, but sometimes it’s a popular project and it’s just not that kind of place.
Yeah… I can only offer anecdotal evidence there. One example is the LibreOffice project - it has really gone through a great deal of trouble to be welcoming to developers and to make their initial development process easier. Building the project is now way, way easier than it used to be; they’ve just really sunk a lot of time into making it easy to compile from source and into welcoming new developers. I think that’s having a good effect, but how do you know how popular or how successful the project would be without that? You just don’t.
You mentioned that you released it under a Creative Commons license, and I saw that you’ve actually kept it a little bit up to date and pushed small changes to it over time, but in 2013 you decided to do a full new edition of the book. What precipitated the need for an entire new edition, rather than just adjustments?
A few things. One, the adjustments that I had been doing in the years from 2006 roughly to 2013 - they weren’t that trivial. I mean, there were a lot of small-scale changes that went in. I think most sections of the book got touched, some of them pretty heavily, but I was never thinking of it as a full rewrite. And then it was partly my own feeling about certain things that were out of date, and partly feedback I was getting from other people. One thing everyone noticed - and I noticed too, because I also use Git for my coding work, although I use Subversion for non-coding version control - was that all the examples used Subversion, which was totally the right thing to do in 2005, because that was the thing that you stored your open source code in, but it just wasn’t by 2013; Git was the obvious answer, and frankly, even though the site itself is not open source, GitHub was clearly the thing to use. Most active open source code is just on GitHub, and if the book doesn’t acknowledge that fact, then it’s just not reflecting reality and it’s not giving people the smoothest entry into open source that they can have.
So one obvious thing was the revamping of all the examples to use Git instead of Subversion and to talk about GitHub. And also in general, the project hosting situation had changed. I’m sorry, I just don’t consider SourceForge a thing anymore. [laughter] So many ads, too much visual noise, not compelling enough functionality - and that’s despite the fact that the SourceForge platform itself finally went open source, as the Allura project, which is great; I’d love to be using it, but I’m afraid I just have a much better experience with GitHub and Git, so that’s what I use.
So the recommendations about how to host projects really needed to change to be oriented more around the Git-based universe and to at least acknowledge and recommend GitHub, while acknowledging that it itself is not open source… Although I hope that they see a grand strategic vision whereby opening up their actual platform makes sense some day; I think that the real secret sauce there is the dev ops, it’s not the code, so I hope they do that someday.
[00:19:53.06] The other thing that changed kind of in a big way was what I think of as the slow rise of business-to-business open source, which is… The old cliché was “Open source always starts when some individual programmer needs to scratch an itch”; she needs to analyze her log files better, so she writes a log analyzer, and then she realizes that there are other sysadmins who need to do the same, so they start collaborating, and now you’ve got an open source log analyzer, and she’s the de facto leader of this new open source project. Well, that did happen a lot, but now you have things like Android. You have businesses putting out open source codebases like TensorFlow. I don’t mean to pick on Google examples only, it’s just that those are the first things that come to mind, but Facebook also does this, Hewlett Packard does it… Lots of companies are releasing open source projects which are - I guess you could say it’s a corporation scratching a corporation’s itch, but it is not a case of an individual developer; it’s a management move, it is done for strategic reasons which they can articulate to themselves and sometimes also articulate to the world.
And I thought that the rise of that kind of project needed to be covered better, and that if the book could explain the trend to other managers in tech or tech-related companies, perhaps it would encourage some of them to join it.
And sorry, I’m realizing that there’s one more component to the answer - the other thing that changed was that I expected governments to be doing more open source by 2013 than they were, and I had at that point been very active in trying to help some government agencies launch technical products as open source, because they were gonna need that technology anyway. It’s taxpayer-funded, why not make it open source? And they were just really culturally not suited to it. There were just many, many things - about the way governments do technology development, the way they do procurement, the way they tend to be risk-averse to the exclusion of many other considerations - that really made open source an uphill struggle for them, and I wanted the book to talk a lot more about that, because I wanted it to be something that government decision-makers could circulate and use as a way to reassure themselves that they could do open source, that it could be successful, and that they didn’t have to look at it as a risky move.
So there were some new trends that I wanted to cover and there were some new goals that I had for the book, and they just required ground-up reorganization and revamp.
Wow, that’s great. We’re gonna take a short break and when we come back Karl’s gonna get into how GitHub has changed the open source landscape.
We’re back with Karl Fogel. Karl, in your mind, what have Git and GitHub changed about open source today? What are the biggest shifts that happened from the Subversion Apache days to now?
[00:23:44.29] Well, so I might have to ask you for help answering this, because I wonder if I was so comfortable with old tools that maybe I was blind to something that was difficult about them. I didn’t feel like GitHub changed the culture tremendously, except in the sense that Twitter changed the culture of the internet - which is to say, it gave everyone an agreed-on namespace. Right now Twitter is essentially that: “Hey, if you have an internet username, it’s whatever your username is on Twitter. That’s your handle now.” And in open source your GitHub handle, which for many people is the same as their Twitter handle, is your chief identifier. It’s not a completely unified namespace - there are plenty of projects that host in other places, and many developers contribute to projects that are hosted in places other than GitHub… But it is sort of a unified namespace.
If you have an open source project and you don’t have the project name somewhere on GitHub, someone else is sure to take it for their fork, right? So you’ve gotta get that real estate even if you’re not hosting there.
But I think the way GitHub wants to think about it is that they made it a lot easier for people to track sources, to make quick, lightweight, so-called ‘drive-by contributions’, and to maintain what used to be called vendor branches - that is to say, permanent non-hostile forks; internal forks, or sort of feature forks that are maintained in parallel with the project, where the upstream isn’t ever going to take the patches, but has otherwise no particular animosity toward the changes, or is even willing to make some adjustments so that the people who need to maintain that separate branch can do so.
So I think their goal was to make all that stuff easier, and also to make gazillions of dollars, which I’m happy to see they’re doing. And I think that it is part of GitHub’s self-identity - for the executive and upper management team, it’s part of their self-identity to think of themselves as supporting open source, that they are doing good for open source. And as I said, I always remember that the platform itself is not open source, but that aside, I think in many ways it’s true, they do a lot of things to support open source.
The moves that they made to give technical support and kind of a little nudge to projects to get real open source licenses in their repositories was a really helpful thing. Nowadays most active open source projects on GitHub do have a license file, and that’s partly because GitHub made a push to help that happen, and they’ve done a lot to support getting open source into government agencies and things like that. So I think they had sort of cultural motivations, as well as technical and financial motivations.
So has it changed the culture of open source? That’s the thing - I’m not really sure it was all that hard to contribute to an open source project before GitHub. Maybe that’s because my specialty was working on the tools that are the main part of the contribution workflow, the version control tools; I worked on CVS, which was the main open source version control system, and then on Subversion, which was for a while the main open source version control system. So if I wanted to make a drive-by contribution to some other project, of course I never had any problem doing it, because the version control tool was probably something I hacked on; it was just no trouble. But maybe you could tell me - was it actually harder?
Well, there’s a couple things you’re glancing over. Just a couple. And I suffer from the same problem, where you’ll jump through hoops without realizing that they’re hoops, because you’re just used to doing this kind of stuff… But the Twitter analogy works really well. So yes, there’s a shared namespace - and before that, people had email addresses, so it’s not like we lacked identity, but it did sort of unify those, so you know where to find anybody by a particular name, where to find a project by a particular name. But another thing that Twitter does is it has a set of norms around how you communicate and how you do things with DMs and @-replies and stuff like that, right?
[00:27:59.29] That’s a really good point, yeah.
Source control is certainly part of the contribution experience, but if GitHub was just Git, it wouldn’t be the hub… It wouldn’t be GitHub, right? There’s an extension of the language and the tools around collaboration that they also unified. In Subversion I can create a diff, but how I send that diff to you, how you communicate whether it will or won’t go in, how we might communicate about that review process - that is not a unified experience across projects in older open source, the way that it is on GitHub, right?
That’s true, and that’s a really good point. I mean, it was never hard to find out. Usually you mail the diff to the mailing list and people review it there, right? But you had to find out the address, you had to go read the project’s contribution documentation, and maybe that didn’t exist or was not easy to find… And you’re right, on GitHub it’s ‘Submit a pull request’. You know what to do - fork the repository, make your branch, make your change, turn it into a pull request against the upstream, and now it’s being tracked just like an issue, and by the way, the issue tracking is also integrated, so now you don’t have to go searching for the project’s issue tracker.
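[Editor’s aside: the fork-branch-pull-request mechanics described here can be sketched in plain Git commands. The sketch below is a minimal offline simulation - a local bare repository stands in for the upstream GitHub repo, and the branch name `fix-typo` and the `demo` identity are invented for illustration. On GitHub itself, the final step would be opening the pull request in the web UI rather than running `ls-remote`.]

```shell
set -e

# A local bare repo stands in for the upstream project on GitHub.
upstream="$(mktemp -d)/upstream.git"
git init -q --bare "$upstream"

# "Fork the repository" - here simulated as a plain clone.
work="$(mktemp -d)/fork"
git clone -q "$upstream" "$work"
cd "$work"

# "Make your branch, make your change."
git checkout -q -b fix-typo
echo "typo fixed" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Fix typo"

# Push the topic branch; on GitHub this is the point where you would
# "turn it into a pull request" against the upstream repository.
git push -q origin fix-typo

# The branch is now visible on the upstream side.
git ls-remote --heads origin fix-typo
```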
Yeah, I mean that workflow itself may not be more discoverable than sending a diff to a mailing list, but once you do it, it’s the same everywhere. I think that’s the bigger shift.
No, in fact I think it’s less discoverable, in the sense that the actual… I mean, I’ve trained a lot of people in using Git; I go to a wonderful organization… In fact, I’m gonna do a shout out for them - ChiHackNight.org, the Chicago Hack Night, on Tuesday nights here. There are a lot of newcomers there who haven’t used Git or GitHub before, or they’ve heard of it and tried it out. So I’ve had to walk people through this process of creating a PR, making their own fork of the repository, and people get so confused, like “Wait, I’m forking the repository… But what’s a branch? What’s a repository? Where does the PR live?” It’s conceptually not easy at all, but once they know it, they know it for every project on GitHub. And I think your point is very good - it’s not that it’s easier, it’s just that you only have to learn it once now.
I think there’s also something to be said for the friendliness of GitHub, even just visually, right? Twitter is again maybe a great analogy for that… It’s just prettier. People feel more comfortable on a more consumer-facing website than navigating around the corners of the internet.
Yeah, and that’s one thing that Subversion never had - a default visual web browser interface. There were several of them and your project had to pick, so the one you picked might be different from what some other project picked. With GitHub it’s like… There are a lot of people who think of Git as GitHub. They think that that web interface that you see on GitHub, that is part of Git. Obviously, in some technical sense that’s not correct, but in a larger sense, as far as their experience and their actual workflow is concerned, that’s a pretty accurate way of looking at it.
Yeah. I think also - and this is one that is really easy to glance over if you have any experience - because we’re in this new, publish-first mindset, newer people will publish stuff and put it up there, and they’ll actually get contributions. And it actually takes a much broader skillset to take contributions than it takes to push them to other projects, especially in traditional tooling, and GitHub also makes that incredibly easy. Their diff view is quite nice. They have the image diff…
Yeah, it really is.
… and all of these other features, right? So if you’re somebody that doesn’t know Git very well and you just got your project up, getting a contribution and then having to pull it down locally and look at the diff - that’s actually a whole big extension of that collaboration toolchain, and they make that so easy for first-time publishers that are now dealing with contributions coming in. It makes that workflow for them really easy, and it also just allows them to enjoy the process of getting contributions from people.
[00:32:00.07] Yeah, you’re right. I’ve never thought about that, but the process of becoming an open source maintainer is a lot easier on GitHub, and it’s so satisfying when you click that Merge Pull Request button and it just goes in. All you did was you clicked the green button and you’ve accepted a contribution from a perfect stranger on the internet. It’s so empowering, right? And that was not an easy process for new maintainers. In the old system you’d manually apply a diff and then commit it, and you’d have to write their name by hand in the log message, or something.
I think we’re also skipping over this entire generation of tools like Trac and JIRA, that in a lot of ways were much harder to use than sending a diff to a mailing list. [laughs]
Well yeah, I don’t know, because I got so used to them. I don’t think that they were a discrete generation; I think they were a continuum of tools - as soon as the web came around, people started making bug trackers. The original bug trackers worked by email submission; you would communicate with them by sending them email and getting responses back, and actually a lot of projects ran on that. Then people started making websites that would track bugs and you could just interact with the website directly, and then that was integrated with wiki functionality once the wiki was invented, and it just took a while for interfaces to sort out the conventions that actually worked. In a lot of ways, GitHub is the beneficiary of all the mistakes that everyone made before GitHub was created. If GitHub had been invented in the year 2000, they would have made the same series of mistakes themselves, but instead they could just look back and see what everyone else did and not make those mistakes. No knock on them, of course - that’s what they should do - but that’s why it worked out so well for them.
It’s like MySpace and Facebook, or any sort of second adopter.
Yeah, exactly.
Well, I do think there’s another element of this though, which is that those tools - and JIRA in particular is very good at this… It’s developed for maintainers and for teams with a big project and a big process. So it is customizable to a project’s process. That’s great for an individual project if it exists alone by itself, but in an open source ecosystem where every time I go to a JIRA there’s a different workflow, that’s incredibly daunting for individuals out there.
GitHub, because they were thinking about Git at the scale of people and contributions and forks and repos - you kind of take for granted that no, you can’t have super-customized workflows at the repository level.
Yeah… One of the things I kind of admire about GitHub’s management team is… I mean, if you look, GitHub has its own bug tracker. The platform isn’t open source, but you can file bugs against GitHub itself, and that tracker is public. If you look through there, there are thousands of these feature requests and modifications that people want - for each person requesting, that change would suit their needs, it would really make life easier for their project - and basically GitHub employees spend their lives saying no. You just look in those threads and they are polite and they explain why, but they have to turn down most of those requests, because they have to really think about the big picture and keep GitHub simple for the majority of open source projects, and they do a really good job at that.
[00:35:44.27] One of the things that I hope is happening - and I assume it is, and I would like to look into it more - is that GitLab and other open source alternatives - in GitLab’s case there is an open source edition and also a proprietary edition - should be using GitHub as kind of their free-of-charge research lab. All the things that are being requested on GitHub, all the decisions GitHub is making, and all the innovations that GitHub has to not experiment with because of their scale and all the existing customers that they can’t afford to tick off - that is a real opportunity for these other platforms to say, “Hey, GitHub made the wrong call there. We’re gonna do that and try it out,” because they have less to lose right now and a lot to gain, and I think that there could be a very productive interplay between the two that is in the long run good for open source. We’ll just have to see. But the fact that GitHub is making all these decisions in public is very useful, I think.
Yeah, I agree. So when you first got involved in open source in the ‘90s, it was sort of a counter-culture movement, and of all the things that you could say about open source today, I don’t think that you could say that it was a counter-culture movement.
Well, it’s funny… I think open source no longer thinks of itself as a counter-culture movement, especially in the United States. Well, actually let me back up a bit. So the term open source, at least for this usage of it, was coined in ’97, I think.
Right, right.
And open source was going on for many years prior to that. I had run an open source company and had been a full-time open source developer long before the term was coined, and people just used the term ‘free software’ and got confused, because there was just widespread confusion about whether that meant free as in there’s no charge. AOL used to ship CDs to everyone’s doorstep and that software was free, but it wasn’t free in the sense of free software, in the sense of freedom. So there was a lot of terminological confusion.
One of the things that I think is downplayed today, or there’s a little bit of historical amnesia about, is the degree to which the coining of the term ‘open source’ was not simply an attempt to separate a development methodology from the ideological drives of the Free Software Foundation and Richard Stallman, but was also just an attempt to resolve a real terminology problem that a lot of people - and especially people who ran open source businesses - were having, which was “What term do we use that won’t confuse our customers and the people who use our software?”
Cygnus Solutions, which later got bought by Red Hat, tried to go with the term ‘sourceware’ for a while. That was an interesting coinage, and in fact my company, Cyclic Software, which I was running with Jim Blandy at the time, we actually contacted them to see about using that term, and we got a non-committal response where it wasn’t quite clear if they were trying to trademark it or they intended for only Cygnus to use it.
That’s even weirder.
So that didn’t fly, right? That wasn’t gonna work… If only Cygnus can use it, that’s not gonna be the term that takes over.
That defeats the purpose, yeah.
Anyway, it didn’t have a good adjectival form, so it wasn’t… On its own merits, it had problems anyway. Eventually, when the term ‘open source’ came out, I just felt this tremendous relief. I was like, “Okay, no term is perfect. This term has some possible confusions and problems as well, but it is way easier for explanatory purposes than free software has been, so I’m just gonna start using it.” And I didn’t intend any ideological switch by that. I was still very pro free software, I ran only free software on my boxes, I only developed free software… But I just thought, “Okay, here’s a term that also means freedom that will confuse people less.”
[00:39:48.17] And then roughly a year after that coinage, when Stallman and the FSF (Free Software Foundation) realized that a lot of the people who were driving the term ‘open source’, who had coined the term - not necessarily the people who were using it, which was a lot of us - were also not on board with the ideology, they started to make this distinction between free software and open source, and say “Just because you support one doesn’t mean you support the other. They’re not the same thing” - even though it’s the exact same set of licenses and software… So what do we mean by ‘not the same thing’?
So that ideological split is kind of a post-facto creation. It was not actually something that was going on to the degree that it was later alleged to be going on.
And in your book, I’m trying to remember - it’s called Producing Open Source Software, but isn’t the subtitle also How To Run A Free Software Project?
Yeah, the book is a total diplomatic ‘split the difference’.
Yeah, you really went right down the middle there.
…How To Run A Successful Free Software Project. [laughs]
Yeah… You didn’t commit to either one.
Well, I didn’t want to, because to me it’s the same - like if there were two words for the vegetable broccoli, I might use both words, but it’s the same vegetable. Open source to me is one thing; I can call it ‘free software’, I can call it ‘broccoli’, I can call it ‘open source’, it is still the same thing. People have all sorts of different motivations for doing it. Someone’s motivations for participating in a project or launching a project are not part of the project’s license, and therefore they’re not part of the term for me.
That’s a good transition into our next section. We’re gonna take a short break and when we come back we’ll talk about the mainstream version of open source.
[00:43:57.03] We’re back with Karl Fogel. Karl, today a lot of people are saying that open source has basically won, in the sense that a lot of companies are using it, a lot of people are rallying around the term ‘open source’ who might not have traditionally been engaged with open source… Do you think that open source has won, or are there just sort of different battles still to be fought? Is that helpful vocabulary?
It has absolutely not won. I do not know why people think that. Where can you walk into a store and buy a mobile phone that’s running a truly open source operating system? I mean yeah, the core of Android is open source, or is derived from the Android Open Source Project. I guess when people say it’s won, what they mean is that if you think of software as a sphere that’s constantly expanding - or as Marc Andreessen said, “eating the world” - the surface of that sphere is mostly proprietary.
The ratio of the volume to the surface is constantly increasing, and most of that volume is open source, so people who are exposed to the backend of software and who are aware of what’s going on behind the scenes in tech say, “Oh look, open source is winning” or “Open source has won”, because so much of the volume inside the sphere is open source. But most of the world only has contact with the surface, and most of that surface is proprietary, and that surface is the only link that they’re going to have with any kind of meaningful software freedom, or lack of software freedom; their ability to customize, their ability to learn from the devices that they use… Their ability - I mean, it’s not the case that every person should be a programmer, but perhaps they should have the ability to hire someone else or bring something to a third-party service that specializes in customization and get something fixed or made to behave in a different way. And for most of the surface of that sphere it’s completely impenetrable and opaque and you just can’t do that stuff; you have to accept what is handed to you. So no, I don’t think open source has won in the meaningful ways.
I think there’s a really important distinction there between software as infrastructure and software on the consumer-facing side. The research I’ve been doing and where I’m interested is almost exclusively on infrastructure, and I noticed there is this difference on maybe the ideals of free software to begin with, or around being able to change the Xerox printer, that was the Richard Stallman thing.
Right, that’s the legendary story, which I think is true, of Stallman trying to fix a printer and not having source code to the printer driver.
Right. And so I wonder, is that frustrating for them…? In some ways it really won on the infrastructure side - and I keep saying “won”; maybe it’s just been massively adopted, almost because it’s the equivalent of free labour, like price-free stuff that startups can use - so has the needle moved at all on the principle side of things? Or does it even matter?
Well, I have a very utilitarian view of the principle side of it; I do think that software freedom is important, but it’s increasingly an issue of control over your personal life and your family’s and friends’ lives, or at least being able not to put them in harm’s way. A great example is Karen Sandler, the executive director of the Software Freedom Conservancy. She has a congenital heart condition, and the device attached to her heart is running proprietary software. That software - I don’t know the exact version running on her device, but that type of software has been shown to be extremely vulnerable to hacking, to remote control.
[00:48:03.02] In fact Dick Cheney, the former Vice-President, had a similar device in his heart and apparently had the wireless features on the device disabled for security reasons. Think about the fact that the federal agency in the U.S. that is responsible for approving medical devices not only does not review software source code, it does not even require that the source code be placed in escrow with the agency in case an investigation is later necessary. It just evaluates the entire system as a black box and says, “Yes, approved” or “No, not approved”, and they have nowhere near the resources or the competence, let alone the mandate, to review the software for vulnerabilities, when software vulnerabilities are increasingly affecting everyone. Everyone’s had a credit card account that’s been hacked in some way.
I wonder if those battles are gonna be addressed maybe not through software freedom or open source or those types of movements, but I guess as you’re describing it, I’m thinking more around hacker/maker movements and hardware stuff, or they might come at it from the same angle, saying “Why can’t I just modify anything?”
Yeah, and you do see a lot of that. I saw a keynote at the O’Reilly OSCon, the Open Source Convention, you probably saw it, too… The woman who had hacked her own insulin pump; the software that controls a device that dispenses a chemical into her bloodstream turned out to be hackable, so they hacked it.
So I think you’re right, the maker movement is driving it, and they share a lot of language and people with the open source movement. I just used the phrase ‘the open source movement’ unironically; to me it’s largely the same as the free software movement.
So yeah, there are various pressures toward people having the ability to customize or to invite other people to help them customize the devices that run increasingly large swaths of our lives. I guess what’s happened is open source kept winning individual battles, but the number of things that software took a controlling role in kept increasing so rapidly that the percentage of things that are open source on the surface has been going down, even as open source keeps winning area after area.
I think that if you separated it nicely into two camps - if you look at the production of software versus the consumption of software - the reason we keep talking about “open source is winning” is because it really has won, or is very close to winning, the production of software. If you were a developer in the early ‘90s, most if not all of your toolchain was proprietary. The way that you developed software was to use other proprietary software; that’s completely turned on its head.
Yeah, that was probably true, although it didn’t have to be.
It didn’t have to be at the time, but now the predominant way that you develop any software, including proprietary software, is to use a bunch of open source software.
Right, that’s a really good point. I think you’re right.
I mean, that proprietary code that’s on that heart device is probably compiled with GCC. [laughs]
Or one of the other free compilers.
Or LLVM, yeah. And so because the voices in our world are so dominated by the people that have actually produced the software, there is this mindset of “Hey, I live in this world all day and it’s 99% open source.” It feels like it has won. And I think the reason that it won, though - in that space, and not in the consumer space - is that there is a utilitarian reason that you need something open source. It is infinitely more useful if it’s open source, and more usable as a producer if it’s open source. And there are all these network effects that make it better over time, that I can evaluate as a producer.
[00:52:12.24] But if you’re looking at products and the consumption of software, it being open source or not is not visible to the consumer of that software, at least not immediately. So there needs to be some kind of utilitarian argument around that, and I think it may be privacy and security. That’s a very, very good argument and it’s getting more tangible to consumers now.
Yeah, I think that’s at least part of it, and that has been a winning argument. A lot of the open source privacy and security projects have seen a lot more adoption and a lot more funding; just for various reasons, many of those projects tend to be non-profit, or at least not plausibly for-profit. It’s very clear that for all of his eloquence as a writer and speaker, which I think is considerable, the reason Richard Stallman succeeded was Emacs and GCC. He wrote or caused people to coalesce and help him write two really great programs, and then motivated a lot of people to write a lot of the pieces of the rest of a Unix-like system; didn’t unfortunately get the kernel, Linus Torvalds got that, and that has caused some bad blood ever since. But it was writing good code, that people could actually use, that gave him influence.
That’s why they took his other writings seriously, it was the utility of the code. But I think going back to the way you started presenting that idea, I think one of the important goals, one of the important motivating factors in the free software movement was keeping blurry the distinction between producers and consumers; the idea that there should not be a firm wall between these two camps, and that anyone who just thinks of themselves as only using software… I sort of prefer ‘user’ to ‘consumer’ because when you use software, you don’t - it’s not like apples, where once you use it, it’s consumed. [laughter] The software is still there after you run it, so it’s not being consumed. But the idea that any user has the potential, by very incremental degrees, to be invited into the production of the software… In fact, that’s what happened to me, that’s how I got into it. I was just using this stuff for writing my papers in college and exploring the nascent internet, and someone showed me how to use the info tree.
That was like the documentation tree that covered all of the GNU free software utilities, and right at the top of the introductory node, the top-level node in the info documentation browser, was a paragraph that said “You can contribute to this documentation. To learn how to add more material to this info tree, click here”, where ‘click’ meant navigating with the keyboard and hitting return; I don’t think there was a mouse. There was no mouse on those terminals, they were VT-100 terminals. But the idea that the system was inviting me to participate in the creation of more of the system - that struck me as really interesting.
[00:55:38.29] The idea was to keep the surface porous and allow for the possibility that those users who have the potential to become participants in improving the system do so. It wasn’t just freedom as an abstract concept, it was freedom as a practice. And still today, I think the way a lot of people get into open source is that they learn that they can affect the way a web page behaves by going behind the scenes and editing the JavaScript that got downloaded to their browser and noticing that things change; then they realize that “Hey, this is not a read-only system. The whole thing is read/write. I can make things happen.” That’s what worries me about a lot of the user-facing devices and interfaces that we see today - there’s no doorway, there’s no porousness to the surface; you have no opportunity to customize or hack on it, or get in touch with the people who are one level closer to the source code.
I think there are a couple of interesting things that might be happening in tandem around that now. We haven’t talked about this at all, but just the definition of a software developer has changed radically in the past five years, where a lot more people are learning how to code. Maybe they’re not at a very high technical level, but just enough that they are able to modify small things around them and see that power. I think learning how to code has just become so much more accessible, so you have so many people that are interested in modifying the world around them in much more casual ways. That is blurring the line between consumer and producer. Look at any child today, everybody is learning how to code, and just imagine when they grow up and they just expect that everything around them can be transformed. It’s almost like people are coming at it from a different direction, but then at the same time you see all these very proprietary platforms that are basically exploiting network effects to centralize where people congregate on the internet, and those things are still total black boxes.
I don’t know what happens when the youngest generation now grows up… Will they say, “This is bullshit!”? “This is not how we were raised to see the internet.”
They’ll say “This is bullshit”, but they’ll say it on Facebook.
Right! And that’s the hard part, is sort of like you have this tyranny of … yeah.
I think that point about network effects is really important. What happened as an increasingly large percentage of humanity got internet connections was that the payoff ratio for building a proprietary system changed. It used to be that if you were building a system there was some reward for making it a little hackable, because the users you were likely to attract… Well, people on the internet at that time were already more likely to have potential to contribute to your system, so there was statistically some potential reward for making your system have a slightly open door to people coming in and helping out. But if you’re launching something like Facebook or Snapchat in the age of most of humanity being online, then the trouble you go through to make that thing hackable versus the payoff when most of those users are not going to take advantage of that, the reward matrix just looks different now, and maybe it just doesn’t make economic sense for those proprietary platforms to have a porous surface.
And oddly you see, like on Snapchat for example… Snapchat offers tons of things to let people essentially modify the world around them - stickers, drawing on things, or whatever. So it’s that same behavior, but it’s still on Snapchat’s platform.
Right, and they control it and they track… Like, you can’t fork Snapchat and make your stickers in the forked Snapchat, let alone do something else.
[00:59:41.22] The uncharitable way to say it is that everyone’s creative and environmental improvement impulses are being coopted and redirected into limited and controlled actions that do not threaten the platform providers. Basically every platform provider’s business model is “I wanna be like a phone carrier. I just wanna have total control over the user base and have people have to join in order to get access to the rest of my user base”, and that creates a mentality that is antithetical to the way open source works. You don’t fork a monopoly-based thing. You don’t fork a thing that has network effects.
I have a hard time thinking that that is necessarily… That these things have to be in conflict. I don’t think that users are ever gonna… I don’t think that you can sell a product to users in a competitive market based on the values that will attract a community around people hacking it. You have to be a great product compared to everybody else on the terms that most users are using it, but that doesn’t necessarily mean that you can’t also be hackable. You just have to have a culture around the product of actually creating something good.
Look at the one success story that we have, for a short period of time, which was Mozilla. They won for a while and took a huge amount of market share away from Microsoft - enough that Microsoft actually came and participated in Web standards again - because they made a better browser for users, and not just for people that were hacking on websites.
And it’s because it’s better, not necessarily because of those…
Oh no, my doom and gloom is not a moral condemnation, it’s an observation of economic reality. I think what you’re saying is correct, but it’s still not good news for open source.
No, and I think that’s what’s so interesting about right now, in even how people are using the term open source - a lot of people say something is open source when it’s not actually. So the term itself has sort of been coopted into different definitions, and for a lot of people now that are just coming into it, they say the term ‘open source’ and they just mean “Why shouldn’t I share what I made with the world?” or “Why shouldn’t I change something that I see?”, but it doesn’t necessarily carry all that other history or expectations with it.
Well yeah, that coopting has been going on. Ever since the term was coined, there have been groups and people using it in ways that don’t mean what it originally meant. There have been people coopting the term since it was coined, but there’s always been counter pressure to preserve its original meaning, because the original meaning is so unambiguous and so clear. It’s so easy to identify when it’s being correctly used, that the counter pressure usually is successful. So I don’t see any more of that now than in the past. I think that’s just a constant terminological tug of war that’s going on, but mostly the meaning of the term is as strong now as it ever was.
Well, I think it’s as strong now to a set of people that still hold on to that term really strongly, but to be frank I think they’re almost putting blinders on to how so many other people are using it. We’ve talked about this - at what point does that new definition just become the definition because so many people are using it that way?
Yeah, that’s how language works and I’m totally on board with that, but I guess what I’m saying is I try to see that happening - and a number of people do, and then they actually go, where possible… When it’s an organization that’s the source of terminology dilution, they’ll go to that organization and say, “Hey, the term doesn’t mean that. Stop doing that!”, and in almost every case the organization reforms its usage, and that’s the only reason that open source still means anything; it’s because that constant process is going on. I haven’t actually seen the ratio changing that much lately - of course it’s a very hard thing to gather data on, and Nadia, you have been out there doing research on this, so you might be right - but the blinders are anyway not intentional. We are actually out actively looking for that, and to me it looks like it’s about the same as it ever was, and we just have to stay vigilant.
[01:04:08.23] That’s a nice recap of the problems of people misusing the term or using it for something that’s not within the scope of what open source means. But there’s also a fair amount of - I don’t know how to say this without being mean…
Oh, go for it.
Corporations or projects that are open source within the definition of open source, but aren’t what we would call open.
Actually, I think that’s okay and I don’t care. In other words, if you’re forkable, you’re open source. And if you run the project as a closed society and even the developers’ names are kept top secret, as long as the source code is available and it’s under an open source license and it could be forked, you’re open source.
You’re thinking more about the future of it, rather than the current reality. Like, even if I can’t get anything done now, if it becomes a big enough problem, I have that option, right?
Well yeah. I mean, the fact that you have that option affects the behavior of the central maintainers, whether they admit it openly or not. The knowledge that your thing can be forked causes you to maintain it differently, even if you never respond to any of the pull requests, you never respond to any of the emails of anyone from outside the maintainer group. The mere fact that someone could fork it forces you to take certain decisions in certain directions so as not to increase the danger of forking, for example. So you still get open source dynamics, even when they’re not visible.
Yeah, that’s a good question, Nadia. I do think that some people put blinders on and try to ignore it, but they tend to get reminded of it. [laughs]
I didn’t hear Nadia’s question, I’m sorry.
I really wonder whether some companies actually see it that way, or whether they’re actually acutely aware of the fear of a fork. Because again, like we talked about network effects, where even if nobody likes the thing anymore, if everybody is using a certain thing, it’s very hard to actually switch off.
Well, it just requires… I mean, for business-to-business open source. Again, Android is a classic example. Google is very aware of the potential for forks; they are very aware of the business implications, to the extent that those are predictable, depending on who might fork it. And indeed, some forks have started to appear, and that is something that gets factored into their decisions as to how they run their copy of the Android project, which so far most companies still socially accept as the master copy, but they are not required to do that. So that means at least the Android Core code is indeed open source, even though it is not run in the way most open source projects are; although I think actually they have taken contributions from the outside. It’s not quite as closed as the tech press indicates it is.
From what I understand of your views, you see it as: the license and these guaranteed freedoms are what makes it open source, and that’s all that really matters, because you’re saying if needed you could always fork it.
I’m not quite saying that that’s all that really matters, I’m just saying that it’s a main thing… And sure, I would much rather have a project be run by a community, but that potential is always there as long as the open source license is there.
Yeah, the reason why I think collaboration and community are so intertwined is because, again, network effects… And it doesn’t really matter whether something can technically be forked if there is actually no ability to change it, so I worry that relying too much on that core definition makes it sort of a great hypothetical that never really happens. It’s like anyone can create an alternative to Facebook in theory, but no one has successfully created an alternative, because everyone’s on Facebook.
[01:08:04.11] Well, but I don’t think that network effects in an open source development environment are quite the same… Let’s take a couple of examples. GCC got forked years ago. It had a core group of maintainers, and then it had a bunch of revolutionaries who were not happy with how those maintainers were maintaining it. And from the beginning of the project there was no doubt about which was the socially accepted master copy. It was the one maintained by the Free Software Foundation, with a technical council that - I don’t know how they were selected, but I think Richard Stallman was involved in selecting them. When these revolutionaries grew increasingly unhappy with the technical decisions being made and with how contributions were being accepted or not accepted, they had corporate funding, and they went off and created EGCS.
EGCS started accepting all those patches that the GCC copy wouldn’t take, and eventually it kind of blew past GCC in terms of technical ability to the point where the FSF said “Well, I guess you’re kind of where stuff is happening now, so we’re just gonna take the next version of EGCS and call that GCC and merge the two, and you won.” And it was totally successful, and it happened because the problems were big enough that people were willing to devote resources to forking and solving them. Could the same thing happen with the Linux kernel? Absolutely. If Linus started making bad decisions, or if he started ticking off too many people and enough kernel developers who had the technical plausibility to launch a fork chose to make a fork - yeah, it would succeed, there’s no question. But it’s just that Linus is running the project well enough that no one needs to do that.
Yeah, I see your point, it is different.
Yeah, but Facebook, on the other hand, that’s a whole different kind of network effect. I don’t mean to completely argue your point away because I think it’s a good one, which is that there are network effects, and it is a lot of effort to fork a popular project that has a successful or at least a cohesive maintenance team and a clear leadership structure.
And you need to have a community that cares enough to fork it. Again, fast-forwarding to some sort of dystopian future that I don’t actually know is the future or not - if open source projects become more about users than about contributors, and people are just sort of using the thing, then it becomes a lot harder to mobilize people to change something. But maybe I’m just sort of making up…
Well, the degree… The ease with which it is possible to motivate people to make a fork or to change something will always be directly proportional to the amount of need for that change. If no one’s motivated to change anything, that just means it’s not important to someone for something to get changed, so why should we care?
Yeah, I don’t know… People can hate using something… There’s a ton of legacy open source projects that are used in everybody’s code, and it’s just really hard to switch them out because everyone uses them.
I think the difference though is that there’s just not enough people… Yes, people hate using it, but there are not enough people that want to be developing on it that can’t, that would then fork it and fix it. And I think that there’s a tension here between the people using it and the people that wanna contribute and can’t, or wanna fix things and can’t. And sometimes it really is too difficult to pull that out. But io.js was a pretty successful fork, and that was in large part because there were a lot of people that wanted to contribute that couldn’t, and that wanted to take on ownership of the project and couldn’t. So there was a thriving community actually working on it, and then people that were using it were like “Oh great, I can come and use this.”
[01:12:00.15] Unfortunately I don’t know the details of that particular fork, it sounds like you do. If you think there are interesting lessons to draw from it, please explain more.
So I’ve said this on a couple of occasions, but I think it’s proportional to the size of the user base… There’s some percentage of that user base that wants to contribute in some way, and if they’re enabled to, you’ll have a thriving community. If you don’t enable them, you eventually will increase the tension, not just with your overall user base, but also with these people that would be contributing. And eventually, if that tension rises enough, you get a fork.
I think where that starts to break down is that when you look at Android, the users of the Android code base are not the users of Android. The users of the Android code base are companies that manufacture phones, for the most part.
And indeed, they started forking Android.
Yes, exactly. So they have the resources to do that, and their needs do not necessarily line up with the needs of Google. The problem is that their needs are in many cases counter to those of Android’s users, so it puts Google in a strange place where they’re not satisfying the needs of the users of the Android code base, but they are satisfying the needs of the Android end users. If you talk to anybody who uses Android, they’re like “Oh, I have to use the newest Google phone that only takes the Google Android, because the ones that manufacturers have forked are pretty much terrible.” Except, I heard Java is really good. I think we’re getting into very specific things right now… [laughter]
Well we are, but just to make a quick point about that, in theory, in some sort of long arc of software justice, there should be a link between what those companies are doing with their forks of Android and users’ needs, because otherwise they’re not gonna sell phones. Of course, I would love all those phones to be running a fully open source operating system, and the reasons why they’re not are an interesting topic in their own right, but there should be some connection eventually between those forks and some kind of technical need being solved.
So when you’re looking towards the future though, do you see that tension rising, and users starting to come more in conflict with that model, or are you more pessimistic about it and you feel like the surface is going to continue to be dominated the way that it is now?
I wanna give the optimistic answer, but I have no justification for it. Because software is increasingly being tied to hardware devices, and the hackability for a hardware device is so much… Like, the hacktivation energy, the threshold for hacking on something other than a normal laptop or desktop computer is just so much higher that the ratio in any given pool, in any given user base, the number of those users who will be developers, the percentage is gonna be lower. Just to hack on an Android phone - alright, you’ve gotta setup an Android development environment, you’ve gotta plug into the phone using a special piece of software that gets you into a development environment, and all of that software might be open source, but it’s not like just compiling and running a program and then hacking on the source code and running it again on your laptop. The overhead to get to the point of development is just so much higher. And that’s just phones. Do you think hacking on your car is gonna be easier than that? No, it’s gonna be a lot harder.
I think unfortunately we have to leave it there with this view of a dystopian future… [laughter]
Always happy to make it darker for you.
…but we’ll be back next week. We’re gonna continue with Karl and talk about some much happier things, like contributions and governance models…
Oh, I’ll turn that dark, too.
Oh, okay. [laughter]
Can’t wait!
java tutorial 8, main & scheduler threads questions
Hi,
Coming back to Tutorial08.html, it says:
<<<<
It is important to understand that it is possible for a thread to be interrupted at any time by another thread. (…) This can lead to some unstable situations. Consider the following class:
public class threadProblem extends MaxObject {
    private int[] data = new int[10];

    public void bang() {
        int size = data.length;
        for (int i = 0; i < size; i++) {
            data[i] = i;
        }
    }

    public void inlet(int i) {
        data = new int[i];
    }
}
An int sent in the object’s inlet will create a new array of the given length, and a bang will populate the array with numbers. However, consider what would happen if a bang in the main thread (caused by user input, say) was interrupted sometime while executing the bang method and in the scheduler thread an integer was input? If the input is smaller than the previous size of the array, an exception will be thrown shortly after the flow of execution has passed back to the main thread, when the for loop exceeds the end of the new array length. (…)
>>>>
If I understand correctly, a [delay 0] after any bang sent to the class would solve the problem in this case, am I right?
In some other situations, where I’ll need to pass numbers and lists, will [pipe 0] act the same way – "moving" the data from the main to the scheduler thread – as [delay 0] does?
I think I understood that if I want to send critical timing events from Java to Max, I will need to use outletHigh instead of the outlet methods… and I’m then wondering, the other way around, whether a kind of "inletHigh" feature (maybe through a kind of declareInletHigh(number) method) would have been useful in the above situation, as an alternative to the delay/pipe stuff?
Any comment or answer welcome,
Thanks,
Alexandre
I would use defer or deferlow rather than delay or pipe.
Not sure if using a {synchronized} block in Java would get around some of these issues, but it would certainly introduce some overhead with respect to object locking/unlocking, which would likely defeat the purpose if you’re in a critical timing situation.
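For what it’s worth, here is roughly what a synchronized version of the tutorial’s class would look like. This is only a sketch: it doesn’t extend MaxObject (so it compiles standalone), and the explicit lock object and the size() accessor are my own additions:

```java
// Hypothetical thread-safe variant of the tutorial's threadProblem class.
// bang() and inlet() may be called from different Max threads; taking the
// same lock in both makes each method body atomic with respect to the
// other, so bang() can never observe data being swapped out mid-loop.
public class ThreadSafer {
    private final Object lock = new Object();
    private int[] data = new int[10];

    public void bang() {
        synchronized (lock) {
            int size = data.length;
            for (int i = 0; i < size; i++) {
                data[i] = i;
            }
        }
    }

    public void inlet(int i) {
        synchronized (lock) {
            data = new int[i];
        }
    }

    // Accessor added for illustration/testing only.
    public int size() {
        synchronized (lock) {
            return data.length;
        }
    }
}
```

Equivalently, declaring both methods synchronized locks on this; the explicit lock object just makes it obvious which shared state is guarded. And as the reply above notes, the locking does add some overhead on every call.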
Thanks for your answer…
>> I would use defer or deferlow rather than delay or pipe.
??
I feel you might have missed something: I have read that [delay 0] "moves" the data from the main thread to the scheduler thread (the higher-priority thread), while defer & deferlow do in fact the opposite…
(Me, I wanted to know if [pipe] acts the same way as [delay], "moving" the data to the higher-priority thread, the scheduler thread.)
>> a {synchronized} block in Java
Sorry, I might not be a Java expert; I’m not sure what you’re talking about…
6.1
31 Unix Domain Sockets
This library is unstable; compatibility will not be maintained. See Unstable: May Change Without Warning for more information.
A boolean value that indicates whether unix domain sockets are available and supported on the current platform. The supported platforms are Linux and Mac OS X; unix domain sockets are not supported on Windows and other Unix variants.
Connects to the unix domain socket associated with socket-path and returns an input port and output port for communicating with the socket.
Returns #t if v is a valid unix domain socket path for the current system, according to the following cases:
If v is a path (path-string?), then the current platform must be either Linux or Mac OS X, and the length of v’s corresponding absolute path must be less than or equal to the platform-specific length (108 bytes on Linux, 104 bytes on Mac OS X). Example: "/tmp/mysocket".
If v is a bytestring (bytes?), then the current platform must be Linux, v must start with a 0 (NUL) byte, and its length must be less than or equal to 108 bytes. Such a value refers to a socket in the Linux abstract socket namespace. Example: #"\0mysocket".
Otherwise, returns #f. | http://docs.racket-lang.org/unstable/unix-socket.html | CC-MAIN-2014-42 | refinedweb | 208 | 61.26 |
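For comparison outside Racket, the same connect-and-exchange flow can be sketched in Java (16 or later, which added unix domain socket support to java.nio). The class name, helper method, and socket path in this sketch are my own, not part of either library:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnixSocketDemo {
    // Bind a unix domain socket at sockPath, connect a client to it,
    // send msg, and return what the server side read back.
    public static String echoOnce(Path sockPath, String msg) {
        try {
            Files.deleteIfExists(sockPath); // remove a stale socket file, if any
            UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(sockPath);
            try (ServerSocketChannel server =
                     ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
                server.bind(addr);
                // connect() completes once the connection is queued, so it
                // is safe to accept() afterwards from the same thread
                try (SocketChannel client = SocketChannel.open(addr);
                     SocketChannel peer = server.accept()) {
                    client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    int n = peer.read(buf);
                    return new String(buf.array(), 0, n, StandardCharsets.UTF_8);
                }
            } finally {
                Files.deleteIfExists(sockPath); // the socket file outlives the channel
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

As with the Racket API, this only works on platforms with unix domain socket support, and the bound path is subject to the same length limits described above.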
Eight Tenths Of A Lizard
Palin was the first of many like-minded souls to write with this news: "On my weekly check of Mozilla's status, I ran into version .8 of Mozilla, which seems to have been released yesterday. What a nice Valentine's day gift that was. :-)" And Alphix points to the thing itself, and suggests some things to read. Mozilla is my daily-use, pH-balanced Web browser of late, so I'm glad to see that it can finally allow users to avoid the degrading spectacle of endlessly cycling animated .gifs.
Re:Browser good, mailer bad... (Score:2)
I am using Sylpheed [freshmeat.net]; it's quite a good 3-pane GTK mail client, rather stable, and it handles international character sets.
It does the job rather well for me. (small, fast, user friendly)
Re:0.8 versus 1.0... (Score:2)
I don't think so. AOL has different needs than mozilla. They needed to get something out, even if it is crappy. Bad press is better than no press. They release bug fixes, and will release a future version based on the mozilla trunk. In the end, they will have a browser as stable as mozilla is, and will have had something to show before. The net result will be positive for them *and* for mozilla (if NS6 didn't exist, you would see hundreds of comments explaining why mozilla is a failure because nothing official was released...)
Btw, mozilla has been my main browser for the last 3 months. It crashes often, but it is getting better and better.
Cheers,
--fred
You CAN turn off animated gifs in IE (Score:3)
There's all kinds of cool keyboard shortcuts in IE. My favorite is pressing F11 to get full screen mode. Browsing the web in full screen absolutely rocks.
Re:0.8 versus 1.0... (Score:2)
Re:New question... (Score:2)
What's really great (Score:2)
Turning off animated gifs in IE (Score:3)
Tools->Internet Options
Click on the advanced tab.
Under the "multimedia" section, there should be a "play animations" option. Remove the check in the checkbox in front of it.
Voila! No more animations. Although, in retrospect, some slashdot ads don't make a lot of sense now.
Is Mozilla suffering from featuritis here? (Score:3)
While features like disabling animated GIFs and blocking popup windows from particular sites are welcome, those nasty evil spamming advertisers are always brilliant at innovating new ways to bypass your filter controls.
Then, what are we to do? To respond to every one of their tricks right in the browser? Or should we separate the job and put it in a proxy server for that purpose?
The way I see it, using a proxy is the way to go. If you've tried Proximitron you'll know why. It's infinitely more configurable - user-configurable. Everyone will have the ability to scratch their own advertiser-induced itches.
Prozilla anyone?
Re:Come on slashdot! (Score:2)
We also know that version numbers are tuples, not fractions, right? Right?
Don't let the man push you around (Score:2)
I don't want to defend Netscape and its shopping buttons. The Mozilla I just dl'd doesn't have them. Most of your comments are about Netscape, so in a sense we're talking past one another. But I'll keep going anyway.
Corporate IT guys are going to go with IE, at least in the foreseeable future. I wrote a web app for internal use at a real estate support company, and reluctantly had to agree with my boss that going IE only made sense. There are too many differences in DHTML and Javascript across platforms, almost everyone wanted to use IE anyway, and supporting other browsers would have made the project much harder to pull off. XML was a big bonus as well. And we didn't put any animated gifs in our app anyway.
But there's more to life than your cube. People wear bland clothes at work, they sit at ugly desks, and often have uncomfortable chairs. At home we have options, and hopefully we make sure things are better. I don't feel that my work environment ought to dictate how I furnish my apartment. Why should it dictate what kind of software I run?
I want to turn off ads, at least the flashing ones. Maybe that makes me sick and unamerican, but that's how I feel.
Why oh why (Score:2)
They both nicely import my NS4.7x user profiles, giving me one full set of bookmarks, so why the fuck would I then want another complete copy of the same ones imported from IE? I already synchronize them for the very rare occasion I use IE, and you cannot get rid of the IE ones!
I want to submit this to Bugzilla with many flaming comments, but fear it will be rejected
Pope
Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
What, you want *tactful* adult entertainment? (Score:4)
> production one comes to expect from all forms of adult entertainment.
I'm not going to defend animated GIFs here--they're annoying as hell, and on a dialup 56k like most people still are using, they're absolutely evil time-wasters. I recall once being SO annoyed that a page was taking so long to load, because the animated GIF banner at the top must have had thirty friggin frames...
But I just had to ask: what do you expect *other* than sleazy, tactless production in adult entertainment? Proper, tactful production? I can see it now, on PBS's Masterporn Theatre: "Oh, Madam Deepthroat, bring forth thy heaving bosoms of delight, that I might feel them up whilst thou tastest of my knightly schlong, bedewed with premature trickles of nectar as sweet as morning dew. Oh, how I have yearned to taste thy tuna steak of love, whilst you get it on with my handmaiden in some hot girl-girl action..."
Personally, I prefer the straightforward smut of a Dark Bros. or Max Hardcore production to the polished coldness of some silly Skinemax-wannabe soft stuff. But, I digress...
Re:MS is doing the right thing (Score:2)
Re:Don't let the man push you around (Score:2)
Re:New question... (Score:2)
Re:On the topic of animated GIFs... (Score:2)
Re:Konqueror beats the Lizards ass. (Score:2)
BTW, I think Konqueror is an excellent browser.
Re:Choices! OT (Score:2)
Re:MS will exploit IE, and that will push users aw (Score:2)
Turning off animated gifs, true, there hasn't been a menu option for it. But, you can patch your netscape (any version) to play animations only once [hu-berlin.de] and then stop. I'll attach a little bit of C code I wrote that does the job for you.
Still, it is nice to see the Mozilla guys including a preference to disable animation. Sadly, it seems that many users will never "find" it, judging from the post above.
* and "ANIMEXTS1.0" with different strings, so that
* netscape will be tricked into thinking all animated
* gifs are not to be looped. This is nice, since those
* annoying ads will play once and then stop.
*
* For more info, see this page:
*
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/stat.h>
#define NETSCAPE "/usr/lib/netscape/netscape-communicator"
#define STR1 "NETSCAPE2.0"
#define STR2 "ANIMEXTS1.0"
const unsigned char *memstr(const char *haystack, const char *needle, int size);
int main(int argc, char **argv)
{
int fd, r, pos;
struct stat nsstat;
unsigned char *buf, *p;
r = stat(NETSCAPE, &nsstat);
if (r != 0) {
fprintf(stderr, "File %s doesn't exist\n", NETSCAPE);
exit(1);
}
buf = (unsigned char *)malloc(nsstat.st_size);
if (buf == NULL) {
fprintf(stderr, "Unable to allocate %ld bytes of memory\n",
(long)nsstat.st_size);
exit(1);
}
fd = open(NETSCAPE, O_RDWR);
if (fd < 0) {
fprintf(stderr, "Unable to open %s for read/write access\n",
NETSCAPE);
exit(1);
}
r = read(fd, buf, nsstat.st_size);
if (r != nsstat.st_size) {
fprintf(stderr, "Unable to read %ld bytes from %s\n",
(long)nsstat.st_size, NETSCAPE);
exit(1);
}
p = (unsigned char *)memstr(buf, STR1, nsstat.st_size);
if (p == NULL) {
fprintf(stderr, "Didn't find string \"%s\" within %s\n",
STR);
}
p = (unsigned char *)memstr(buf, STR2, nsstat.st_size);
if (p == NULL) {
fprintf(stderr, "Didn't find string \"%s\" within %s\n",
STR);
}
close(fd);
return 0;
}
const unsigned char *memstr(const char *haystack, const char *needle, int size)
{
const char *p;
int len;
len = strlen(needle);
while (size > 0) {
p = memchr(haystack, *needle, size);
if (p == NULL) return NULL;
size -= (int)(p - haystack);
if (size >= len && memcmp(p, needle, len) == 0) {
return p;
}
p++;
haystack = p;
}
return NULL;
}
Re:MS will exploit IE, and that will push users aw (Score:2)
Re:Why is it taking so long? (Score:2)
> interface, etc
Nonstandard/buggy rendering, runs only on Windows*, all kinds of security problems, etc.
* MacIE is a completely separate code base.
The closed-source Netscape 4 was complete garbage internally and they had to throw it away. It's taken a few years to recover from that.
Have they fixed plug-ins in Linux? (Score:2)
Also, while talking about external programs: how about being able to link to external DOWNLOADERS! I prefer to use NT [krasu.ru] (the program, not the OS) to do my downloads, and I'd like to plug that into Mozilla. I'd like to see
Plugger [hubbe.net] style functionality built-in to Mozilla to allow me to point the lizard at my video and audio format players.
I know what you mean, (Score:2)
I started using it because I had a 14" monitor and wanted to see more page, continued using it even on a 21 incher because it was so sweet. Much better in my opinion than IE's fullscreen.
Would love to see a Mozilla version. Mail them at info@inquare.com [mailto], pressure will make them comply.
Re:MS will exploit IE, and that will push users aw (Score:2)
I don't want to know how I can turn off whichever preferences in a particular version of netscape; I want to know when browsers like netscape will let users create their own buttons and customize their actions.
OF COURSE I can go to my preferences, but I can't just have one button that does a frequent task. Similarly, I liked the "Font Size" button they had in IE; in Netscape, that might make up for the lack of a "Zoom" feature (Opera and Galeon did this well).
Also, that code you posted is pretty long and ugly; not only would a link have sufficed, but couldn't someone have neatened up their error handling code? I wrote a function in C just for that, and it has greatly reduced the amount of pointless 'if' statements I have had to write, and improved debugging.
...and while I'm being pedantic, why the hell did you put my user name in quotes, "pjrc"?
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Re:SSL = Bad (Score:2)
iCab does that (Score:2)
Re:New question... (Score:2)
Lots of UNIX systems have Netscape installed; they might also have lynx, and around here possibly a few file browsers that double as web browsers, IE for Solaris or HP/UX, Amaya, and a host of other forgotten browsers.
And, AFAIK, Stallman wouldn't be terribly happy with Mozilla, because it isn't GPL'ed. The MPL ain't bad, but I'm sure he'd find something to object to in there. Now *that* would be somewhat amusing.
In short, reply to someone who knows less about the subject next time, Matt.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Re:8/10ths, and I am sad (Score:2)
It only kills the current window. Workaround: open another window, switch the skin from it, then close it.
> It'll get better soon, honest
It did get a *hell* of a lot better in the last months. Not really usable for general consumption, but it is already my default browser.
> Is the emperor wearing clothes?
About the huge amount of bugs you have, you have to admit that they are not in fundamental parts of the browser. You can browse the web with mozilla using non-proprietary software.
This is huge goodness. Let's say that the emperor has bathing clothes…
And thanks for supporting this project. *Everybody* will benefit from it.
Cheers,
--fred
Re:Mozilla is shit (Score:2)
Also, there is an additional stage where interface specifications are compiled into header files. The interface specs allow cross language access to components at runtime.
"Just a browser" today means an entire interpreted, garbage collected language (javascript), HTML, XML and XSL rendering and a ton of other stuff. If you want a lightweight browser, get lynx.
Finally, while mozilla is a browser, it is also a platform for developing OS-independent applications with web and web-like technologies (see mozdev.org). It has a scope as broad as an OS.
Eventually, it will compile into both the entire mozilla system *AND* a compact embeddable component, but it's not quite there yet (but really close!).
status of Fizzilla? (Score:2)
it's great to see Mozilla shaping up. i've been trying out Mozilla on a regular basis since M4 (yeah, i'm a glutton for punishment), but it's only recently that i've been able to use it on my Windows machine at work as my primary browser.
at home however, i run the MacOS, and while IE 5.0 on MacOS 9 is still the best browser i've ever tried, i now run a later build of MacOS X for development purposes. IE 5.1 on MacOS X is severely lacking (a very poor carbon port), and i'd really like to make the switch to Mozilla on this platform. does anybody know what's going on with Fizzilla [mozilla.org], the MacOS X port of Mozilla? specifically, i'd love to be able to run FizzillaMach [mozilla.org] that uses the UNIX code as a back end and a carbon port of the Mac code as the interface.
right now the latest build of Fizzilla is based on an early January nightly build, and while that's good, it still has some pretty nasty bugs that keep me from using it on OS X. it's a shame because the recent builds of Mozilla have been so good. does anybody know if there is there active development of Fizzilla? are they planning on releasing a new build, perhaps based off of 0.8? on March 24th a lot of people are going to be looking for a Mac OS X-native web browser, and IE is already going to be included in the dock by default. it's going to be important to have the Mozilla alternative available at that time.
- j
Re:MS will exploit IE, and that will push users aw (Score:2)
...phil
Re:mailll (Score:3)
setenv MAIL "xterm -e mutt"
for example
Turn off ads (Score:3)
The IJB is available for UNIX, Microsoft Windows, and Linux. Configuration is just a little bit complicated, but no more so than any other standard UNIX daemon.
Alos, there's a truly wonderful program by the name of WebWasher [webwasher.com] that will do that same thing under Microsoft Windows. It's got a very slick interface, awesome features, and some very friendly guys working on it. If you have any Microsoft Windows clients, I would highly recommend installing WebWasher on them.
Definitely check out Squid [nlanr.net] as well. It's a caching proxy server that runs under UNIX and Linux. I've used it for years.
Re:omg (Score:2)
--Asa
Animated GIFS? (Score:2)
I blew away
Re:seems nobody mentioned.... (Score:2)
--Asa
Re:Turn off ads (Score:2)
I've considered porting the whole lot to mozilla, and probably will someday. I'd like to be able to take advantage of Mozilla's parser to do the filtering. Right now the HTML has to be parsed twice; once for the proxy, and once for the browser.
--Bob
Re:On the topic of animated GIFs... (Score:2)
Re:You CAN turn off animated gifs in IE (Score:2)
Ditto for Netscape. Neither will really stop if the site does one of those old push-style animations to rotate banners on the fly...
Fullscreen is something I have wanted in Netscape since 2.0 All this wasted screen real estate telling me what application I'm running, what site I'm on, etc.
I wonder if the memory utilization is down on Mozilla yet... I guess I'll have to give it a shot.
Does anybody know if Mozilla can be coaxed into fullscreen?
Re:MS will exploit IE, and that will push users aw (Score:3)
On NS 4.x and IE 5.5 there is such a button, and it's right there on the toolbar! It's the "Stop" button. Once a page has finished loading, press the stop button. This kills all the flashies dead in their tracks. I do this all the time to get away from the distractions of the xmas tree of gifs most sites have turned into.
The nice part about this is that the sites I frequent get the ad hit, which isn't an option with something like junkbuster. I rather like the notion that those sites are getting the revenue from my visit. Might mean they stay around a little longer and all that.
"...cookies..."
Konqueror is the best I've seen in this regard. Each site that asks for a cookie Konq prompts you for. I know other browsers have this option, but in Konq you can specify to allow or deny all future cookies from a specific domain. It is perhaps Konq's best feature yet.
Actually, I personally don't think we need a button to turn it off. Instead, how about simply removing that damn "window.close()" event entirely from the language? Is there any real use for this event besides throwing gobs of advertising at you as you attempt to leave a site? It's not even effective advertising, as the audience in question isn't going to be looking at the message, but instead how to deal with a browser suddenly out of control.
The other annoying aspect to JavaScript are them pop-up windows. Unfortunately, there are a number of legitimate uses for these making it difficult to say we should just get rid of them entirely. If there were some tool on a bar to deal with these that might be worthwhile. Again, I still don't think that totally disabling JS, even as a switch, is a reasonable solution when there are alternatives that haven't yet been explored.
Re:Faster, Leaner, and Meaner? (Score:2)
On or about september 22 a development branch called MN6 was cut from the Mozilla trunk. This branch was a slower moving branch that resulted a couple months later in the Mozilla code which was at the heart of the Netscape 6 product.
A couple weeks after the MN6 branch was cut there was a Mozilla Milestone made from the trunk. This was M18. The next trunk Milestone was 0.7 in early January, and the latest trunk release is 0.8.
Re:Konqueror beats the Lizards ass. (Score:2)
My only real problem has been that you get "Connection refused" errors a lot more than in Communicator 4.X and that it will mess with your back/forward list of sites visited when you do. So, I don't really see it as a "buggy piece of shit," but could you show me something about feature comparisons between the two?
Re:Animated GIFS? (Score:2)
--Asa
mail servers with ldap support (Score:2)
--
Re:Animated GIFs and Interface Design (Score:2)
Yet another reason why I prefer Netscape 3.0 (Option--uncheck-image-autoload, Option--uncheck-enable-Java/shit) over Netscape 4.x (Edit-prefs-advanced-click/click/click-ok)
The more inconvenient you can make it for the user to toggle shit like Java, proxying, etc., the more likely they are to see the ads.
Now watch my UNIX port of Netscape 4 take 20 seconds to render a pile of tables where Netscape 3 would have done it in less than one.
When Mozilla shows they want to write a great browser, not a "look at how deep we can bury the features" (NS4) or a "k00l, itz gawt sk1nz!" memory hog (XUL)... oh hell, why bother finishing the sentence. We know they won't.
Re:Have they fixed plug-ins in Linux? (Score:2)
However, if you do an LD_PRELOAD=libXt before mozilla-bin gets loaded, then all will be well.
Re:About bloody time... (Score:2)
--Asa
Re:Turn off ads (Score:2)
I stumbled upon something very neat the other day... if you're on a machine where you can't install any of the above (such as at work) or don't want to mess with the browser settings, just go to SafeWeb [safeweb.com].
The nice thing is that it also encrypts everything you view with 128-bit SSL, so you can defeat those who sit around and read weblogs all day at your place of employ.
It has its annoyances, but it works well for me. At work, I don't have to fear accidentally clicking on a link that takes me to pr0n. (Yes, completely unintentional, I assure you.)
The only thing is that it's a
Re:Have they fixed plug-ins in Linux? (Score:2)
Re: Text entry widget (Score:2)
I'm reading this thread using Mozilla 0.8 (2001021502) on a Wallstreet, MacOS 8.6, vga out to a Dell monitor, and the widget works fine. Perhaps it's interacting badly with your video card &/or driver?
> if they would just fix Mac IE's stability
I want to know where the heck IE 5.5 went. It was demoed [appleinsider.com] at MacHack [zdnet.com] last summer. I can only guess they postponed it until the OS X release party.
Re:MS will exploit IE, and that will push users aw (Score:2)
---
Come on slashdot! (Score:2)
--
The last blocker bug... (Score:5)
Relating to slashdot's troubles earlier to-day.
Re:New question... (Score:2)
Actually Mozilla is now released under a dual MPL/GPL license. So RMS should be quite happy with it.
Check it out! [mozilla.org]
Re:Faster, Leaner, and Meaner? (Score:2)
:)
Faster, Leaner, and Meaner? (Score:3)
* Seems faster than the old 0.6 or 0.7 builds. (ie the menus seem zippier).
* Seems to load faster than previous versions.
* Still can't minimize the download window w/o minimizing the entire Mozilla app.... (what gives?)
* Still having trouble installing JRE 1.3
Overall, 0.8 seems to be faster, and generally better than any of the previous builds.
Gururise
Garden Grove Real Estate [erachampion.com]
What happened to LiveConnect? (Score:2)
Re:0.8 versus 1.0... (Score:2)
I still agree, netscape 6 sucks though.
P.S. This release was 2 days late! It was forecast for the 12th of Feb! (Well, around.)
Good job guys, keeping to a schedule. Should look forward to 0.9 about the 3rd week of march then, and hopefully 1.0 at the end of April.
Re:Have they fixed plug-ins in Linux? (Score:2)
PGP going into mozilla too (Score:2)
New question... (Score:5)
The real issue is, what will happen to Netscape? They aren't losing the browser war now because of Mozilla. Now it's because of AOL, who makes every stable Mozilla release into a horribly patched, rushed Netscape release with extra annoying commercial features and bundling that none of us want or need.
Also, despite the benefits Mozilla has seen due to Open Source development, I doubt it will do as well without Netscape, as gutted as it is. JWZ said that the benefits gained from opening a project like that are about 30%, which means that 70% of the work has to be done by AOL/Netscape/Time Warner, and if AOL loses this war to Microsoft, we might lose a lot of developers.
Also, it sucks seeing a great team of people turn into a large impersonal entity that no one really likes. As the Open Source community is already developing other browsers, it isn't clear how much work will be put into Mozilla, and how much will be spent reinventing the wheel.
I only hope that a truly impressive, usable browser comes out of all this: one that doesn't annoy me and show me ads, but rather lets me tell it what I want it to do. Being able to set a level of HTML compliance would be nice, as well.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Re:Have they fixed plug-ins in Linux? (Score:2)
Where are you getting your plugin? Is it the actual Macromedia plug in or a clone?
What about other plug-ins?
Re:MS will exploit IE, and that will push users aw (Score:2)
Of course if they did, they would be instantly crucified for their anti-competitive action of not letting other companies derive advertising revenue, noting that microsoft derives no revenue from banner ads, yadda yadda yadda.
Go run Proximitron (no link, I'm lazy, use Google. If I could use everything2 links, that'd be nice, but e2 seems to be doing worse than slashdot these days). Don't mind the hideous interface (there's an option to turn it off, then it becomes merely idiosyncratic), it's otherwise a great tool. I use it to look at the various headers when I do server work. It can do all kinds of filtering and transforms on headers and content, which includes blocking sites, cookies, etc. Has built-in filters to animate gifs only once, as well as popup-stoppers that don't turn off all javascript, etc.
--
Re:Faster, Leaner, and Meaner? (Score:2)
Blackdown's JRE was working for some folks last I heard so if you've got that give it a try.
--Asa
How do you disable "tooltips"? (Score:2)
How do you disable the floating tips that appear when your mouse strays over the back button too long? Under Netscape 6 for Solaris, I've seen these floating windows remain on the desktop even when Netscape is iconified.
I found a setting for "browser.chrome.toolbar_tips - false" on developer.netscape.com, but this doesn't seem to work.
I'd use Mozilla, but I like to use Solaris x86 at work, and I don't see binaries for this platform. Yes, I know I could build it...
Animated GIFs and Interface Design (Score:5)
I've watched naive users (e.g. my parents) use a browser. When faced with an agonizing animated GIF, giant blink text, or horrible background, they move the mouse to the offending item, and try to turn it off. This is, of course, in keeping with the GUI concept: select the item, then manipulate it, perhaps with a right mouse click. This corresponds deeply with reality: if a mosquito is biting me, I focus on it and take action.
Browser interfaces are often counter-intuitive because the cure is hidden in deep menu items, e.g. edit->prefs->advanced->.... Users rarely find these things, and if they do, don't know what they do. My dad doesn't want to disable all Java apps, he just wants to stop the pain he is experiencing on the page he is currently visiting. To make a browser great, watch your new users very closely.
Re:Kill the gifs! (Score:2)
Mozilla is shit (Score:2)
Jesus Fucking Christ, the source tree is 150M large. I left it building overnight, running under time(1), and when I woke up, I found out that it took 58 minutes to build. 58 minutes! That's just a hair shorter than it takes to build my base FreeBSD system from scratch! I thought Mozilla was supposed to be a browser, for Christ's sake!
So after balking at the exceptionally long build time, I ran du to find out just how large this pile of shit is... with object files, binaries, and source (remember, 150M), the tree was 1.4G. That's very close to the size of my /usr...
Not to mention the fact that it still doesn't render pages right, it takes a while to render widgets, and it crashes.
The team ought to focus on making a lean, fast, quality browser... every time I try to build this thing, it gets bigger... and for what?
A new year calls for a new signature.
Choices! (Score:4)
Re:The last blocker bug... (Score:2)
Re:Choices! (Score:2)
But the DEB package depends on libqt2.2-gl. I don't have the library installed, but it turns out that everything works for me.
You've misread the dependency line. It is this (adding some white space):Depends: libc6 (>= 2.1.2), libjpeg62, liblcms (>= 1.06-0), libmng (>= 0.9.3-0),
libpng2, libqt2.2 (>= 2:2.2.3-0.potato4) | libqt2.2-gl (>= 2:2.2.3-0.potato4), libstdc++2.10,
libz1, xlib6g (>= 3.3.6-4)
So, it depends on either libqt2.2 or libqt2.2-gl. You apparently have libqt2.2 installed.
However, it keeps slowing down itself over time until I have to exit the browser and run it again.
This is a known bug, introduced in the switch from 4.0-betaX to 5.0-betaX. Check the Linux opera newsgroup [opera.no] for some reports of it.
Heresy! (Score:2)
Only Linux truly loves you. Human beings just can't grep all the processes you're going through.
Re:The last blocker bug... (Score:2)
Index: nsAppRunner.cpp
RCS file:
retrieving revision 1.263
diff -u -r1.263 nsAppRunner.cpp
--- nsAppRunner.cpp 2001/02/12 21:16:02 1.263
+++ nsAppRunner.cpp 2001/02/16 00:11:44
@@ -1195,6 +1195,7 @@
int main(int argc, char* argv[])
{
+ NS_ASSERTION(bSlashdotRunning,"Can't start without SlashDot");
#if defined(XP_UNIX)
InstallUnixSignalHandlers(argv[0]);
#endif
---
Re:The last blocker bug... (Score:5)
Fuck, that's funny. So, go on: 95 posts saying that its bloatware and why can't we have a lighter browser; 47 pointing out the obvious and saying that Netscrape 4.7.2 leaks memory; 22 posts on the subject of IE being better; At least some figting pointlessly over whether the UI stinks, or it's just that we don't understand how important XML is...
Good work, Mozilla dudes.
Dave
Re:Kill the gifs! (Score:2)
the library Mozilla uses for animated gifs is in fact called 'libpr0n'[gjw@snoopy mozilla]$ find . -name '*pr0n*' -print
[gjw@snoopy mozilla]$ find . -name '*pron*' -print
[gjw@snoopy mozilla]$ find . -name '*porn*' -print
[gjw@snoopy mozilla]$ find . -name '*p0rn*' -print
[gjw@snoopy mozilla]$
Nice thought, but please verify rumors before spreading them.
Re:MS will exploit IE, and that will push users aw (Score:2)
Just curious here, but I'm left wondering what kinds of things you do at the triggering of a window.close() event? It seems to me to be pretty rare for even GUI apps to need this unless they're doing some kind of memory clean up. Browser based apps generally don't need this kind of thing, so I'm just left here curious as to the need still yet.
Kill the gifs! (Score:2)
Good riddance and good night.
Re:I've been Netscape free for a couple days now.. (Score:2)
Re:SSL = Bad (Score:2)
Am I the only person who things SSL is the most screwed up thing about this program?
No, you're not. Take a look at bug #60912 [mozilla.org] and bug #31174 [mozilla.org], for starters.
Re:mailll (Score:2)
setenv MAIL "xterm -e mutt"
Nice thought, but $MAIL is already used to point to your mailbox --
/var/spool/mail/username or /home/username/Maildir/ or what have you.
How about using $MUA for text MUAs like mutt, or... hmm... maybe $X11_MUA for X11-based MUAs like "xterm -e mutt"?
8/10ths, and I am sad (Score:2)
It's still got so many bugs. The text entry widget is broken. It kills Windows dead (real hard, I know). Changing the skin kills the menus (File and Edit works, everything after View doesn't). It crashed getting my POP email. On and on and on.
Did I get a bad build (build ID 2001021503)? Is my machine misconfigured? What the hell is going on?
I don't want to make this sound like a troll or flamebait. Its really not, in my mind. Its the plaintive wail of someone who has spent the past year or so trying to tell his co-workers, friends, and random people on the street to support this project, "It'll get better soon, honest". It is better now, to be sure. It hasn't crashed in the last 8 minutes or so its been running on this machine. Joy. It hasn't finished rendering the submit page, and for that matter it never seems to finish (looking at the stdout in the xterm above for the past few pages I've loaded).
Will 1.0 actually work? Is the emperor wearing clothes?
Funny how people do opposite of what Mozilla says (Score:2)
">Mozilla is my daily-use,pH-balanced web browser of late
Of course, the people at Mozilla were just saying don't set the version 0.8 as the default browser - perhaps they're simply recommending waiting a few builds for more bug fixes.
On another note, I was encouraged by this:
"Mozilla development work now is focused on bug squashing, improved stability, and better performance."
Nice to see that performance is being worked on - that is my main critism of most browsers for Linux (Konqueror is fast, but it wouldn't properly display a few pages for me in KDE 2.0.1, so I'll wait a few versions). For me, I'm still stuck with Netscrape 4.7 on both my Win comp and my Linux comp, because I want the same browser for both, yet reasonably full featured and psuedo-reliable (Netscape crashes maybe once every two weeks of hard use, not too bad for me).
Anyway, it's promising to see Mozilla and Konqueror coming along nicely. Good work.
Re:Mozilla is shit (Score:2)
--disable-debug
--disable-dtd-debug
--enable-strip-libs
--disable-mailnews (!!)
--disable-tests
--enable-optimize(=flag)
etc. etc. Don't tell me how big it is, use your options to make it smaller. You *do* have the source, after all.
Re:Wow... (Score:2)
Probably it could be done in less. Depends on how minimal you want to be. Most people would want it to have all the fancy do-dads like being able to download stuff.
Also the other poster is right when he says that gtk-mozembed doesn't have cookie support.
Re:MS will exploit IE, and that will push users aw (Score:2)
> let a user create my own buttons and customize
> their actions.
If you don't mind hacking XML/Javascript it's dead easy to do this in Mozilla right now, without having to recompile anything. Take a peek inside the JAR files you downloaded. (They're just ZIP files by another name.)
> Similarly, I liked the "Font Size" button they
> had in IE
There's a menu item
If Timothy is so opposed to animated GIFs... (Score:4)
Sorta funny to slam one of Slashdot's only revenue streams...
Re:MS will exploit IE, and that will push users aw (Score:2)
Yeah (Score:2)
The Microsoft optical 2 button + wheel mose is the only good M$ product I've ever encounterd. If they only made it in mac colors...
Re:Funny how people do opposite of what Mozilla sa (Score:2)
um. no the people at Mozilla did not say that. a guy from the webpages mozillaquest.com said that.
--Asa
Browser good, mailer bad... (Score:4)
The widgets for lists and trees are terrible in Mozilla (at least on Unix), and it really makes me wish that the Moz folks had decided to stay with Gtk+ for the toolkit, rather than rolling their own for the sake of portability.. I'm not sure they knew what they were getting into with a new toolkit, especially since they'll probably have to deal with the same things that the Pango [pango.org] folks are..
Anyway, back to my initial query -- what are people using instead? There have been a number of clients based on toolkits like Tk (blech) and even straight Athena widgets (triple blech). The nicer-looking clients (IMHO) seem to be all glam and no substance.. What's up with that?
If someone can find me a 3-pane Gtk+ or Gnome GUI client that is stable and that can handle PGP/GPG, I'd be forever grateful.
--
Re:8/10ths, and I am sad (Score:4)
I would suggest watching [mozillazine.org] and getting or not getting builds based on the excellent comments Asa puts up there.
I'm not sure why your text entry widget wasn't working; if you could file a bug report on it ( [mozilla.org]) that would be great. The menu bug is a very recent regression and is being worked on.
Re:mailll (Score:3)
That said, you are not the only one who wants this functionality. See [mozilla.org]
.
The discussion on that bug includes a way to fix it using Protozoilla. This fix is currently being considered for inclusion in the main source tree.
Re:Faster, Leaner, and Meaner? (Score:3)
Re:Browser good, mailer bad... (Score:4)
-Ben Goodger
-Netscape Navigator
mailll (Score:3)
<rant>.
why on earth does mozilla not let me link my <a href="mailto:blah"> to another app? this just seems absolutely stupid!
cause honestly, who would want to use mozilla mail when you could use something like pronto.
</rant>
im refering to the linux version, havent tried any other os's in a few years
"Who is General Failure and why is he reading my hard disk ?"
You know what would make me switch to Moz? (Score:3)
--
We are In an internet world with no borders nor fences, who needs windows and gates...
MS will exploit IE, and that will push users away (Score:3)
IE is a great browser, but it lacks some important features. It's hard to control javascript, for example, and you can't turn off animated gifs. I don't think that's accidental. If you let people turn off the ads, the advertisers won't be happy, and as a good multi-national corporate citizen, MS probably won't want to do anything to jeopardize the platform's value to advertisers.
There's no way (at least no easy way) to convert a real video file into something you can edit or recompress. Why? It's a feature that content providers want. To me it's a bug. I can understand Real doing that, and having a proprietary data format comes in handy.
More and more I think we're going to see these large companies deliberately crippling our tools for the benefit of content providers. But that only works with proprietary data formats and protocols. The web is still open.
The big story in advertising is pop-up windows. If Mozilla bills itself as the browser that helps you defeat that annoying ads, a lot of people will respond to it. And a lot of people will put up with annoying little errors as they get worked out, because the pop-up windows are incredibly annoying. MS isn't going to do that. They'll never side with their users over the content providers. That leaves a niche.
As for me, the ability to turn off animated gifs will be enough to make me switch. Those things really bug me.
All of these ads are going to get worse and worse. Mozilla should bill itself as the answer. It is the answer. And we need it.
Re:You know what would make me switch to Moz? (Score:3)
You can even disable opening new windows when a website specifies target="_blank" on a link.
Stuart.
Re:MS will exploit IE, and that will push users aw (Score:3)
For instance, why can't I bind a button to turn off animated gifs, cookies, and JavaScript? Microsoft considered making a similar button in IE, but stopped when people started calling it "The Porn Button". But if that's what users want, they should be able to do it.
The web is becoming overrun with proprietary data formats and protocols, but at least the open ones do get more popular. Notice the popularity of mp3's, Shockwave Flash, DivX-encoded movies, and mpegs. That's because there are at least players out there for everyone, and the tools aren't too hard to find.
Pop-up windows and banners don't necessarily work; web advertising needs a different model that doesn't involve annoying the consumer. Maybe product placement would work somewhat better, or text ads like Google, or little "sponsored by" buttons.
Personally, I use junkbuster to get rid of ads; it's also cross-platform, and cross-browser compatible, and works rather well.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Pop-up disabling now possible (Score:5)
Well, then it's time to switch to Moz. Quoting the 0.8 release notes [mozilla.org]:
There are several new hidden prefs (UI will be added eventually) to turn off various annoying features on web pages:
user_pref("capability.policy.popupsites.sites", "");
e rnal.open","noAccess");
user_pref("capability.policy.popupsites.windowint
user_pref("capability.policy.default.windowintern
user_pref("browser.target_new_blocked", true);
Cheers,
-j. | http://tech.slashdot.org/story/01/02/16/0254209/eight-tenths-of-a-lizard | CC-MAIN-2013-20 | refinedweb | 7,096 | 75.2 |
# Python3 program to calculate the
# sum of nodes at the maximum depth
# of a binary tree
# Helper function that allocates a
# new node with the given data and
# None left and right poers.
class newNode:
# Constructor to create a new node
def __init__(self, data):
self.data = data
self.left = None
self.right = None
# Function to return the sum
def SumAtMaxLevel(root):
# Map to store level wise sum.
mp = {}
# Queue for performing Level Order
# Traversal. First entry is the node
# and second entry is the level of
# this node.
q = []
# Root has level 0.
q.append([root, 0])
while (len(q)):
# Get the node from front
# of Queue.
temp = q[0]
q.pop(0)
# Get the depth of current node.
depth = temp[1]
# Add the value of this node in map.
if depth not in mp:
mp[depth] = 0
mp[depth] += (temp[0]).data
# append children of this node,
# with increasing the depth.
if (temp[0].left) :
q.append([temp[0].left,
depth + 1])
if (temp[0].right) :
q.append([temp[0].right,
depth + 1])
# return the max Depth sum.
return list(mp.values())[-1]
# Driver Code
if __name__ == ‘__main__’:
# Let us construct the Tree
# shown in the above figure
root = newNode(1)
root.left = newNode(2)
root.right = newNode(3)
root.left.left = newNode(4)
root.left.right = newNode(5)
root.right.left = newNode(6)
root.right.right = newNode(7)
print(SumAtMaxLevel(root))
# This code is contributed by
# Shubham Singh(SHUBHAMSINGH10)
22
- Largest value in each level of Binary Tree | Set-2 (Iterative Approach)
- Check for Symmetric Binary Tree (Iterative Approach)
- Iterative approach to check if a Binary Tree is Perfect
- Get level of a node in binary tree | iterative approach
- Iterative approach to check for children sum property in a Binary Tree
- Deepest right leaf node in a binary tree | Iterative approach
- Deepest left leaf node in a binary tree | iterative approach
- Construct Binary Tree from given Parent Array representation | Iterative Approach
- Count full nodes in a Binary tree (Iterative and Recursive)
- Count half nodes in a Binary tree (Iterative and Recursive)
- Iterative program to count leaf | https://www.geeksforgeeks.org/sum-of-nodes-at-maximum-depth-of-a-binary-tree-iterative-approach/ | CC-MAIN-2019-30 | refinedweb | 349 | 58.38 |
friendly.
It's been almost a year since I introduced you to Groovy with the article "Feeling Groovy" in the alt.lang.jre series. Since then, Groovy has matured quite a bit through a number of releases that have progressively addressed problems in the language implementation and feature requests from the developer community. Finally, Groovy took a gigantic leap this past April, with the formal release of a new parser aimed at standardizing the language as part of the JSR process.
In this month's installment of Practically Groovy, I'll celebrate the growth of Groovy by introducing you to the most important changes formalized by Groovy's nifty new parser; namely variable declarations and closures. Because I'll be comparing some of the new Groovy syntax to the classic syntax found in my first-ever article on Groovy, you may want to open up "Feeling Groovy" in a second browser window now.
Why change things?
If you've been following Groovy for any amount of time, whether you've been reading articles and blogs or writing code yourself, you may have gotten wind of one or two subtle issues with the language. When it came to clever operations such as object navigation, and particularly closures, Groovy suffered from occasional ambiguities and an arguably limiting syntax. Some months ago, as part of the JSR process, the Groovy team began working on resolving these issues. The solution, presented in April with the release of groovy-1.0-jsr-01, was an updated syntax and a new-syntax-savvy parser to standardize.
The good news is that the new syntax is chock full of enhancements to the language. The other good news is that it isn't that drastically different from the old. Like all of Groovy, the syntax was designed for a short learning curve and a big payoff.
Of course, the JSR-compliant parser has rendered some now "classic" syntax incompatible with the new Groovy. You can see this for yourself if you try running a code sample from an early article in this series with the new parser: It probably won't work! Now, this may seem a little strict -- especially for a language as freewheeling as Groovy -- but the point of the parser is to ensure the continued growth of Groovy as a standardized language for the Java platform. Think of it as a helpful tour guide to the new Groovy.
Hey, it's still Groovy!
Before getting too far into what's changed, I'll take a second to chat about what hasn't. First, the basic nature of dynamic typing hasn't changed. Explicit typing of variables (that is, declaring a variable as a String or Collection) is still optional. I'll discuss the one slight addition to this rule shortly.
String
Collection
Many will be relieved to know that semicolons are also still optional. Arguments were made for and against this syntactic leniency, but in the end the less-is-more crowd won the day. Bottom line: You are still free to use semicolons if you want to.
Collections have also stayed the same for the most part. You can still declare list-like collections using the array syntax and maps the same way you always have (that is,
the way you first learned in "Feeling Groovy"). Ranges, on the other hand, have changed slightly, as I'll soon demonstrate.
list
map
Finally, the Groovy additions to standard JDK classes haven't changed
a bit. Syntactic sugar and nifty APIs are intact, as in the case of normal-Java File types, which I'll show you later.
File
Variably variables
The rules on Groovy variables have probably taken the hardest hit with the
new JSR-compliant syntax. Classic Groovy was quite flexible (and indeed terse) when it came to variable declarations. With the new JSR Groovy, all variables
must be preceded with either the def keyword or a modifier such as private, protected, or public. Of course, you can always declare the variable type as well.
For example, when I introduced you to GroovyBeans in "Feeling Groovy," I defined a type called LavaLamp in that article's Listing 22. That class is no longer JSR compliant and will result in parser errors if you try to run it. Fortunately, migrating the class isn't hard: All I had to do is add the }"
myLamp = new LavaLamp()
myLamp.baseColor = "Silver"
myLamp.setLavaColor("Red")
println "My Lamp has a ${myLamp.baseColor} base"
println "My Lava is " + myLamp.getLavaColor()
Not so bad, right?
As described above, the def keyword is required for any variable that doesn't otherwise have a modifier,(" ")
}
}
"first name: " + fname + " last name: " + lname +
" age: " + age + " address: " + address +
" contact numbers: " + numstr.toString()
}
}
Recognize that code? It's borrowed from Listing 1 of "Stir some Groovy into your Java apps." In Listing 3, you can see the error message that will pop up if you try to run the code as is:
c:\dev\projects>groovy BusinessObjects.groovy
BusinessObjects.groovy: 13: The variable numstr is undefined in the current scope
@ line 13, column 4.
numstr = new StringBuffer()
^
1 Error
The solution, of course, is to add the def keyword to numstr in the
toString method. This rather deft solution is shown in Listing 4.
Closing in on closures
The syntax for closures has changed, but mostly only with regard to parameters. In classic Groovy, if you declared a parameter to your closure you had to use a | character for a separator. As you
probably know, | is also a bitwise operator in normal Java language; consequently, in Groovy, you couldn't use the | character unless you were in the context of a parameter declaration of a closure.
|
You saw classic Groovy parameter syntax for closures in Listing 21 of "Feeling Groovy," where I demonstrated iteration. As you'll recall, I utilized the find method on collections, which attempted to find the value 3. I passed in the parameter x, which represents the next value of the iterator (experienced Groovy developers will note that x is entirely optional and I could have referenced the implicit variable it). With JSR Groovy, I must drop the | and replace it with the Nice-ish -> separator, as shown in Listing 5 below:
find
x
iterator
it
->
[2, 4, 6, 8, 3].find { x ->
if (x == 3){
println "found ${x}"
}
}
Doesn't the newer closure syntax remind you of the Nice language's block syntax? If you are not familiar with the Nice language, check out Twice as Nice, another of my contributions to the alt.lang.jre series.
As I mentioned earlier, Groovy's JDK hasn't changed. But as you've just learned, closures have; therefore, the way you use those nifty APIs in Groovy's JDK have also changed, but just slightly. In Listing 6, you can see
how the changes impact Groovy IO; which is hardly at all:
import java.io.File
new File("maven.xml").eachLine{ line ->
println "read the following line -> " + line
}
Reworking filters
Now, I hate to make you skip around a lot, but remember how in "Ant Scripting with Groovy" I spent some time expounding on the power and utility of closures? Thankfully, much of what I did on the examples for that column is easy to rework for the new syntax. In Listing 7, I simply add")
So far so good, don't you think? The new Groovy syntax is quite easy to pick up!
Changes to ranges
Groovy's range syntax has changed ever so slightly. In classic Groovy, you could get away with using the ... syntax to denote exclusivity, that is, the upper bound. In JSR Groovy, you'll simply drop that last dot (.) and replace it with the intuitive < symbol.
...
.
<
Watch as I rework my range example from "Feeling Groovy" in Listing 8 below:
myRange = 29..<32
myInclusiveRange = 2..5
println myRange.size() // still prints 3
println myRange[0] // still prints 29
println myRange.contains(32) // still prints false
println myInclusiveRange.contains(5) // still prints true
Ambiguous, you say?
You may have noticed, while playing with Groovy, a subtle feature that lets you obtain a reference to a method and invoke that reference at will. Think of the method pointer as a short-hand convenience mechanism for invoking methods along an object graph. The interesting thing about method pointers is that their use can be an indication that the code violates the Law of Demeter.
"What's the Law of Demeter," you say? Using the motto Talk only to immediate friends, the Law of Demeter states that we should avoid invoking methods of an object that was returned by another object's method. For example,
if a Foo object exposed a Bar object's type, clients could access behavior of the Bar through the Foo. The result would be brittle code, because changes to one object would ripple through a graph.
Foo
Bar
Bar
A respected colleague wrote an excellent article entitled "The Paperboy, the Wallet, and the Law of Demeter" (see Resources). The examples in the article are written in the Java language; however, I've redefined them below using Groovy. In Listing 9, you can see how this code demonstrates the Law of Demeter -- and how it could be used to wreak havoc with people's wallets!
package com.vanward.groovy
import java.math.BigDecimal
class Customer {
@Property firstName
@Property lastName
@Property wallet
}
class Wallet {
@Property value;
def getTotalMoney() {
return value;
}
def setTotalMoney(newValue) {
value = newValue;
}
def addMoney(deposit) {
value = value.add(deposit)
}
def subtractMoney(debit) {
value = value.subtract(debit)
}
}
In Listing 9 there are two defined types -- a Customer and a Wallet. Notice how the Customer type exposes its own wallet instance. As previously stated, the code's naive exposures present issues. For example, what if I (as the original article's author did) added in an evil paperboy to ravage unsuspecting customer wallets? I've used Groovy's method pointers for just this nefarious purpose in Listing 10. Note how I am able to grab a reference to the subtractMoney method via an instance of Customer with Groovy's new & syntax for method pointers.
mymoney = victim.wallet.&subtractMoney
mymoney(new BigDecimal(2)) // "I want my 2 dollars!"
mymoney(new BigDecimal(25)) // "late fees!"
Now, don't get me wrong: Method pointers aren't meant for hacking into code or obtaining references to people's cash! Rather, a method pointer is a convenience mechanism. Method pointers are also great for reconnecting with your favorite 80s movies. They can't help you if you get those lovable cute furry things wet, though! In all seriousness, think of Groovy's println shortcut as an implicit method pointer to System.out.println.
println
System.out.println
If you were paying careful attention you will have noted that JSR Groovy requires me to use the new & syntax to create a pointer to the method subtractMoney. This addition, as you've probably guessed, clears up ambiguities in classic Groovy.
And here's something new!
It wouldn't be fun if there wasn't anything new in Groovy's JSR releases, would it? Thankfully, JSR Groovy has introduced the as keyword, which is a short-hand casting mechanism. This feature goes hand-in-hand with a new syntax for object creation, which makes it easy to create non-custom classes in Groovy with an array-like syntax. By non-custom, I mean classes found in the JDK such as Color, Point, File, etc.
as
Color
Point
In Listing 11, I've used the new syntax to create some simple types:
def nfile = ["c:/dev", "newfile.txt"] as File
def val = ["http", "", "/"] as URL
def ival = ["89.90"] as BigDecimal
println ival as Float
Note that I created a new File and URL, as well as a BigDecimal using the short-hand syntax, as well as how I was able to cast the BigDecimal type to a Float using as.? | http://www.ibm.com/developerworks/java/library/j-pg07195.html | crawl-001 | refinedweb | 1,985 | 63.29 |
Understand Genetic Algorithm with overfitting example
In this article, we have explored the ideas in genetic algorithms (like crossover, Simulated Binary Crossover, mutation, fitness, and much more) in depth by going through a genetic algorithm to reduce overfitting.
Given the coefficients of features corresponding to an overfit model, the task is to apply genetic algorithms in order to reduce the overfitting.
The overfit vector is as follows:
[0.0, 0.1240317450077846, -6.211941063144333, 0.04933903144709126, 0.03810848157715883, 8.132366097133624e-05, -6.018769160916912e-05, -1.251585565299179e-07, 3.484096383229681e-08, 4.1614924993407104e-11, -6.732420176902565e-12]
A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s theory of natural evolution.
The implementation is as follows:
- Initialize a population
- Determine fitness of population
- Until convergence repeat the following:
- Select parents from the population
- Crossover to generate children vectors
- Perform mutation on new population
- Calculate fitness for new population
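The loop above can be sketched end-to-end as follows. This is a minimal, self-contained sketch: the fitness here is a stand-in for the real train/validation error query, and the selection, crossover, and mutation steps are simplified placeholders for the versions described later.

```python
import numpy as np

POPULATION_SIZE = 10
VECTOR_SIZE = 11
GENERATIONS = 5

def calculate_fitness(population):
    # Stand-in fitness: the real project queries train/validation errors;
    # here the vector norm keeps the sketch runnable.
    return np.array([np.linalg.norm(v) for v in population])

def evolve(population):
    for _ in range(GENERATIONS):
        # Select parents: keep the best half as a mating pool
        fitness = calculate_fitness(population)
        pool = population[np.argsort(fitness)][:POPULATION_SIZE // 2]
        # Crossover: blend two randomly chosen parents per child
        children = np.array([
            (pool[np.random.randint(len(pool))] +
             pool[np.random.randint(len(pool))]) / 2
            for _ in range(POPULATION_SIZE)
        ])
        # Mutation: scale roughly 3/11 of the genes by a factor in (0.9, 1.1)
        mask = np.random.rand(*children.shape) < 3 / VECTOR_SIZE
        children[mask] *= np.random.uniform(0.9, 1.1, mask.sum())
        population = children
    return population

final = evolve(np.random.uniform(-10, 10, (POPULATION_SIZE, VECTOR_SIZE)))
print(final.shape)  # (10, 11)
```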
Each population contains multiple individuals, where each individual represents a point in the search space and a possible solution. Every individual is a vector of size `11` with floating point values in the range `[-10, 10]`.
The theoretical details of each stage are given here:
Selection
The idea is to give preference to the individuals with good fitness scores and allow them to pass their genes to the successive populations.
Crossover
This represents mating between individuals to generate new individuals. Two individuals are selected using the selection operator and combined in some way to generate children.
Mutations
The idea is to insert random genes into offspring to maintain diversity in the population and avoid premature convergence.
Algorithm And Code Explanation
The first population is created from an initial vector whose genes are all initialized to zero. Copies of this vector are made, which I then mutate to generate a population of size `POPULATION_SIZE`. For each vector, mutation is performed at every index with a probability of `3/11`; the value at that index is replaced with the overfit vector's value at that index, multiplied by a factor chosen uniformly from `(0.9, 1.1)`.
The fitness of the population is an arithmetic combination of the train error and the validation error.
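That initialization can be sketched as below. The `OVERFIT_VECTOR` values here are truncated stand-ins for the actual coefficients; the 3/11 probability and the (0.9, 1.1) factor follow the scheme just described.

```python
import random

VECTOR_SIZE = 11
POPULATION_SIZE = 10
MUTATION_PROB = 3 / VECTOR_SIZE

# Truncated stand-in for the actual overfit coefficients
OVERFIT_VECTOR = [0.0, 0.124, -6.21, 0.049, 0.038, 8.1e-05,
                  -6.0e-05, -1.25e-07, 3.5e-08, 4.2e-11, -6.7e-12]

def initial_population():
    population = []
    for _ in range(POPULATION_SIZE):
        vector = [0.0] * VECTOR_SIZE  # all genes start at zero
        for i in range(VECTOR_SIZE):
            # With probability 3/11, replace the gene with the overfit
            # value scaled by a factor drawn uniformly from (0.9, 1.1)
            if random.random() < MUTATION_PROB:
                vector[i] = OVERFIT_VECTOR[i] * random.uniform(0.9, 1.1)
        population.append(vector)
    return population

pop = initial_population()
print(len(pop), len(pop[0]))  # 10 11
```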
After the population is initialized, a mating pool (of size `MATING_POOL_SIZE`) is made, containing the best-fitness parents. The mating pool is selected from the population by sorting the population based on its fitness and then selecting the top `MATING_POOL_SIZE` vectors:
```python
def create_mating_pool(population_fitness):
    population_fitness = population_fitness[np.argsort(population_fitness[:, -1])]
    mating_pool = population_fitness[:MATING_POOL_SIZE]
    return mating_pool
```
Then parents are uniformly chosen from the mating pool as follows:
```python
parent1 = mating_pool[random.randint(0, MATING_POOL_SIZE - 1)]
parent2 = mating_pool[random.randint(0, MATING_POOL_SIZE - 1)]
```
A Simulated Binary Crossover is then performed on the parents to generate an offspring.
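The SBX routine itself is not reproduced in this section; a common formulation (with the distribution index `eta` as an assumed hyperparameter) looks like this:

```python
import random

def sbx_crossover(parent1, parent2, eta=3):
    """Simulated Binary Crossover: children are spread around the
    parents with a spread factor beta controlled by eta."""
    child1, child2 = [], []
    for p1, p2 in zip(parent1, parent2):
        u = random.random()
        if u <= 0.5:
            beta = (2 * u) ** (1 / (eta + 1))
        else:
            beta = (1 / (2 * (1 - u))) ** (1 / (eta + 1))
        child1.append(0.5 * ((1 + beta) * p1 + (1 - beta) * p2))
        child2.append(0.5 * ((1 - beta) * p1 + (1 + beta) * p2))
    return child1, child2

c1, c2 = sbx_crossover([1.0] * 11, [2.0] * 11)
# SBX preserves the per-gene mean: (c1[i] + c2[i]) / 2 equals (p1[i] + p2[i]) / 2
print(round((c1[0] + c2[0]) / 2, 6))  # 1.5
```

A small `eta` spreads children far from the parents (more exploration); a large `eta` keeps them close (more exploitation).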
This is followed by mutation of chromosomes, details of which are given here.
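The chromosome mutation is likewise not shown here; a minimal sketch consistent with the scheme above (each gene mutated with probability 3/11 by a multiplicative factor from (0.9, 1.1)) might be:

```python
import random

VECTOR_SIZE = 11
MUTATION_PROB = 3 / VECTOR_SIZE

def mutate(vector):
    # With probability 3/11 per gene, nudge it by a factor in (0.9, 1.1)
    mutated = list(vector)
    for i in range(VECTOR_SIZE):
        if random.random() < MUTATION_PROB:
            mutated[i] *= random.uniform(0.9, 1.1)
    return mutated

child = mutate([1.0] * VECTOR_SIZE)
print(len(child))  # 11
```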
The new population is created by choosing the top `X` children generated and the top `POPULATION_SIZE - X` parents:
```python
def new_generation(parents_fitness, children):
    children_fitness = calculate_fitness(children)
    parents_fitness = parents_fitness[:FROM_PARENTS]
    children_fitness = children_fitness[:(POPULATION_SIZE - FROM_PARENTS)]
    generation = np.concatenate((parents_fitness, children_fitness))
    generation = generation[np.argsort(generation[:, -1])]
    return generation
```
This process is repeated, and the values are stored in a JSON file, which I read back the next time I resume the run.
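The checkpointing can be as simple as dumping the current generation to disk. This is a sketch; the filename and the stored structure are assumptions, not the article's actual format.

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical filename

def save_generation(population, fitness):
    # Persist vectors and their fitness so a later run can resume
    with open(CHECKPOINT, "w") as f:
        json.dump({"population": population, "fitness": fitness}, f)

def load_generation():
    # Return the last saved state, or None on a fresh start
    if not os.path.exists(CHECKPOINT):
        return None
    with open(CHECKPOINT) as f:
        return json.load(f)

save_generation([[0.0] * 11], [600000.0])
state = load_generation()
print(len(state["population"][0]))  # 11
```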
As you can see the code is vectorized and completely modular as separate functions have been written for each significant step.
Iteration Diagrams

(Diagrams of the first, second, and third iterations of the algorithm.)
Initial-Population
At first, I used the overfit vector as the initial vector to create our initial population. However, even after running the GA many times with different variations, I got stuck at a local minimum (the errors stopped improving).
So I decided to bring in some randomization in our initial vector.
I tried many things, and at one point even initialized each gene of the vector with a random number between (-10, 10). However, I soon realized that this would not work, as the search space would become huge and the convergence time too large.
I had to apply heuristics to our choice of initial vector. I noticed that many genes were of lower orders such as 1e-6, 1e-7, ..., 1e-12. Hence, I initialized all genes of the first vector to 0 during our trial-and-error process.
I applied mutations to this vector that were equivalent to some factor multiplied by the overfit vector's value at that index. This would increase our probability of reaching a global minimum.
Voila!
Our heuristic worked: the fitness dropped to 600K in the very first run! When I started with the overfit vector, the lowest it reached was 1200K.
Fitness
In the fitness function, `get_errors` requests are sent to obtain the train error and validation error for every vector in the population. The fitness corresponding to that vector is calculated as the absolute value of `Train Error * Train Factor + Validation Error`:
```python
for i in range(POPULATION_SIZE):
    error = get_errors(SECRET_KEY, list(population[i]))
    fitness[i] = abs(error[0] * train_factor + error[1])
```
I changed the value of the train_factor from time to time, depending on what the train and validation errors were for that population. This helped us achieve a balance between the train and validation errors, which were very skewed in the overfit vector. I kept switching between the following 3 functions according to the requirement of the population (the reason for each function is mentioned below).
Train factor = 0.7
I used train_factor = 0.7 to get rid of the overfit. The initial vector had a very low train error and a higher validation error. With this fitness function, I gave less weight to the train error and more to the validation error, forcing the validation error to reduce while not allowing the train error to shoot up. I also changed it to 0.6 and 0.5 in between to get rid of the overfit faster. 0.7, however, worked best as it did not cause the train error to rise suddenly, unlike its lower values.
Train factor = 1
The fitness function now became a simple sum of train and validation error.
This was done when the train error became significantly large. At this point, I wanted to balance both errors and wanted them to reduce simultaneously so I set the fitness function as their simple sum.
Train factor = -1
This was done when the train error and the validation errors each had reduced greatly. However, the difference between them was still large, despite their sum being small.
So I made the fitness function the absolute difference between the train and validation error. This was done so that the errors would reach a similar value (and hence generalize well on an unseen dataset).
This function ensured the selection of vectors that showed the least variation of error between the training and validation sets.
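The three regimes above can be summarized in one small sketch; the error values in the assertions are made up purely for illustration:

```python
def fitness(train_error, validation_error, train_factor):
    # train_factor = 0.7  -> down-weight train error while escaping the overfit
    # train_factor = 1.0  -> plain sum of the two errors
    # train_factor = -1.0 -> |validation - train|, rewarding balanced errors
    return abs(train_error * train_factor + validation_error)

assert fitness(100, 200, 1.0) == 300    # simple sum
assert fitness(100, 200, -1.0) == 100   # absolute difference
```

Lower fitness is better in all three regimes, so the same selection and sorting code works unchanged as train_factor varies.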
Crossover
Single point crossover
Initially, I implemented a simple single point crossover where the first parent was copied till a certain index, and the remaining was copied from the second parent.
However, this offered very little variations as the genes were copied directly from either parent. I read research papers and found out about a better technique (described below) that I finally used.
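For comparison, the discarded single point crossover looks roughly like this (the exact crossover point selection is an assumption):

```python
import random

def single_point_crossover(parent1, parent2, point=None):
    # Genes up to `point` come from one parent, the rest from the other.
    if point is None:
        point = random.randrange(1, len(parent1))
    child1 = list(parent1[:point]) + list(parent2[point:])
    child2 = list(parent2[:point]) + list(parent1[point:])
    return child1, child2

c1, c2 = single_point_crossover([1, 1, 1, 1], [2, 2, 2, 2], point=2)
# Genes are copied verbatim, so no value outside the parents can ever appear.
```

That verbatim copying is exactly the weakness described above: the children can only recombine existing gene values, never produce new ones.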
Simulated-Binary-Crossover
The entire idea behind simulated binary crossover is to generate two children from two parents satisfying the equations below, while controlling the variation between the parents and children using the distribution index value.
The crossover is done by choosing a random number $u$ in the range $[0, 1)$. The distribution index $\eta_c$ is assigned its value and then $\beta$ is calculated as follows:

$$\beta = \begin{cases} (2u)^{\frac{1}{\eta_c + 1}}, & u < 0.5 \\ \left(\dfrac{1}{2(1-u)}\right)^{\frac{1}{\eta_c + 1}}, & \text{otherwise} \end{cases}$$

The distribution index determines how far the children go from the parents: the greater its value, the closer the children are to the parents. It is a value in $[2, 5]$, and the offspring are calculated as follows:

$$c_1 = 0.5\,[(1+\beta)\,p_1 + (1-\beta)\,p_2], \qquad c_2 = 0.5\,[(1-\beta)\,p_1 + (1+\beta)\,p_2]$$
The code is as shown:
```python
def crossover(parent1, parent2):
    u = random.random()
    n_c = 3  # distribution index
    if u < 0.5:
        beta = (2 * u) ** ((n_c + 1) ** -1)
    else:
        beta = ((2 * (1 - u)) ** -1) ** ((n_c + 1) ** -1)
    parent1 = np.array(parent1)
    parent2 = np.array(parent2)
    child1 = 0.5 * ((1 + beta) * parent1 + (1 - beta) * parent2)
    child2 = 0.5 * ((1 - beta) * parent1 + (1 + beta) * parent2)
    return child1, child2
```
I varied the Distribution Index value depending on the population.
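One handy sanity check on this operator: for any sampled β, the two children average to the parents' average, so the crossover is mean preserving. A self-contained check (repeating the function so it runs on its own):

```python
import random
import numpy as np

def crossover(parent1, parent2):
    u = random.random()
    n_c = 3
    if u < 0.5:
        beta = (2 * u) ** ((n_c + 1) ** -1)
    else:
        beta = ((2 * (1 - u)) ** -1) ** ((n_c + 1) ** -1)
    parent1, parent2 = np.array(parent1), np.array(parent2)
    child1 = 0.5 * ((1 + beta) * parent1 + (1 - beta) * parent2)
    child2 = 0.5 * ((1 - beta) * parent1 + (1 + beta) * parent2)
    return child1, child2

p1 = np.array([1.0, 2.0, 3.0])
p2 = np.array([4.0, 5.0, 6.0])
c1, c2 = crossover(p1, p2)
assert np.allclose(c1 + c2, p1 + p2)  # the (1+beta) and (1-beta) terms cancel
```

This is why SBX explores around the parents without drifting the population's center of mass, unlike mutation, which is free to shift it.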
Mutation
Our mutations are probabilistic in nature. At every index of the vector, a mutation is performed with a probability of 3/11.
I scale the value at an index by a factor chosen randomly from (0.95, 1.05), iff the value after scaling is within the valid (-10, 10) range. The following code does that:
```python
for i in range(VECTOR_SIZE):
    mutation_prob = random.randint(0, 10)
    if mutation_prob < 3:
        vary = 1 + random.uniform(-0.05, 0.05)
        rem = child[i] * vary
        if abs(rem) <= 10:
            child[i] = rem
```
I chose to scale by a value close to 1 as random mutations such as setting an index to any value between (-10, 10) was not working well. I theorize this is because the overfit vector is close to good results but has just overfit on the training data. Thus, tweaking it slightly gives us improved results.
However, I did change these mutations as per the trend of the previous populations. If I observed that the errors were not reducing significantly over generations, I increased the mutations to a scaling factor between (0.9, 1.1), and even (0.3, 1.7), and other variations in the middle. I would experimentally observe which helped us get out of a local minimum.
Sometimes, I even decreased the mutations further, to reach a finer granularity of our genes. I did this when I was confident I had good vectors that needed more fine tuning. I set the scaling factor between (0.997, 1.003).
I also applied other heuristics to it that you can read here.
Hyperparameters
Population size
The
POPULATION_SIZE parameter is set to 30.
We initially started out with
POPULATION_SIZE = 100 as we wanted to have sufficient variations in our code. But we realized that was slowing down the GA and wasting a lot of requests, as most of the vectors in a population went unused. Through trial and error we found 30 to be the optimal population size, at which the diversity of the population was still maintained.
Mating pool size
The
MATING_POOL_SIZE variable is changed between values of 10 and 20.
We sort the parents by fitness value and choose the top X (where X varies between 10 and 20 as we set it) for the mating pool.
In case we get lucky, or we observe that only a few vectors of the population have a good fitness value, we decrease the mating pool size so that we can limit the children to being formed from these high-fitness chromosomes. Also, when we observe that our GA is performing well and do not want unnecessary variations, we limit the mating pool size.
When we find our vectors to be stagnating, we increase the mating pool size so that more variations are included in the population as the set of possible parents increases.
Number of parents passed down to new generation
We varied this variable from 5 to 15.
We kept the value small when we were just starting out and were reliant on more variations in the children to get out of the overfit. We did not want to forcefully bring down more parents as that would waste a considerable size of the new population.
When we were unsure of how our mutations and crossover were performing or when we would change their parameters, we would increase this variable to 15. We did this so that even if things go wrong, our good vectors are still retained in the new generations. This would save us the labour of manually deleting generations in case things go wrong as if there is no improvement by our changes, the best parents from above generations would still be retained and used for mating.
Distribution index (Crossover point)
This parameter was applied in the
Simulated Binary Crossover. It determines how far children go from parents. The greater its value the closer the children are to parents. It varies from
2 to 5.
We changed the Distribution Index value depending on our need. When we felt our vectors were stagnating and needed variation, we changed the value to 2, so that the children would have significant variations from their parents. When we saw the errors decreasing steadily, we kept the index as 5 so that children would be similar to the parent population, and not too far away.
Mutation Range
We varied our mutation range drastically throughout the assignment.
We made the variation as little as a factor between (0.997, 1.003) for when we had to fine tune our vectors. We did this when we were confident we had a good vector and excessive mutations were not helping, so we tried small variations to extract its best features.
When our vectors would stagnate and reach a local minima - we would mutate extensively to get out of the minima. The factor for multiplication could vary anywhere from (0.9, 1.1) to (0.3, 1.7).
We would even assign random numbers at times to make drastic changes when no improvement was shown by the vectors. We did this initially when we first ran the 0 vector and it helped us achieve good results.
Exact details as to how we did this can be found in the Mutation section.
Number of iterations to converge
Approach 1
It took 250 generations to converge when we tried with the overfit vector. The total error reduced to 1.31 million. However, we restarted after reinitializing the initial vectors as all 0s because we could not get out of this local minimum.
Approach 2 (after restart)
It took 130 generations to converge. Generation 130 had validation errors of 225-230K and train errors around 245K for most of the vectors.
At this point the GA had converged and we had to do fine tuning to reduce the errors further. The error decreased very slowly after this point, and we finally brought the validation error down to 210K and the train error to 239K.
Heuristics
Initial vector: After almost 11 days of the assignment, our train and validation error were still at ~600K each. Initializing all genes to 0 (reason described above) reduced each error to about 300K. This was the most important heuristic we applied.
Probabilistic mutation: Earlier we were mutating one index only, but we changed our code to mutate each index with a probability of 3/11. This brought more variation into the genes and worked well for our populations.
Varying mutations for different indices: We noticed as we ran our GA that the values at indices 0 and 1 seemed to vary drastically, ranging much beyond what was given in the overfit vector (unlike the other indices). Hence, we increased the mutations at both of these indices so as to obtain a larger search space for them. Also, during our initial runs with the 0 vector and trial and error, we saw that indices 5 to 11 were taking on very low values, so we kept their scaling factor between (0, 1) to allow for more variations. Indices 1 to 4 had larger values in the overfit vector, so we kept their scaling factor close to 1. These were just some modifications we kept experimenting with as we ran our code.
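A hedged sketch of such index dependent mutation; the per-index ranges below are illustrative guesses loosely following the description above, not the submitted code:

```python
import random

VECTOR_SIZE = 11

def scale_range(i):
    # Illustrative ranges only, loosely following the text above.
    if i in (0, 1):        # these indices varied drastically: search wider
        return (0.5, 1.5)
    elif i <= 4:           # larger-valued genes: stay close to 1
        return (0.95, 1.05)
    else:                  # tiny genes: allow shrinking toward 0
        return (0.0, 1.0)

def mutate(child, prob=3 / 11, seed=0):
    rng = random.Random(seed)
    out = list(child)
    for i in range(len(out)):
        if rng.random() < prob:
            lo, hi = scale_range(i)
            candidate = out[i] * rng.uniform(lo, hi)
            if abs(candidate) <= 10:  # keep genes in the valid range
                out[i] = candidate
    return out
```

Keeping the per-index ranges in one helper makes this kind of experimentation cheap: changing how a single index mutates is a one-line edit.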
Variations in fitness function, mating pool size, population size are also heuristics that we applied as the algorithm and the code could not detect when these changes were required. We had to manually study our population and see the impact of these variations and accordingly modify them.
Trace
trace.json contains the trace of the output for 10 generations.
The format is as follows,
```
{
    "Trace": [
        {
            "Generation": <Generation number>,
            "Population": [[]],
            "Details": [
                {
                    "Child Number": <Index of child>,
                    "Parent One": <First parent>,
                    "Parent Two": <Second parent>,
                    "After Crossover": <Child generated by crossover>,
                    "After Mutation": <Child after mutation>
                }
            ]
        }
    ]
}
```
As 8 parents are brought down to the new generation, the last 8 children (when sorted by fitness) are not included in the new population for the next generation.
Potential vector
Generation: 185,
Vector: [
0.0,
0.10185624018224995,
0.004818839766729392,
0.04465639285456346,
-2.987380627931603e-10,
3.817368728403525e-06,
1.2630601687494884e-12,
-7.311457739334194e-09,
-2.168308617195888e-12,
3.5200888153516045e-12,
1.4159573224642667e-15
]
Train Error: 245541.30284140716,
Validation Error: 214556.20713485157,
Fitness: 410989.24940797733
I believe this could be the vector on the test server as this is a vector from one of our last populations and has the best validation error I ever saw. It also has a low train error meaning it generalizes well. It is one of the vectors I submitted on the day I got the improved result.
With this article at OpenGenus, you must have a good idea of Genetic Algorithms. Enjoy. | https://iq.opengenus.org/genetic-algorithm-overfitting/ | CC-MAIN-2021-43 | refinedweb | 2,851 | 54.12 |
User talk:Walter
From OLPC
Please see my talk page on the Sugar Labs wiki.
Request for admin rights...
Hi, I'm Alejandro Sánchez, Asmarin user of this wiki.
I'm the system administrator of the EL project, a fork of es.wikipedia.
I'm translating the most important articles into Spanish and sometimes I need to delete bot vandalism and so on...
Can you make me admin privileges to clean and maintain the wiki?
I enter every day and can do this "job".
I'm waiting for your reply. ;-D
--Asmarin 11:51, 21 April 2006 (EDT)
Ditto
I'm not around every day, but would be glad to help with page deletion and bot blocking. Sj 04:04, 2 May 2006 (EDT)
Anti-spam measures
You should definitely turn on captchas for anonymous edits; a reason to upgrade to Mediawiki 1.6.x
You might further consider throttling the edit-rate for newbies and anonymous users... there is some hackable functionality for this built into MediaWiki. Sj
Navigation
Can you please add the table of contents to the navigation table/block that appears on the left of every page? This seems to be the most frustrating thing that's missing from some wikis. For me anyway. :) jkinz
- Excellent idea. I've started. Any suggestions as to what I should include? Walter 17:56, 31 May 2006 (EDT)
Content links
See Template:Content-list; also linked to from the Content ideas page.
Kudos
Kudos for your diplomatic handling of the "safety concerns" about the design of the rabbit ears. Of all the dangerous things faced on a regular basis by the children for whom the laptop is designed, I think falling on the rabbit ears of an olpc laptop is a risk they can live with. --HSTutorials 16:29, 25 July 2006 (EDT)
Sj 16:05, 27 June 2006 (EDT)
Walter, Kudos on Build 303. It's very nice. -Jeff 17:44, 13 March 2007 (EDT)
Wikipedia need an image :)
Hi! I'm writing an article about Wikipedia for Polish Press. I wrote something about collaboration olpc with Wikimedia Foudation. That article with all images is on public domain licence. So, I need an image of that laptop :) with tag {{PD}}, ({{PD-self}} - will be the best ;) I found on commonswiki some images, but only with cc. That article will be promotion of pl wiki and olpc too, I think. You will find me here on pl wiki:
and this article is here:
Information about olpc ends it.
Thanks for all what you can do, and sorry for language mistakes ;)
With regards Przykuta 05:14, 14 September 2006 (EDT)
- I can take some photos and put them in the public domain over the next few days, but I am curious as to why CC-A is not adequate for your needs. --Walter 23:06, 14 September 2006 (EDT)
nasa color tool
Hey, check it out!. --Jacobolus 02:39, 1 November 2006 (EST)
Avoiding Spam
Darn it! I hate spam. I've noticed the last wave (I usually check the recent changes page) and was wondering how to make it harder for the buggers... something should be done (or attempted at least).
Is there an easy way to report it? I mean, many don't have the (sensitive) rollback ability, but I would like maybe a simple way to report it so that an administrator could act on it without having to hunt it down. Editing the page to remove the spam instead of doing a rollback is an extra edition that just pollutes the page's history and doesn't contribute anything (except removing the offending spam).
On a more 'aggressive' stance, is it my impression or spammers are more active during weekends? If so (which I haven't really bothered to check) maybe the wiki could be configured so that only people logged in could change pages during weekends? Or do some validation so that no more than a given percentage of the page could be replaced?
An arms race is the last thing we should embark on, but...--Xavi 13:50, 10 December 2006 (EST)
- Maybe we should just force everyone to login to edit pages--not my first choice, but it will slow them down somewhat. I'd be nice if MediaWiki had a rollback for all changes a spammer made in one step: something I might write if I ever have some spare time. --Walter 15:19, 10 December 2006 (EST)
- I'm constantly logged in, so I couldn't care less, but others may/will think differently. Tagging an user/IP as a spammer and semi-automatically reverting all changes would be a good administrator tool. I have no idea how that could be accomplished in WikiMedia (I'll start reading). Still, it would be nice to have a 'report as spam' or similar so that the community could easily flag things for the administrators... BTW, 'spare time' sounds like an oxymoron for you guys... :P --Xavi 15:46, 10 December 2006 (EST)
- How about something as simple as a page which we watch that anyone can post a spam alert to? It could be include a link to the spammer's contribution page, which is the easiest place to do rollbacks from. Maybe SJ has some additional ideas. --Walter 15:54, 10 December 2006 (EST)
Other approaches
How about we call it quits and start funding schools? Just a thought. --Nelson Mandella 01:55, 25 December 2006 (EST)
- I am all for "funding schools", but history and experience has shown that that approach is slow and only reaches a limited number of children. We believe that the laptop program will reach more children faster and these children will not be limited by the resources the schools will be able to offer them. But by all means, you work on helping children you way and we will continue to work on our approach. Let's hope we a re both successful. --Walter 07:09, 25 December 2006 (EST)
Correction to your edit
Your editing was not grammatical I think:
In January 2005 the MIT Media Lab launched a new research initiative to develop a $100 laptop—a technology that could revolutionize how we educate the world's children. To achieve this goal, a new, non-profit association, One Laptop per Child (OLPC), was been created, which is independent of MIT.
should be. --Tonyv 12:30, 29 December 2006 (EST)
Hardware folks
Walter, who is on hardware, specifically power supply/battery/generation? Tom Haws 13:18, 5 January 2007 (EST)
Hebrew translation
There is a Hebrew translation for the laptop.org previous site. Zvi Devir 08:58, 13 March 2007 (EDT)
Bakersdz and Hunter
Users Bakersdz and Hunter are acting really odd. I'm not even 100% sure they aren't bots using the wiki (rather than IRC) for some odd purpose. Nothing they write makes any sense. Perhaps they are the same person. You might want to investigate this a bit. AlbertCahalan 12:58, 15 March 2007 (EDT)
Site copy
Hi, we should add a set of content sections to match the other sections in User:Felice/website_copy.1... and update them in depth along with the rest of the site. Sj talk 19:02, 18 March 2007 (EDT)
New colors
Sounds and looks nice. I'll bolden the original & diff links on the right of the
Template:Translated Page Template:Translation, as they fade a bit.
I like the idea that {{OLPC}} maps to the 'laptop icon color', and that the translations maps to the 'participate icon color' (nice touch for the community translators :) Something could be thought out for the remaining icon colors... new 'vision' template? And 'content' with the XO child color? ;) --Xavi 10:39, 21 March 2007 (EDT)
- Yes, very nice. I like the idea of magenta for 'content'... perhaps we could tag the visionary and inspirational pages with orange, I wonder if we could deal with so much of it in a broad bar.
Walter: we might also change the default blue and orange in mediawiki to match those colors (for links and for active tabs along the top)... even the 'you have new messages' orange could change. Sj talk 10:44, 21 March 2007 (EDT)
base for POs
Currently the pages PO-laptop.org-top-level-en-US and such are being redirected to User:Felice/website copy top level... maybe they should be 'swapped'? Have no idea what happens when moving unto an existing page... will do some tests elsewhere and see. --Xavi 17:36, 1 April 2007 (EDT)
translation of news
Hi! I just finished translating the 'working news' section of the News page into spanish News/lang-es—I wanted to hook that up before jumping on the rest of it. But, when I was going to add the Translations template to the english page I realized its protected status...
In itself (the protection) is not much of an issue as the News/translations page is handled outside of it—so other translators can add their versions without hassle. My doubt / question is if that page (News) is of interest to be translated, and if so, what (if any) protection should be done in the translation(s)... --Xavi 16:07, 7 April 2007 (EDT)
- I am not sure we need to keep it protected. Probably just the {{OLPC}} tag at the top is enough...
Latest Release Template
Congrats! You managed to edit Template:Latest Releases' blob! :) I was thinking that the blob is way too nasty for easy maintenance, and was trying to find a way to make the updates simpler (instead of dealing with the blob itself). One alternative is to have other templates (but the updating could turn into a treasure hunt amongst nesting templates). Another alternative is to have simple/plain subpages with the links. Example:
Template:Latest Releases/stable [] Template:Latest Releases/livecd [] Template:Latest Releases/firmware [[OLPC Firmware q2c14]]
And add a couple of edit links in the template page and maybe in the rendered template (ie: a +/- link like for translations).
Or should I revert the whole thing and forget about the template? --Xavi 17:14, 27 May 2007 (EDT)
OLPC Nigeria/Galadima
Oops.. .sorry... didn't see you were editing Galadima... I was about to add the {{Schools-nav}} template... --Xavi 13:08, 10 June 2007 (EDT)
Suggest wiki sidebar item "Learning parables" be replaced with Educators
The wiki sidebar section "about the laptops" currently has an item "Learning parables" linked to Learning Learning. Perhaps this was once a major article for the wiki. But now, given the other and growing content, it seems an inappropriate link. It gives an impression of site immaturity. I've added a link to the page from Constructionist. I suggest it be removed from the sidebar.
The sidebar doesn't currently have a clear entry point for teachers. Content comes closest. I suggest a link to Educators be added. This would help visiting teachers get oriented and involved, by providing easy access to a page tailored to their interests. There is a link "content" to Educators in Home. But there is a lot of text above it, and not everyone keeps reading a page when earlier material isn't what they are looking for. And it's easier to tell someone "see educators on the sidebar" than "enter 'educators' in search", or "look for a link labeled 'content' on the top page" (hmm, the last has the additional difficulty that there are two "content" links to different pages). "For educators" might be nice link text.
Feel free to remove this section once you've seen it. MitchellNCharity 08:32, 16 June 2007 (EDT)
Dear Walter
Hi.
May I know roughly when it is possible for non-programmer volunteers like me to translate SUGAR UI and other key Activities into Korean?
And, a Korean Squeak user named Kim Seung-Beum (김 승범) has translated Squeak and Daemon's Castle into Korean. Could you let me know how to include Korean version of Squeak, Etoy, and Daemon's Castle into XO laptop? sincerelyphp5 10:46, 16 June 2007 (EDT)
- I don't know much about eToys localization, but I am sure the eToys community can help you. We are just in the process of sorting out the details re internationalization details. I'll post things as soon as I know. --Walter 16:21, 16 June 2007 (EDT)
- Dear Walter. From several months ago, I have planned an OLPC presentation tour in Korea to meet various teachers, professors, governmental officers, civil organizations, and so on..please persuade OLPC people send me THE 10(ten) B4 machines which I have long long long waited. I hope your help. php5 21:49, 2 July 2007 (EDT)
Jejudo Local Government in Korea, one of 16 local governments in Korea
Its Office of Education has shown very strong interest in the OLPC project during an unofficial meeting last week, and they surely hoped to participate in. I believe the other 15 local governments in Korea will show the same reaction as it.
I'll meet a few officers of Ministry of Education of Korea, and some political parties Friday this week, and am sure there will be more than 1 million orders from the Korean government at this very moment. However, as I am just an OLPC volunteer, I strongly recommend OLPC to contact the Korean government directly.
BTW, because currently only a few Korean people have ever heard about OLPC, I'm preparing a few news articles and XO laptops reviews in collaboration with some renowned Korean newspapers and broadcasting channels. Those articles will be publicized or broadcasted next week. What do you think about including Korea as a pilot country, green zone
Please let me know any help you need for the OLPC project; a few volunteers including me are fully prepared to any requests from OLPC php5 23:17, 23 July 2007 (EDT)
- Dear
- Yesterday, I met a few governmental officers in Ministry of Education, Korea. Their reaction to my B3 machine was What a supprise!!. They promised to deploy XO laptops for Korean children, but currently all their educational plans are based on MS Windows. So they need some time to adjust their plans, and worried about possible maintenance problems; repairing XOs, network management, and server architectures etc. There are about 4,150,000 primary school students in Korea.
- Seoul, the capital of Korea, is an hour distance from my home town via airline, Jejudo Island. So, I can contact them any time without any burden. Today, I'll meet some renowned journals in Korea to introduce them XO laptops. Though currently only a few people have ever heard about OLPC, millions of Korean people will become acquainted with OLPC exactly in one or two weeks. Next week, I will visit 16 local Office of Educations in Korea one by one to explain what OLPC project is as much as, and as exactly as I understand.
- I'm sure that, if OLPC contacts Korean government (Ministry of Education) directly, there must be more than 4,000,000 laptop order without any hesitation, and it must be helpful for OLPC initial production.
- Contact to Ministry of Education: Lim Kwang-Bin (General Officer of Educational Adminstration) kbim@moe.go.kr
- Please let me know if you need additional information or request for me to do. Sincrely yours php5 19:24, 26 July 2007 (EDT)
Russian translation of laptop.org and country status
Hi! Translated laptop.org to Russian, would it be possible to have a preview of translated site to check correctness in context? Also, was there a reason to mark Russia green in wiki? Maxim 16:08, 28 July 2007 (EDT)
Hi Walter! Please update Russian translation of laptop.org with latest wiki. Also I notice that points to - could you change it to? In general, languages page seems to be inconsistent with English version...
Hello again! should be updated from I guess. Maxim 01:44, 29 September 2007 (EDT)
Hello Walter! Please update Russian translation of laptop.org from wiki PO files. Thanks! Maxim 05:49, 28 October 2007 (EDT)
- Looks Ok. One minor - from link to givemany.shtml points to English version of page, should point to Maxim 05:34, 29 October 2007 (EDT)
Hi Walter. Please update the Russian translation of the site from I have done some minor edits and added strings on G1 G1. Maxim has approved the changes 8-) - --Biarm 16:18, 20 November 2007 (EST)
Hi, a new version of is ready. Please update - --Biarm 15:24, 21 November 2007 (EST)
Hi, is updated and ready for upload - --Biarm 11:55, 22 November 2007 (EST)
Hi, files are updated and ready for upload. That completes the new update of the Russian version of the Web site 8-) --Biarm 13:00, 23 November 2007 (EST)
Image:Galadima4.jpg and Image:Galadima3.jpg
Why are these so tiny? Don't people make normal-sized drawings? AlbertCahalan 12:00, 13 August 2007 (EDT)
RE: XO: The Children's Machine
follow-up to User talk:Xavi#XO: The Children's Machine
- I didn't find anything wrong with how things were, you seem to have managed the moving just fine. Maybe I missed some of your edits due to the vandal...
- I totally agree that it's a bit messy when moving a page that has translations attached as sub-pages (as the move only takes care of the talk page). But regardless of being sub-page or full-page, they all had to be tweaked to point to the new 'source' page... coincidentally I had been playing in the Template:Sandbox to avoid the need to declare the source (as it would be the 'super' or 'upper' page), in which case having the translations as sub-pages is an advantage.
- A page I've found helpful in these situations is the Special:Allpages to actually review all pages in a branch. Another handy page is the Special:DoubleRedirects. Cheers! :) Xavi 20:27, 26 August 2007 (EDT)
letters of intent
Youth_Energy_Initiative wants a non-binding letter of intent with us re: educating the youth of the world about how to be green. We should have a standard letter that we can send out once we agree to share focus with an outreach or creative group. Sj talk 14:11, 12 September 2007 (EDT)
In xogiving.org/faq.html, a questionable answer to "What can a $1,000 laptop computer do that the XO Laptop cannot?"
Hi. currently says:
What can a $1,000 laptop computer do that the XO Laptop cannot? Not much. In fact, the XO laptop can do many things that a $1000 dollar laptop can not...
I strongly suggest this needs to change.
A random ~$900 laptop [1] is >3x faster, with 4x the memory, 120x the disk, and vastly faster graphics. What can a $1,000 laptop computer do that the XO Laptop cannot? A very great many things.
Perhaps what was intended was something like "the things we think important, and thus designed the xo to do, are done by the xo as well as a $1000 laptop can do them". Or at least will, when we get more time to tune and optimize. Though even that's not true, as many things we think important (eg, touch screen, more power), and would have liked to include, and can be done by some ~$1k laptop, just didn't fit the mission envelope and price point. And regardless, that's not what the answer currently says.
What can a $1000 laptop do that my digital watch cannot? Or the rock I use as a door stop? Not much? That's true in some sense, but it's certainly not a scientifically honest response, and perhaps not even a legally honest one.
Even with our very best efforts at expectation management, explaining what the XO is not, there will be people who feel they made a mistake, or that we have misled them, and not provided what we implied. Here, we are not even trying to communicate an accurate picture.
Perhaps something like: ...on the market today. That comes at a cost. The XO is less powerful than laptops have been for several years. It is more like a powerful handheld. It will not run normal PC software. And its innovative software is unfinished, and is still being written. But we think what it does offer, at $200, is worth this sacrifice.
It's one thing to picture geeks understanding what this is, and being able to afford it. But my bus driver today was wondering if she should invest in one for her daughter. I suspect such people may far out number the geeks. And at present, I suggest we are doing them a disservice. MitchellNCharity 00:29, 29 September 2007 (EDT)
VMWare todos
Talk:Emulating_the_XO#VMWare_TODO Sj talk 16:48, 11 October 2007 (EDT)
Pootle & PO-laptop
We are about to start final integration & testing of Pootle and although the drive is geared for activities, I was wondering what do you think of switching PO-laptop.org files also to it. --Xavi 09:56, 16 October 2007 (EDT)
- I've never used poodle, but it sounds like a good idea. But there is no rush, as there is so much else on everyone's plate right now. --Walter 10:23, 16 October 2007 (EDT)
xogiving feedback
Just to make sure someone actually sees this, I drop it here. Feel free to garbage collect once seen. MitchellNCharity 13:35, 18 October 2007 (EDT)
rollback
yes... I've been stung by that critter more than once: rollback is just one edition (no matter what amount of diffs you are comparing). Would that be a bug in mediawiki? this wiki? or a feature? --Xavi 13:57, 18 October 2007 (EDT)
- If only we didn't need to use it so often!! I was in a hurry and I always have fat thumbs. --Walter 17:00, 18 October 2007 (EDT)
Inaccessible link in 2007-10-20 community news
The 2007-10-20 community news says "By going to [1] you will see all of the peripheral ideas and are welcome to post any new ideas you may have.", with a link of . Howerver, that link yields "Login Required", and after a delay, bounces you to "Main Page - TeamWiki". This is perhaps not what was intended. MitchellNCharity 21:09, 21 October 2007 (EDT)
- Please go to the public peripherals page.
wikinews questions for you...
User_talk:Brianmc#Nicholas_Negroponte_-_Questions_about_OLPC Sj talk 09:17, 2 November 2007 (EDT)
dialog design
file open dialog comparison from zdenek. Sj talk
Implementation of Website Translation
Hi Walter, I have been working on translating laptop.org into Bulgarian. I am almost finished now and am interested in the process for taking the translation "live". Here are the current files. There is one to go - the more technical "laptop" list. Regards, --Cryout 10:28, 5 November 2007 (EST)
- I'll put the site up tomorrow and once you sign off, I'll link it to the main site. Thanks --Walter 20:41, 5 November 2007 (EST)
Front page on Wikinews
Hi Walter! As a prelude to hopefully interviewing Mr Negroponte Wikinews was covering the recent developments in the project, see right now for a story on your project as the main lead. I don't know where on the laptop.org wiki to promote something like this, the dream-come-true would be actually get one of the teachers from the Indian pilot to comment on our have your say pages. If you can also pass on to Carla that she is featured/heavily quoted in this piece I would appreciate it. I know Wikinews is supposed to be neutral, but it relies on contributors writing about what they're interested in. OLPC has enough interested people to have great coverage. --Brianmc 16:33, 8 November 2007 (EST)
Request for higher-level review of this wiki's "Ask a Question" page
Wonderful news these days. Now that shipping is a reality, I was wondering if you could direct some attention to Ask_OLPC_a_Question. While it's true that many folks affiliated with OLPC have been helping to maintain the page and answer questions, such as SJ and James Cameron, and I've even answered such questions as I could, there are a lot of questions that have just been left flapping in the breeze. In many cases this seems to be because only you and the other OLPC senior staff would be in a position to make an authoritative answer. Thanks, ~ Hexagonal 09:57, 11 December 2007 (EST)
- Yoiks!!! My beautiful include scheme - didn't last. My idea was to use #noinclude on the subpages to hide 90% of the answered questions from the main page. That part of my idea was actually pretty easy to work, it just took some periodic edits of the subpages. The problem was a mediawiki bug that made the 'edit' link send you to the wrong subsection of the included pages... with some time debugging it would have been easy to find a workaround...
- Now that the page is not in the sidebar anymore, it may not need such an involved structure. However, I think that for whatever page is intended the main interface for incoming questions - whether it's this one or another one - some structure like this is needed. Your solution is IMO both too draconian - in that it relegates past questions to unreachable hidden subpages - and too shortsighted - in that the page (or whatever the main questions page is now that the sidebar doesn't include it) will soon devolve again into a short foreword followed by a huge pile of uncategorized questions.
- I would be willing to try to fix the include scheme up better, if you want me to. That includes: restoring the structure; making sure that the right things are shown and hidden in each section, without recategorizing between sections; finding a workaround for the misaimed 'edit' links; and rewording the wikisource comments to try to make things more maintainable, with maybe even a short 'how to clean up this page' comment on the talk page. But it's your call, you're obviously in charge and anyway I'm not going to get into an edit war over my attempts at cleanup. So let me know. Homunq 13:08, 28 December 2007 (EST)
- I didn't destroy your beautiful scheme; I just hid it by removing the {{: }} links from the Ask OLPC a Question page. But the schema was not scaling: the page had become impossible to navigate. I think that a scaled back version (hiding 99%) would help. Also, I made some more fine-grained categories to reflect the needs of XO owners, now that non-developers have machines. (I tried to follow your structure, but I haven't edited the pages themselves yet to create the /summary pages as per your structure. As far as removing it from the sidebar, I am not sure who did it, but I would argue we should restore it. Maybe we can talk through a plan? --Walter 13:18, 28 December 2007 (EST)
- I'm on vacation with my family this week, so I can make time anytime, but the best time would be afternoons when my daughter is napping. How about 3 PM mountain (GMT-7) (I'm in Nayarit)? Plus or minus an hour. You name the day and the channel, and use the wiki to email me.
- I didn't say it was you who destroyed my scheme - I suspect you never even saw it in its glory, when there were only a few questions in each section. I had the noincludes spanning much broader swaths of the subsections, I don't know who changed that. But I suspect whoever did it didn't even understand exactly what they were doing... which is why we need clearer instructions. And as for: the /summary pages, that was somebody before me, I think it was xavi; the sidebar, I agree that the 'help using' and 'support' links should be above the FAQ, but I think the FAQ belongs somewhere there. Anyway, we'll hash it out later. Homunq 21:09, 1 January 2008 (EST)
- Sorry I missed your message. I suspect it may be too late for now - I'm going to be in transit back home from tomorrow until Wednesday. After that, I will get on skype - I'm Roqge.Chqema.e.Ixcqhel (remove the q's). ps. if you want to IRC tonight late, I may pull an all-nighter. Homunq 17:37, 5 January 2008 (EST)
- Where, when? #olpc or #sugar or what? Use the send email link on this wiki to contact me. Homunq 22:43, 5 January 2008 (EST)
View: Friends or Group, Mesh or Neighborhood ?
Walter,
I've been discovering a confusion of function key names throughout laptop.org and laplop.org/wiki documentation.
For example,
The first key (aka F1), is sometimes referred as 'Mesh View' or 'Neighborhood View'. (See or)
The second key (aka F2), is sometimes referred as 'Friends View' or 'Group View'. (see or )
Is there a transition going on from the former to the later? Or is one the 'technical' label, and the other a 'user' label?
If I see one, as I am editing wiki pages, should I change it to the other? Thanks, --ixo 00:35, 5 January 2008 (EST)
- We have not been very consistent here. I think the consensus is that we should be using the terms Neighborhood and Group. I thought I caught them all on the laptop.org site--apparently not--and I've only just begun sorting it out in the wiki. Please do make changes in the wiki if you find them. Thanks. --Walter 08:37, 5 January 2008 (EST)
Updating wiki pages, any priorities?
Walter, I'm quickly absorbing the OLPC wiki site, and now watching about 120 pages. Contributing were I can, and as I do research into questions asked.
After seeing your note above, do you have any other preference or priority of areas which I could be most helpful in updating and/or maintaining? Thanks, --IainD 13:24, 5 January 2008 (EST)
- I think the user-facing pages need the most TLC. We've until recently been completely focused on supporting the developer community needs. Maybe contact Adam Holt for his suggestions as well? Thanks --Walter 13:48, 5 January 2008 (EST)
G1G1 Canada updates?
Hi Walter, I appreciate all you've done to communicate the problems with the G1G1 program, but Canadian donors are still being left in the dark here. Are there any updates on how or why there has been such an extended delay in shipping to Canada?
Thanks! Pouderkirk 18:04, 7 January 2008 (EST)
- Thanks very much for responding so quickly, I look forward to your news. -- Pouderkirk 20:15, 7 January 2008 (EST)
Wiki News
Walter,
I've been overwhelmed trying to fish through the Wiki Recent Changes page, and trying to keep abreast about what's new or updated in OLPC wiki.
So I was brainstorming about an idea.
- OLPC:Wiki_News - The Wiki news and summaries of significant changes and updates.
What do you think, would people use it, if it was kept fairly fresh? --IainD 18:32, 12 January 2008 (EST)
- Nice idea. I could certainly summarize the changes I track; I'll encourage others to do the same. --Walter 19:24, 12 January 2008 (EST)
File:Holy Quran.pdf
Any reason why this is here? ffm 23:43, 19 January 2008 (EST)
- It was requested by someone in Pakistan as a compelling ebook demonstration. --Walter 01:52, 20 January 2008 (EST)
- Any problem if I move it to File:Quran.pdf? ffm 12:34, 20 January 2008 (EST)
- No problem. Maybe it should be in the media namespace as well. --Walter 19:00, 20 January 2008 (EST)
- Moved to File:Quran.pdf. Wait, we have a media namespace? ffm 10:18, 21 January 2008 (EST)
- SJ moved the xo bundles to the Media namespace, but I don't know where it lives: it doesn't show up in the All Pages list. --Walter 10:26, 21 January 2008 (EST)
- Media: redirects to the Image: namespace. This is a hack - a 'pseudonamespace', used mainly to let people say Media:Quran.pdf to insert a link-to-media without attempting to transclude the media in question and without adding an extra : at the start of the wikilink. (The hack persists perhaps because it should be "Media" in that it is used for all sorts of files and media, and "Image" hangs on for historical reasons) --Sj leave me a message 12:02, 21 January 2008 (EST)
MAss XO group
Walter, at our last meeting you had mentioned the possibility of the Mass XO Users Group meeting at 1CC (after business hours). It looks like we are meeting on the 26th. Is the invitation still open?
--AuntiMame 13:16, 18 Feb 2008 (EST)
- That should work. There is a chance I will be in Peru, but likely I will be around. If I cannot be there, I'll find someone else to host. What time do you meet and how many people do you expect? --Walter 15:11, 18 February 2008 (EST)
Fixes on PO-laptop.org-gettingstarted-es
Hi Walter i did some small fixes on this .PO (typos)
laptop.org page updates ?
Walter,
I stumbled across
from wiki page 'Map'.
Doesn't quite fit the laptop.org site layout standards, needs a little updating... :)
--ixo 22:13, 2 March 2008 (EST)
- Yeah. I need to update the maps in the wiki and laptop.org sites first and foremost... Dropped the ball on that one. What we really need is a Google Map showing where the laptops are being deployed. CScott has the beginnings of such a display. --Walter 22:24, 2 March 2008 (EST)
Libros de Perú
v.11 of this bundle is the one that you want. All collections, both the monolithic one and the individual ones, now include both doc and html forms, so you can choose whether you want to skim a book yourself quickly or share it. I need to test it on the build you have; currently testing other localization bits. --Sj talk 17:48, 4 March 2008 (EST)
Deployment Guide
Per your invitation on the main page I did a read-through and mark-up on the deployment guide. Overall, it should be very useful to capture your lessons-to-date. All edits are well-intentioned, but revert as you see fit. Cjl 18:36, 21 March 2008 (EDT)
Collections
Model bundles should all provide a few canonical things : icon, help, screenshot, metadata file. At some point I'll add these into the default display template for project bundles, to give a more immediate sense of what it is like to use. --Sj talk 14:47, 24 March 2008 (EDT)
Elements
This is going to turn into something lovely. --Sj talk 16:09, 24 March 2008 (EDT)
Fun pilot deployment news
Neat photos: the Illinois School for the Deaf and their XO pilot (in Jacksonville, IL) was featured on myjournalcourier - read the article and see some photos of the children with their laptops. Also, OLPC_Chicago/IL_Children's_Low_Cost_Laptop_Fund has the OLPC Chicago community insanely excited. Mchua 16:20, 25 March 2008 (EDT)
Devanagari/Nepali
Apologies for the mis-tagging of the Devangari keyboard, CharlesMerriam and I have been discussing the nomenclature/categorization of keyboard pages over on his talk page. Please chime in if you care to. Cjl 18:05, 6 April 2008 (EDT)
- No problem. Obviously, in retrospect, it was confusing, so your efforts to disambiguate are very much appreciated. --Walter 07:16, 7 April 2008 (EDT)
I've put up what I hope is a very clear and specific proposal on keyboard page naming here. There are a few of the specific change suggestions that I'm sure you could improve on (or clarify), but it's a start. Cjl 01:53, 11 April 2008 (EDT)
HP
An AP article for your news round-up. HP unveils small laptop for schoolkids Cjl 13:25, 8 April 2008 (EDT)
X++
If there is interest to use the X++ compiler I would try to make a version available for OLPC. A problem is that I don't like to put it under an open source license. Is it conceivable that the OLPC project might support a commercial distribution for the benefit of Unicef? (I'd like to try that anyway) --fasten 13:08, 22 July 2009 (UTC)
- I am no longer at OLPC, so I cannot speak for them. Perhaps talk to Chuck Kane (chuck AT laptop DOT org). --18.85.22.201 13:18, 22 July 2009 (UTC)
- Let me rephrase that: Is it conceivable that the Sugarlabs project might support a commercial distribution for the benefit of Unicef? I might be interested to sell a modified Sugar on QNX with added Java support in Germany (e.g. to or through klett.de, cornelsen.de and/or westermann.de). --fasten 14:37, 22 July 2009 (UTC)
- Sugar Labs would consider the inclusion of LGPL code certainly (or perhaps some other commercially compatible FOSS license. There is no restriction on your selling Sugar or modifying it, as long as you are abiding by the GPL. You can add things on top of Sugar as you wish. --209.6.218.51 20:25, 22 July 2009 (UTC)
bizarre
I can't figure why you're being targeted by this aggressive vandal. Seems to be young [though it seems like a single person for some time]. We haven't had any big edit wars here, and Hunter was the only partial troll, who never seemed... like a bot user. We just improved our group permissions settings and use of ConfirmEdit; that should minimize this particular attack. --Sj talk
Happy Boxing Day!
That too :-) --Sj talk 14:58, 26 December 2009 (UTC) | http://wiki.laptop.org/go/User_talk:Walter | CC-MAIN-2015-40 | refinedweb | 6,352 | 72.46 |
Basis for the code:
Using a .txt file with names, grades (assignments & tests), and attendance, determine average (then pass/fail), and determine pass/fail based on attendance.
Anything under a 60 = fail.
Attendance under 30=fail.
Here's what I have:
Code:
//This program averages grades and determines pass/fail.
//It also determines pass/fail through attendance
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
ifstream inFile;
int first, last, a1, a2, a3, a4, t1, t2, attendance;
double test, assignment, average;
inFile.open("d:\\grades.txt"); //Using d: because I have no a: drive
cout <<"Reading Data from file."<<endl;
//Average first set of grades
inFile >> first;
inFile >> last;
inFile >> a1;
inFile >> a2;
inFile >> a3;
inFile >> a4;
inFile >> t1;
inFile >> t2;
inFile >> attendance;
assignment = (a1+a2+a3+a4)/4;
test = (t1+t2)/2;
average = (assignment + test)/2;
cout <<first<<last<<"average is: "<<average<<endl;
if (average < 60)
cout<<"\nStudent has failed to meet grade requirements.";
if (average >= 60)
cout<<"\nStudent has passed the grade requirements.";
if (attendance < 30)
cout<<"\nStudent has failed due to poor attendance.";
if (attendance >=30)
cout<<"\nStudent has passed the attendance requirement.";
//Close the file
inFile.close();
cout<<"\nDone.\n";
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/76297-noob-need-little-help-opening-file-manipulating-data-printable-thread.html | CC-MAIN-2015-48 | refinedweb | 197 | 58.28 |
> A suggestion for the JavaDoc-generated documentation: Use JavaDoc's > "-use" option. It generates cross-reference pages that can be quite > useful. I updated CVS to use that, though the published javadoc hasn't yet been updated. Even though some of the other doclets can't handle that flag, that doesn't seem like a reason not to have the widely-used HTML version leverage that feature ... :) FYI, the SAX sources are now mostly in sync with the latest at the site, which includes many javadoc tweaks as well as a few behavioral fixes. (A few namespace bugs fixed, and unused memory is more aggressively made GC-able.) - Dave | https://lists.gnu.org/archive/html/classpathx-xml/2001-08/msg00002.html | CC-MAIN-2022-21 | refinedweb | 109 | 62.88 |
Hi all, The are two large changes that are coming to the public WebGL implementations fairly soon. They involve enabling mandatory GLSL ES shader validation, and a change to the texImage2D API that takes a DOM object for data. The bad news is that these changes will likely break all existing WebGL content. The good news is that you can test code with these changes today, and that it's possible to write code such that it will work both before and after the changes. == GLSL ES Shader Validation == The first change, mandatory GLSL ES shader validation. Thanks to Google & Transgaming's efforts, the ANGLE project now implements a full GLSL ES shader parser, and performs the necessary translation to desktop GLSL. The three public implementations are all using the same shader valiator, and it's already been integrated in such a way that it can be enabled for testing. To do so: In Minefield/Firefox: - open about:config - change "webgl.shader_validator" from false to true In Chromium: - pass --enable-glsl-translator on the command line In WebKit: - instructions coming soon :) The biggest impact of this change is that fragment shaders will have a mandatory precision specifier, as per the GLSL ES spec. However, the precision specifier is invalid in desktop GLSL without a #version pragma. So, a fix that will enable fragment shaders to work both with and without validation is to place: #ifdef GL_ES precision highp float; #endif at the start of every fragment shader. The GL_ES define is not present on desktop, so that line is ignored; however it is present when running under validation, because the translator implements GLSL ES. Note that the precision qualifiers will have no effect on the desktop (I believe they're just ignored by the translator), but may have an impact on mobile. Note that the #ifdefs should be considered temporary here, as the correct/valid shader must include a precision qualifier. 
If you wish you can also explicitly specify qualifiers on every float variable. Another common problem is implicit type casting, as no such feature exists in GLSL ES. Most commonly, floating point constants must be specified as floating point. For example, the following will fail: float a = 1.0 * 2; vec3 v2 = v * 2; The 2 must be written as "2.0" or float(2). The shader validator will also impose additional spec issues that are sometimes ignored or relaxed by drivers. The shader compilation log should provide the necessary information to fix these. Should you run into GLSL ES programs that you believe are correct but that the shader validator is either rejecting or translating incorrectly, please report these! ==. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl@khronos.org. To unsubscribe, send an email to majordomo@khronos.org with the following command in the body of your email: | https://www.khronos.org/webgl/public-mailing-list/archives/1007/msg00034.php | CC-MAIN-2015-48 | refinedweb | 469 | 62.27 |
1 /*2 * Copyright 1999-2004 The Apache Software Foundation */17 18 package org.apache.sandesha.samples;19 20 import java.util.HashMap ;21 import java.util.Map ;22 23 /**24 * This is the service that is used for the interop testing. Two operations, ping and echoString25 * are defined as per the interop scenarios.26 *27 * @author Jaliya Ekanayake28 */29 public class RMSampleService {30 private static Map sequences = new HashMap ();31 32 public String echoString(String text, String sequence) {33 34 if (sequences.get(sequence) != null) {35 text = (String ) sequences.get(sequence) + text;36 sequences.put(sequence, new String (text));37 } else {38 sequences.put(sequence, (new String (text)));39 40 }41 return text;42 }43 44 public void ping(String text) {45 //Just accept the message and do some processing46 }47 }48
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/sandesha/samples/RMSampleService.java.htm | CC-MAIN-2017-30 | refinedweb | 147 | 68.57 |
A full event display calendar for the Quasar framework that has multiple viewing formats.
An event display calendar for the Quasar framework.
Formerly known as Quasar Calendar, Daykeep Calendar for Quasar is a Quasar-flavored Vue.js calendar component.
You can see a demo of the calendar components with event data at:
Daykeep Calendar for Quasar demo
Version 1.0.x of Daykeep Calendar is intended to be used with Quasar Framework v1. For legacy versions of Quasar, you should use v0.3.x of Quasar Calendar.
yarn add @daykeep/calendar-quasar
Add Daykeep Calendar to your .vue page similar to a Quasar component
import { DaykeepCalendar } from '@daykeep/calendar-quasar'
or import individual components
import { DaykeepCalendarMonth, DaykeepCalendarAgenda, DaykeepCalendarMultiDay } from '@daykeep/calendar-quasar'
In your template, you can just put in a calendar viewer using the current date as the start date
Or you can pass in parameters to customizefield will override the timezone.would fire on the global event bus when a calendar event is clicked on.
By default we use our own event detail popup when an event is clicked. You can override this and use your own by doing a few things:
So to implement, be sure to have
prevent-event-detailand
event-refset when you embed a calendar component:
And then somewhere be sure to be listening for a click event on that calendar:
this.$root.$on( 'click-event-MYCALENDAR', function (eventDetailObject) { // do something here } )
Starting with v0.3 we are setting up the framework to allow for editing individual events. By default this functionality is turned off, but you can pass a value of
trueinto the
allow-editingparameter on one of the main calendar components. The functionality if very limited to start but we expect to be adding more features in the near future.
When an event is edited, a global event bus message in the format of
update-event-MYCALENDARis sent with the updated event information as the payload. You can listen for this to trigger a call to whatever API you are using for calendar communication. Right now when an update is detected the passed in
eventArrayarray is updated and the array is parsed again.
Only a subset of fields are currently editable:
The
DaykeepCalendarMonthcomponent triggers a "click-day-{eventRef}" event when a calendar cell is clicked. The event data is an object describing the day, with a
day,
month, and
yearproperty each set to the appropriate value for the selected day.
So for acomponent with a "MYCALENDAR"
event-ref:
js this.$root.$on( 'click-day-MYCALENDAR', function (day) { // do something here } )
The usable components of
DaykeepCalendar,
DaykeepCalendarMonth,
DaykeepCalendarMultiDayand
DaykeepCalendarAgendashare the following properties:
| Vue Property | Type | Description | | --- | --- | --- | |
start-date| JavaScript Date or Luxon DateTime | A JavaScript Date or Luxon DateTime object that passes in a starting display date for the calendar to display. | |
sunday-first-day-of-week| Boolean | If true this will force month and week calendars to start on a Sunday instead of the standard Monday. | |
calendar-locale| String | A string setting the locale. We use the Luxon package for this and they describe how to set this here. This will default to the user's system setting. | |
calendar-timezone| String | Manually set the timezone for the calendar. Many strings can be passed in including
UTCor any valid IANA zone. This is better explained here. | |
event-ref| String | Give the calendar component a custom name so that events triggered on the global event bus can be watched. | |
prevent-event-detail| Boolean | Prevent the default event detail popup from appearing when an event is clicked in a calendar. | |
allow-editing| Boolean | Allows for individual events to be edited. See the editing section. | |
render-html| Boolean | Event descriptions render HTML tags and provide a WYSIWYG editor when editing. No HTML validation is performed so be sure to pass the data passed in does not present a security threat. | |
day-display-start-hour| Number| Will scroll to a defined start hour when a day / multi-day component is rendered. Pass in the hour of the day from 0-23, the default being
7. Current has no effect on the
CalendarAgendacomponent. |
In addition, each individual components have the following properties:
| Vue Property | Type | Description | | --- | --- | --- | |
tab-labels| Object | Passing in an object with strings that will override the labels for the different calendar components. Set variables for
month,
week,
threeDay,
dayand
agenda. Eventually we will replace this with language files and will use the
calendar-localesetting. |
| Vue Property | Type | Description | | --- | --- | --- | |
num-days| Number | The number of days the multi-day calendar. A value of
1will change the header to be more appropriate for a single day. | |
nav-days| Number | This is how many days the previous / next navigation buttons will jump. | |
force-start-of-week| Boolean | Default is
false. This is appropriate if you have a week display (7 days) that you want to always start on the first day of the week. | |
day-cell-height| Number | Default is
5. How high in units (units defined below) an hour should be. | |
day-cell-height-unit| String | Default is
rem. When combined with the
day-cell-heightabove, this will determine the CSS-based height of an hour in a day. | |
show-half-hours| Boolean | Default is
false. Show ticks and labels for half hour segments. |
| Vue Property | Type | Description | | --- | --- | --- | |
num-days| Number | The number of days to initially display and also the number of additional days to load up when the user scrolls to the bottom of the agenda. | |
scroll-height| String | Defaults to
200px, this is meant to define the size of the "block" style. | | https://xscode.com/stormseed/daykeep-calendar-quasar | CC-MAIN-2021-21 | refinedweb | 928 | 55.95 |
Red Hat Bugzilla – Bug 102501
(ACPI) Yet another ACPI hang at boot
Last modified: 2013-07-02 22:14:52 EDT
Description of problem:
Booting installer with kernel-BOOT-2.4.21-20.1.2024.2.1.nptl.i386.rpm
on Compaq (HP) W8000 Bios 1.18 (will attach sysreport) I receive the following
error (screenshot will be attached for more verbosity):
ACPI-0183: *** Error: Looking up [\_SB_.PCI0.LPC_.ECP0] in namespace, AE_NOT_FOUND
ACPI-1121: *** Error: , AE_NOT_FOUND
At this point it's done.
The install *does* proceed normally if I use 'boot: linux acpi=off' to start it.
Created attachment 93676 [details]
sysreport output
Created attachment 93677 [details]
Screenshot (via camera)
Please attach the output of 'acpidmp' as well. (Hm, perhaps acpidmp should be
added to sysreport?)
Any chance to get dmesg through serial port?
If I can scare up a null-modem, sure. Should have one around here someplace.
Created attachment 93991 [details]
Booting the installer with serial capture.
Here you go.
Created attachment 93992 [details]
Same machine with acpi=off
For comparison sake, serial capture of -BOOT kernel with acpi=off
If you really need the acpidmp, let me know, as the machine currently is the
box I use day to day with Taroon installed.
Created attachment 94020 [details]
acpidmp output
I found another box to run Taroon, here's the acpidmp from the W8000.
ACPI-0183: *** Error: Looking up [\_SB_.PCI0.LPC_.ECP0] in namespace,
AE_NOT_FOUND
ACPI-1121: *** Error: , AE_NOT_FOUND
but in fact, '\_SB_.PCI0.LPC_.ECP0' is defined in a SSDT.
I investigated the SSDTs of the issue, and found that one SSDT has dependence
with other SSDT. This could be the root reason. According to ACPI spec(p114),
the dependence is wrong, but we can solve it. In fact, I found kernel 2.4.22-
rc2 has already such resolution(the resolution can only solve the situation
that A is depend on B when B is loaded prior to A, it can't work for other
situations) for the problem. can you try the kernel 2.4.22-rc2?
well, DSDT defined '\_SB_.PCI0.LPC', but SSDTs use '\_SB_.PCI0.LPC_'. maybe we
also need to fix the DSDT.
The latest beta kernel-2.4.22-20.1.2024.2.36.nptlsmp boots without adding acpi=off.
Hurrah!
yes,kernel-2.4.22-20.1.2024.2.36.nptlsmp includes what I have said. But I
still think It's not the best solution. If SSDT A depends on SSDT B and B
depends on A, though it's invalid to ACPI spec, what should we do?
Sorry, I can't help you there. I am the reporter of the original problem, and do
not understand the inner workings of ACPI. I do thank you for your assistance,
however. | https://bugzilla.redhat.com/show_bug.cgi?id=102501 | CC-MAIN-2018-22 | refinedweb | 465 | 68.16 |
Can you become an Elden Lord?
Take your first step into the Lands Between, a special place where mythical creatures live.

Read the Chronicles that reveal the fate of worlds, and then take on dungeon quests to pursue your goals, reach them, and surpass them!
You’ll have to accept your fate to begin the story of the Lands Between, and take your first step into a world that is only possible in a game.
Special Features
• An Epic Drama
A story written by a legendary author who previously wrote the Legend of Mana series.
• An Extensive Adventure
Explore a wide variety of areas and dungeons through a story written by a world-famous author.
• A Multilayered Story
A high-resolution graphics engine, a fun and dynamic UI that makes game-play smooth, and various effects make this a faithful reproduction of a fantasy RPG.
• Many Classes
Vary your character's build to play differently in accordance with your class.
• Unique Online Play
By “linking” into online play, you will participate in the quests of others.
• Protects Your Data
Your game data will be saved automatically.
■Online Features
• An Errant Mode exclusive to the Elden Ring game
Explore with new comrades in a new world, while training and evolving your character.
• An Asynchronous Collaborative Online Play
Play with friends who are not in the same region, even when they are not online at the same time. Meet each other and form parties in real time.
• The Adventurer mode can be played with others.
• A Toy Box mode is included.
• You can play using your own account.
■Interface
• Player and Character Info
This is where you can view your current status, summon your demons, and check your inventory.
• Your equipment and accessory items can be viewed as well.
• The Bag of Trinkets
An item that you can carry in battle. You can equip it for a different effect.
• Experience/Ability Points
Character levels and their relevant stats are shown here.
• The character’s equip list
The equip list, which you can browse to equip or unequip equipment.
• Monster Info
Infants, monsters with different HPs, and the strength and weakness of the monsters.
• Status Info
The status of your party, yourself, and the monsters you have recently defeated.
Elden Ring Key Features:
• 1 Story Mode with up to 4 hours of gameplay per level
• 1 Main Story that progresses with many escalations and twists, and 1 sub-story featuring various mini events with a variety of actions
• Up to 99 hours of gameplay (3*60)
• Has a distance restriction; one to three players who are on the same server, and up to four players on different servers
• Battle against other online players (in the same server) in online battles, and against enemies from the map in arenas
• Includes online matchmaking, PvP battle and ranking system, as well as PvP rankings, so that even if you battle against the same character, you can still appreciate the high level of game skills
• Dungeons: Local PvP vs Computer Dungeon
• Can be played using the Local Battle feature
• Action RPG of Multiplayer Online battle
• The Local Battle system allows you to enter into vs. matches using your own combat style
• Includes up to 5 players per battle
• Can be played in 3 different ways: Vs. in the lobby, Battle Online and TeamBattle
• Features PvP battles, Ranked battles, TeamBattle, statistics of wins/losses for every battle, and a multiplayer ranking system
• Through the peer-to-peer game-connection feature called the Network, no geographical restrictions on game-play
• You can host Online Battle in your own server and continually release free updates for the game
• Delivers a unique game experience that cannot be realized with online game-play.
• Each online match will end with a unique scenario that rivals the work of a professional scenario writer
• Pet: allows you to fight against other players or monsters from your house
• Assembled map: allows you to fight against other players or monsters in a variety of areas with a designer’s focus on detailed terrain
• All-Cast map: is a large, extraordinarily detailed map that shows the wide, beautiful and desolate world of The Lands Between
• The Wall map function allows you to fight several battles at once and enjoy battles more through the variety of walls
And remember,
Elden Ring Crack + Serial Number Full Torrent Download
## 1/2
“I loved the story, and it was beautifully presented.
The game plays like an interactive movie in a way. Each time the story unfolds, you get thrown into a new environment. Unlike other fantasy RPG’s, you are actively involved in the combat because you actually get to direct your actions through your in-game character. There is never a feeling that you are doing things by mistake, because the game always asks if you are sure you want to do that.
The music enhances the immersive feel of the game, and the character voice acting is great. Both of these factors play a role in the game being such a strong experience.”
## 00
“In this game you get to control your character and set his/her path. The game also has a story with a very entertaining storyline.
In my opinion, this is the best RPG’s available right now.”
## 00
“I found it to be interesting. The weapons and armor were pretty average. I would have liked it if they incorporated abilities other than just attack powers like magic. I enjoyed playing as the character that I controlled.
I haven’t read the entire book yet, but I should because I really enjoyed reading it.
I gave the game a 5 because there were a lot of things I enjoyed about the game and it was quite fun.”
## 00
“Overall, it is very entertaining. It is not an RPG, but I had fun playing the game.
It took me nearly the entire day to finish it. But overall, it is worth it. The art, music, voice acting, graphics, and especially the story make this game worth it.
I would have liked to see more variety in weapons and armor and maybe a bit of variety in the combat system. I did find the map to be kind of odd.
Overall, I recommend this to anyone.”
## 00
“I think this is a great game. It reminds me of something out of a movie.
The characters are very memorable and the gameplay is very well implemented.
I especially liked the story and the graphics and animations.”
## 00
“I can certainly understand why this game is in second place. This game definitely deserves to be a part of our list.
The story is amazing and the characters are exceptionally well developed.
I just wish that the combat system was a bit more complex
bff6bb2d33
Elden Ring Crack + Free (April-2022)
【1】Customize Your Character
Select the character you would like to use:
4 body parts (skin, hair, face, and body).
Character Size (main character size).
Hands (hand shape and size).
Hands (hand shape and color).
Proper and Improved Attire (proper appearance and attire for your character.)
Animations (Swim, Climbing, and Dungeoneering)
String Likeness (Yield, Handshape, and other attribute value by hand shape)
【2】Equip Customizable Weapons
Choose weapons that are appropriate for your character based on your play style:
A sword that cuts and strikes.
A sword with a two-handed grip.
A one-handed sword.
Magic Weapon to use in combat.
A fierce staff to strike enemies with multiple times.
Light to less powerful magic.
Weak to more powerful magic.
【3】Possess An Unparalleled Graphics
Apparels, magic, and the Lands Between are presented in stunning 3D graphics.
【4】Play the Epic Episodes in the World Between
If you start your experience in the Strange Lands and then save your progress, you can play a new episode in the world between by replaying it again.
【5】Battle Monsters in the World Between
Dragon Lord – A powerful monster in the world between that guards a huge stone.
The Green Dragon – An ancient, powerful monster that existed before the world.
A powerful beast – A fierce monster in the world between.
A powerful dragon – A vast dragon in the world between.
【6】The World Between Mystery Puzzle Dungeon
Solve the mysteries of the lands between and many secrets await.
【7】Invite Friends to Share Your Story in the Land Between
Equip the world between to your friends who have not purchased the game.
【8】Discover the Real World Beyond the Lands Between
A world-filling epic story of the mysterious world between, where new secrets await.
【9】Demo of a new experience with multiplayer, connected to the real world
Live the online aspect of the game and explore the unknown world with friends and strangers.
* A representative sample of items will be available at the launch of the game.
* Games can not be transferred to other devices
What’s new in Elden Ring:
An amazing story based on a myth, but they didn’t tell us any of it. Never say Never.
The New Fantasy Action RPG For Android
p.A. Intelligent Controls and Deep Customization
In order to obtain high combo and a score (and to master the game), you have to accurately control the position and the timing of your attacks.
o Extremely wide visual range in all positions
o Strategic work as a whole
• Advanced Content
In addition to learning the game’s basic system, you can enjoy the game’s content, such as the skill descriptions and the character enhancement system.
The New Fantasy Action RPG For Android
Contains The Following Themes
– Elden Rings Bind Women to be Delivered As Daughters
– Mothers and Daughters Fall into the Hands of Evil Elden Lords
– Elden Lords Enter a Battle to Take Command
– Elden Lords Fight Heroically and Surrender Their Souls
– The War Between Elden Lords and Evil Unexpectedly Ends
– New “The Old World” Revived
– Constant Warfare Begins Between the 2 Elden Lords
– Defeat as the Elden Lords of The New World
– Defeat the Elden Lords, Become the Elden Lord of the New World
– Transformation into the Master of All Runes
– Fight For Yourself With New Runes to Breed in the New World
– Upgrade Fighting Skill in Real Time
– Raids Defeat the Enemies Who Are 1 Match Behind and Receive Rewards
– “Forest Heroine” Types Fight Together to Become Hunting Wasps
– The Super Bombers! Blasts twice and chops enemies into pieces
· A Huge World with a Variety of Exciting Locations and Challenges
· A vast world that includes open fields with a variety of situations and huge dungeons with complex and three-dimensional designs
· In addition to discovering unknown enemies, you can dig into the dungeon up to multiple floors and fight in real time
· An update system that allows you to freely customize your character according to your play style and to develop your character according to your play style, such as a strong warrior or a master of magic
G+
Playdead, the creators of Limbo, have a new game called The Unfinished Swan coming out for PC on October 30th. That means we now have the name of every game
Free Download Elden Ring (April-2022)
A common variable immunodeficiency with abnormal CD21low lymphocytes producing spontaneous IgM and IgG.
A 16-year-old boy with recurrent pneumonia, bronchiectasis and agammaglobulinemia was found to have anergy to pneumococcal antigens and to spontaneously produce immunoglobulins. In addition, he had a reduction in the number of T lymphocytes capable of expressing CD21. In this patient we observed the presence of a reduction in B lymphocytes expressing CD21 and IgM low B cells. In vitro stimulation of lymphocytes from this patient and normal donors with a panel of mitogens induced a blastic transformation, which was partially blocked by anti-CD21 antibodies. We suggest that it might be possible to identify patients with a common variable immunodeficiency (CVID) in which CD21 and CD19 are deficient. A defect of T cell function may be present in addition to the B cell defect in this group of patients.
// AUTO-GENERATED CODE. DO NOT EDIT.
package logging_test
import (
	"cloud.google.com/go/logging/apiv2"
	"golang.org/x/net/context"
	clouddapi "google.golang.org/api/cloudresourcemanager/v1alpha2"
)

func ExampleCloudLoggingService_ListLogs() {
	ctx := context.Background()
	c, err := clouddapi.NewCloudLoggingServiceClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	// List all
How To Crack Elden Ring:
How to Update Elden Ring:
- Rename the last delme to serial
- Save a delme file and then open it with cheackengine.
- (If you receive an error, this is OK, so please just ignore it)
- Delete that serial
- Delete all the content (dummy,sound,exe) of the crack folder
- Restart the game and launch it
How to Fix Crashing in Elden Ring:
- Delete the content of the crack folder (the crack,the delme and the sound)
- Delete the remaining magic
- Restart the game and launch it from the menu
How to Transfer To Another Computer:
- Install the game (Use the serial, not the delme)
- Delete all the content of the crack folder (the crack,the delme and the sound)
- Delete the remaining magic
- Restart the game and launch it
How to use File Aids:
- Delete the content of the crack folder (the crack,the delme and the sound)
- Delete all the content of the crack folder (the crack,the delme and the sound)
- Delete all the content of the crack folder (the crack,the delme and the sound)
- Delete all the
System Requirements:
Multilingual, allowing German and English language.
At least 10 GB available disk space.
If you enjoy the game, please rate it on Steam. It helps everyone to make the game more visible.
Game Store Support:
Steam:
GOG:
Itch.io: | https://haitiliberte.com/advert/elden-ring-activation-skidrow-dlcwith-registration-code-2022-latest/ | CC-MAIN-2022-33 | refinedweb | 2,302 | 59.23 |
Re: wicket menu
Hi, The only difference I see between their page and TestMenu page is <!--[if lte IE 7]> <style type="text/css"> html .jquerycssmenu{height: 1%;} /*Holly Hack for IE7 and below*/ </style> <![endif]--> So, I just added <head> <wicket:head> <!--[if lte IE 7]> <style type="text/css"> html .jquerycssmenu{height: 1%;}
Form Fields Not updating
Hi All, I created a MasterDetailView, with a MasterView and a DetailView. In code: add(new MasterDetailView(markupId, new MembersTable(), new MemberForm()); MembersTable is a subclass of MasterView and MemberForm of DetailView. The detailview uses PropertyModels, new PropertyModel(model,
Re: + key as alternative for tab key
In every field information goes to the server to validate and then go back? Seems dificult to implement.please, tell us your experience when your are done NM On Mon, Sep 21, 2009 at 3:28 PM, Pedro Santos pedros...@gmail.com wrote: Good question, pretty much javascript work. I'm curios know
Re: wicket menu
Yes, I did try the holly hack with a wicket:head and tried inserting the html .jquerycssmenu{height: 1%;} directly into the css. Neither made any difference. I also noticed that they were using an older version of jquery 1.2.6 although I am not sure that would matter. On Tue, Sep 22, 2009 at
Re: wicket menu
Then, I don't know what's the problem As, said I can't see this happening on my IE6... Another thing we could try is making a static page using jquery-1.3.2.min.js and see if this is the issue... so that, we are sure it is not Wicket related. Best, Ernesto On Tue, Sep 22, 2009 at 2:49 PM, T
Re: Is it the best way to code a Link depending on a condition
Jeremy:you say 2 - don't call setEnabled() - override isEnabled why is better override isEnable then setEnable? thanks NM On Mon, Sep 21, 2009 at 9:44 AM, cmoulliard cmoulli...@gmail.com wrote: Joseph, Can you explain a little bit what you mean by provide it with attribute (IModelString)
Re: wicket menu
Found it. Oddly enough, it is the jquery version. I removed the ResourceReference to JQUERY in MenuBar.java and stuck the script directly into the head tag of TestMenu.html : script type=text/javascript src=;/script I also removed
Re: wicket menu
I downloaded jquery-1.2.6.min.js and replaced the ResourceReference with that. Set everything else back to original and now it is working. On Tue, Sep 22, 2009 at 9:25 AM, T Ames tamesw...@gmail.com wrote: Found it. Oddly enough, it is the jquery version. I removed the ResourceReference
Re: wicket menu
Maybe a generic solution is to include both jquery versions, sniff the browser version and if it is IE6 then serve jquery-1.2.6.min.js... Or try to find out why new jquery does not work. Maybe this has been reported on the original site? Best, Ernesto On Tue, Sep 22, 2009 at 3:25 PM, T Ames
ClassCastException in ListMultipleChoice
Hello, I have a ListMultipleChoice component in my form and I get the following exception when I select an option and submit the form: Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Collection at
Multiple choice checkboxes
Is there a way to have multiple choice checkboxes? From what I read on the forums, I could only have a ListMultipleChoice. But I need to have multiple choice checkboxes. Any examples on how I could achieve that? Thanks.
Form Processing problem on pages with Border - Wicket 1.4.1
I am migrating a project from 1.3.6 to 1.4.1. I've run into a problem and I'm not sure if this is a bug or not. Most of the pages have a border and the border has a DropDownChoice in it (as well as other components). Technically it's in a panel in the border. The Form objects are typically
Re: Form Processing problem on pages with Border - Wicket 1.4.1
The internalUpdateFormComponentModels method in 1.3.6 didn't look for the border and visit it's components. It's not obvious to me why this needs to be done now in 1.4. The objective is to notify form's event to generic containers... When the form is submitted, the DropDownChoice's value is
Re: Multiple choice checkboxes
take a look at checkGroup and check, looks like that is what you are after. please don't do duplicate posts on the list, that would ony help in annoying the, happy to help, regular repliers on this list -dipu On Tue, Sep 22, 2009 at 2:37 PM, Sadhna Ahuja sadhna.ah...@nisc.coop wrote: Is there
Re: Form Processing problem on pages with Border - Wicket 1.4.1
I mean override the updateModel :P On Tue, Sep 22, 2009 at 11:03 AM, Pedro Santos pedros...@gmail.com wrote: The internalUpdateFormComponentModels method in 1.3.6 didn't look for the border and visit it's components. It's not obvious to me why this needs to be done now in 1.4. The
Re: ClassCastException in ListMultipleChoice
your model object should be a collection instead of a string because this component allows selection of multiple values. -igor On Tue, Sep 22, 2009 at 6:37 AM, Sadhna Ahuja sadhna.ah...@nisc.coop wrote: Hello, I have a ListMultipleChoice component in my form and I get the following
How add a further authentication confirm
Dear users, I'm using wicket 1.3.6 version. I need to add a further authentication mechanism to some Link or Form I need to be able to force authentication every time on certain links and buttons when they are pressed. This behavior must be configurable dinamically, I would like to be able to
[Migrating to Wicket 1.4.1] Problem submitting FormTester
Hi, I've been having problems migrating Wicket 1.4-RC4 to 1.4.1. Some of our wicket-tests are failing and producing errors due to a form submission issue. For example: .java: public class Prueba extends PageTemplate { public Prueba() { add(new FormVoid(form).add(new Button(submit,
parameters encryption
Is there any setting in Application or any base class through which parameters can be encrypted and decrypted behind the scenes , so that I wont do this for every page ?
Re: parameters encryption
org.apache.wicket.protocol.http.request.CryptedUrlWebRequestCodingStrategy -igor On Tue, Sep 22, 2009 at 9:43 AM, tubin gen fachh...@gmail.com wrote: Is there any setting in Application or any base class through which parameters can be encrypted and decrypted behind the scenes , so that I
Re: + key as alternative for tab key
It's very easy to implement with Wicket - just add ajax form submitting behavior to desired fields, and in the behavior, repaint the feedback form, etc... Although, it can be server intensive obviously. -- Jeremy Thomerson On Tue, Sep 22, 2009 at 7:25 AM, Nicolas
508 ajaxdefaultdatatablepanel
I am using a lot of ajaxdefaultdatatable and our application should be Section 508 compliant; is there a way to make the sortable columns from this panel 508 compliant? Here is the code I use to create the columns: new PropertyColumn<WFStgAuditEntity>(new Model<String>("Audit Number"), "stgAuditGeneral.auditNumber",
form with inheritance
I am trying to create a form for a domain object called Shipment. A Shipment contains a Payment object. Payment is an abstract class, with subclasses CashPayment, CheckPayment, etc. What would be the best way to create a form to represent this? I'm assuming I will use some type tabs with panels
Re: Is it the best way to code a Link depending on a condition
It's the same reason you shouldn't use new Label(id object.getSomeText());. See this example: public MyPage () { Person p = getPersonFromSomewhere(); Label l = new Label(numberOfPhoneNumbers, p.getPhoneNumbers().size()); l.setVisible(p.getPhoneNumbers().size() 0); new
Re: form with inheritance -igor On Tue, Sep 22, 2009 at 12:16 PM, Sam Barrow s...@sambarrow.com wrote: I am trying to create a form for a domain object called Shipment. A Shipment contains a Payment object. Payment is an abstract
Validate refreshingview
Hi, I've a problem where I hope someone can point me in the right direction. I have a page which holds a form, an ajaxsubmitlink, and a panel containing formcomponents. The panel also holds two other panels, where one of the panels holds a refreshingview to which the user can add items, using a
Re: Validate refreshingview
you can use embedded forms, afair onsubmit of an inner form is called before onsubmit of outer. -igor On Tue, Sep 22, 2009 at 1:12 PM, hakan.stei...@foxt.com wrote: Hi, I've a problem where I hope someone can point me in the right direction. I have a page which holds a form, an
Stateless Applications and Wicket
It would be very nice to see better support for stateless applications in wicket. Some topics that come to my mind right now are: - Ajax, - PagingNavigator, - Dropdown onSelectionChange. It works fine if I overide getStatelessHint to return true and there is no enclosing form. If there is a | https://www.mail-archive.com/search?l=users@wicket.apache.org&q=date:20090922 | CC-MAIN-2021-31 | refinedweb | 1,538 | 64.81 |
Insertion sort is an elementary sorting algorithm that sorts one element at a time. Most humans, when sorting a deck of cards, will use a strategy similar to insertion sort. The algorithm takes an element from the list and places it in the correct location in the list. This process is repeated until there are no more unsorted items in the list. The computational complexity of insertion sort is O(n²), making it less efficient than more advanced sorting algorithms, such as quick sort, heap sort, or merge sort, especially for large lists.
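The card-sorting strategy described above can be sketched concretely. This is a generic, self-contained illustration (the class and method names are mine, not taken from any of the excerpts below):

```java
import java.util.Arrays;

public class InsertionSort {

    // Sorts the array in place, one element at a time: each pass takes
    // the next unsorted element and shifts larger sorted elements right
    // until its correct slot is found.
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];   // shift a larger element one slot right
                j--;
            }
            a[j + 1] = key;        // insert into its correct position
        }
    }

    public static void main(String[] args) {
        int[] deck = {7, 2, 9, 4, 1};
        sort(deck);
        System.out.println(Arrays.toString(deck));  // [1, 2, 4, 7, 9]
    }
}
```

The nested shifting loop is what makes the worst case O(n²): an input in reverse order forces every element to be shifted past all of the already-sorted ones.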
C++
////////////
this is Exercise 16 code
#include <iostream>
using namespace std;
class HeartRates
{
private:
string firstName, lastName;
int day, month, year, age;
public:
HeartRates(string fn, string ln, int d, int m, int y){
firstName = fn;
lastName = ln;
day = d;
month = m;
year = y;
}
void setFirstName(string fn){
firstName = fn;
}
void setLastName(string ln){
lastName = ln;
}
void setDay(int d){
day = d;
}
void setMonth(int m){
month = m;
}
void setYear(int y){
year = y;
}
string getFirstName(){
return firstName;
}
string getLastName(){
return lastName;
}
int getDay(){
return day;
}
int getMonth(){
return month;
}
int getYear(){
return year;
}
int getAge(){
int cd, cm, cy;
cout << "Enter current day, month, year: ";
cin >> cd >> cm >> cy;
// rough age in years: cd and cm are read but unused, so this ignores
// whether the birthday has occurred yet in the current year
age = cy - year;
return age;
}
int getMaximumHeartRate(){
return 220 - age;
}
void getTargetHeartRate(){
int max = getMaximumHeartRate();
cout << 0.5 * max << " - " << 0.85 * max;
}
};
int main(){
string fn, ln;
int cd, cm, cy;
cout << "Enter first name: ";
cin >> fn;
cout << "Enter last name: ";
cin >> ln;
cout << "Enter birth day, month, year: ";
cin >> cd >> cm >> cy;
HeartRates obj(fn, ln, cd, cm, cy);
int a = obj.getAge();
cout << "Details entered:\n";
cout << "First Name: " << obj.getFirstName() << "\n";
cout << "Last Name: " << obj.getLastName() << "\n";
cout << "Date of Birth: " << obj.getDay() << "/" << obj.getMonth() << "/" << obj.getYear() << "\n";
cout << "Age: " << a << " years\n";
cout << "Maximum heart rate: " << obj.getMaximumHeartRate() << "\n";
cout << "Target heart range: ";
obj.getTargetHeartRate();
cout << "\n\n";
}
Realistically, users have no control over software at the code level, but there are steps a user can take to avoid the security problems resulting from software defects. What are the steps consumers can take to mitigate security vulnerabilities that exist because of programming flaws? Your response should be at least 150 words in length.
According to the Unit VIII Lesson in the Study Guide, the Software Assurance Forum for Excellence in Code produced a paper describing the fundamental practices for software development. Identify and briefly describe at least three of the principles. Your response should be at least 150 words in length.
There are four programming paradigms discussed in the textbook. Please identify all four and give a brief description of each. Your response should be at least 150 words in length.
What kinds of databases track relationships? Give a brief description and the advantages of at least three. Your response should be at least 150 words in length.
How does a query language like SQL work? Give brief descriptions of commands, parameters, and record searches. Your response should be at least 150 words in length. | http://www.chegg.com/homework-help/definitions/insertion-sort-3 | CC-MAIN-2016-07 | refinedweb | 529 | 70.63 |
04 August 2011 18:05 [Source: ICIS news]
HOUSTON (ICIS)--Adverse currency movements on the back of the strong Swiss franc weighed on Huntsman's second-quarter results, company executives said on Thursday.
Huntsman's chief financial officer Kimo Esplin said investors seeking a safe haven bought Swiss francs, thus driving up the value of that currency by 20% vis-a-vis the US dollar and by 10% vis-a-vis the euro in the past year.
Huntsman sells most of its Swiss-based production in euros. About 10% of Huntsman’s overall fixed costs are in Swiss francs.
The stronger Swiss franc, in turn, cost Huntsman a total of $17m (€12m) in lost second-quarter earnings before interest, tax, depreciation and amortisation (EBITDA) in its advanced materials and textiles effects business, compared with the same quarter a year ago, Esplin told analysts during the company’s second-quarter results conference call.
“We are refining our plans to address this issue,” he added.
Meanwhile, CEO Peter Huntsman ruled out an analyst’s suggestion that the company could sell the advanced materials and textiles effects businesses to a private equity firm or another chemical producer.
Rather, Huntsman expects both businesses to improve over the next quarters, he said.
“I believe those businesses are as profitable in our hands as they would be in anybody else’s hand,” he added.
Second-quarter adjusted EBITDA in Huntsman’s advanced materials segment fell 39% year on year to $31m.
In textile effects, Huntsman recorded a second-quarter adjusted EBITDA loss of $7m, compared with a profit of $8m in the same period a year ago.
Advanced materials and textile effects accounted for about 20% of Huntsman’s overall second-quarter sales of $2.93bn.
Huntsman stock was down 21% at $14.19 on Thursday morning.
($1 = €0.70) | http://www.icis.com/Articles/2011/08/04/9482693/huntsman-facing-downside-from-strong-swiss-franc-execs.html | CC-MAIN-2015-22 | refinedweb | 291 | 60.55 |
Issue: Custom Tool – getGlyph() returns None
- RafaŁ Buchner last edited by gferreira
I'm creating some awesome Custom Tool.
I'm trying to use the self.getGlyph() method inside the overridden becomeActive method. When I'm using getGlyph() in there, it returns None instead of a fontParts object.
Is there maybe another way to do it? or is it a bug? I really need this glyph object when the tool becomes active.
hello @RafaŁ-Buchner,
I don’t really know what BaseEventTool.getGlyph() does… (so not sure if it’s a bug). Why not simply use CurrentGlyph() instead?
here’s an example script based on simpleTool.py:
from mojo.events import BaseEventTool, installTool, uninstallTool
from mojo.drawingTools import *

class MyTool(BaseEventTool):

    def setup(self):
        self.position = None

    def mouseDown(self, point, clickCount):
        self.position = point

    def mouseDragged(self, point, delta):
        self.position = point

    def mouseUp(self, point):
        self.position = None

    def becomeActive(self):
        print('custom tool is active')
        print(self.getGlyph())
        print(CurrentGlyph())

    def becomeInactive(self):
        print('custom tool is inactive')

    def draw(self, scale):
        if self.position is not None:
            size = 10
            x = self.position.x - size
            y = self.position.y - size
            fill(None)
            stroke(1, 0, 0)
            oval(x, y, size * 2, size * 2)

    def getToolbarTip(self):
        return "My Tool Tip"

installTool(MyTool())
ps. BaseEventTool.getGlyph() returns None only the first time you activate the tool in a new glyph window; the second time, it does return the current glyph:
custom tool is active
None
<RGlyph 'i' ('foreground') at 4691232360>
custom tool is inactive
custom tool is active
<RGlyph 'i' ('foreground') at 4485911888>
<RGlyph 'i' ('foreground') at 4691234264>
- RafaŁ Buchner last edited by
Great! Works! thanks a lot
(I thought that since getGlyph() is BaseEventTool method, it will be more 'proper' to use it instead of CurrentGlyph() ) | https://forum.robofont.com/topic/509/issue-custom-tool-getglyph-returns-none | CC-MAIN-2020-40 | refinedweb | 295 | 52.05 |
Hello friends, I have written a program which opens up a socket on port 514. This is to accept and save logs from my wireless WAP54G router. I would like to eventually be able to use a buffered reader so I can stream the input into log files locally, but I can't seem to establish a connection. The connection eventually times out, and the program exits. Here is what I have:
import java.io.*;
import java.net.*;

public class KnockServer {

    public static void main(String args[]) {
        KnockServer server = new KnockServer();
        server.mainLoop();
    }

    public void mainLoop() {
        try {
            ServerSocket server = new ServerSocket(514);
            server.setSoTimeout(100000);
            System.out.println("Success!");
            while (true) {
                Socket connection = server.accept();
                BufferedInputStream in = new BufferedInputStream(connection.getInputStream());
                InputStreamReader inputReader = new InputStreamReader(in);
                StringBuffer process = new StringBuffer();
                connection.close();
                server.close();
            }
        } catch (IOException e) {
            System.out.println("Couldn't!");
            System.err.println(e.getMessage());
        } catch (SecurityException e) {
            System.err.println(e.getMessage());
        }
    }
}
Things to note: I am running Eclipse on Ubuntu. I am running the program with super user privileges. I have another program that somebody else has written in C, which opens up port 514 that does establish a connection, and I am able to see what the router is sending.
If anybody notices anything wrong with what I have written and could please give me a direction I would appreciate it. Also, if anybody else has any suggestions of what else it could be related to, that would be helpful as well. | http://www.javaprogrammingforums.com/java-networking/8275-creating-serversocket-trying-save-logs-wireless-linksys-router.html | CC-MAIN-2013-20 | refinedweb | 247 | 50.02 |
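One likely culprit, offered as a hedged guess rather than a confirmed diagnosis: BSD syslog on port 514 is normally carried over UDP, not TCP, so a TCP ServerSocket.accept() would never see the router's log packets and would simply time out; a working C program on the same port was probably reading datagrams. (Separately, in the posted code server.close() sits inside the while (true) loop, so even a successful accept would be followed by an accept on a closed socket.) Below is a minimal UDP receiver sketch; the class name, buffer size, and the parsePriority helper are my own inventions for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class SyslogReceiver {

    // Extract the syslog priority value from a message like "<134>...".
    // Returns -1 if the message has no valid <PRI> prefix (1-3 digits).
    static int parsePriority(String msg) {
        if (msg == null || !msg.startsWith("<")) return -1;
        int close = msg.indexOf('>');
        if (close <= 1 || close > 4) return -1;
        try {
            return Integer.parseInt(msg.substring(1, close));
        } catch (NumberFormatException e) {
            return -1;
        }
    }

    public static void main(String[] args) throws Exception {
        // Bind a UDP socket on the syslog port (ports below 1024 need root).
        try (DatagramSocket socket = new DatagramSocket(514)) {
            byte[] buf = new byte[2048];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);   // blocks until a datagram arrives
                String msg = new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8);
                System.out.println("pri=" + parsePriority(msg) + " msg=" + msg);
            }
        }
    }
}
```

From here, wrapping the println in a BufferedWriter to a local log file would give the streaming-to-disk behavior the post describes.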
Simple Presenters
Courtenay : August 23rd, 2007
The Presenter pattern, as my limited monkey-brain can see it, is a way of encapsulating a bunch of logic and display-related code for a database record.
If you want to be truly confused, go check out what the venerable Martin Fowler (who once ignored me in an elevator) has to say about it: Supervising Presenter and Passive View. As usual with Java people, it's horribly complex.
In Rails, this is the way our requests currently work:
The request comes in, hits the controller, we load up some data from the model, it gets pushed to the view, and then we use a combination of helpers and lots of conditional and other stuff that looks like PHP to malleate it until it looks good.
Unfortunately for my simple chimpanzee neurons, this all feels wrong. Here's some sample 'wrong' code I just paraphrased from a live app:
<% if @cart_item.variation.nil? or @cart_item.variation.product.deleted? or ... %> <img src="/images/active.png" /> This cart is no longer active. <% end %>
OK, this one is easy to refactor. You just add a method to the CartItem model, like
def is_active?
  variation.nil? or variation.product.deleted? or ...
end
Or, you could write a helper method:
module CartItemHelper
  def show_active(cart_item)
    image_tag("active") + " This cart is no longer active" if cart_item.is_active?
  end
end
Ugh. Capital Ugh. First off, the is_active? method is nice enough, but it's all view logic! This method is being used to control the logic in the presentation layer. What is it doing in the model? I think of the model as entirely database related.
This is where the "presenter" comes in. If you've used the Liquid template/layout system, you'll be familiar with Drops. Basically, a presenter contains any ruby code related to displaying fields or logic. I'll let the code do the talking:
class CartItemPresenter < Caboose::Presenter
  def name
    h(@source.product_name)
  end

  def product_link
    if @source.variation.nil?
      name
    else
      link_to name, fashion_product_path(@source.product)
    end
  end

  def is_active?
    @source.variation && @source.variation.product
  end

  def inactive_button
    return if is_active?
    image_tag('active') + " This product is not active."
  end
end
Pretty straightforward. In the controller, you finish with
@cart_item = CartItemPresenter.new(@cart_item, self)
Here's what Presenter looks like.
class Presenter
  include ActionView::Helpers::TagHelper  # image_tag
  include ActionView::Helpers::UrlHelper  # link_to, url_for
  include ActionController::UrlWriter     # named routes

  attr_accessor :controller, :source  # so we can be lazy

  def initialize(source, controller)
    @source = source
    @controller = controller
  end

  def html_escape(s)
    # I couldn't figure a better way to do this
    s.to_s.gsub(/&/, "&amp;").gsub(/\"/, "&quot;").gsub(/>/, "&gt;").gsub(/</, "&lt;")
  end
  alias :h :html_escape
end
So, your view no longer accesses the database directly; everything goes through the presenter. Your views now will contain very little logic; in fact, they may start to feel a little more like ASP.Net.
Here's how the new stack looks:
And the view:
<%= @cart_item.product_link %>
<%= @cart_item.inactive_button %>
What do you think? Is this layer of abstraction necessary, or do you prefer keeping things together in the model (database!) and presenter (view!)?
P.s. I often write "todo" titles for articles and save them as drafts, so I can come back later. Seems like one slipped out.
23 Responses to “Simple Presenters”
August 23rd, 2007 at 03:13 AM
This gives me warm fuzzies for refactor potential.
August 23rd, 2007 at 04:12 AM
Any suggestions about organizing the presenters (and their tests) in standard Rails project directory tree? Are there any tools that support this?
August 23rd, 2007 at 04:14 AM
As long as we're talking MVC, is_active?() should be in model imho. That's what models are there for, to add meaning to raw data and provide a better domain specific interface. I can't see anything about is_active?() that would make it a pure view related method. It could very well be used in other contexts as well.
August 23rd, 2007 at 04:40 AM
So you're adding more complexity to the system, while making your view even harder to figure out? ;-)
If you hand over your view to someone else, will it be immediately apparent to them that the link and inactive_button actually have their own logic, that they are in fact mutually exclusive, and only one will be rendered? How much investigation would it require to figure out what's actually going on?
Complexity like that is one of the features I really loathe about ASP.NET and the likes of it. IMHO, when you're reading code, any code, it should be immediately apparent what it does.
I agree with Pratik that is_active belongs in the model. Consider the fact that something other than the view could need to know whether or not a model object can be considered active.
But, I do see potential for Presenter to make it a lot easier to write tests against your view code. As with all design patterns, it takes a while to get your head around it, and some experience to know when to use it, and when not to use it.
It's certainly something to keep an eye on :-)
August 23rd, 2007 at 05:10 AM
Presenters really shine imho when you're dealing with multi model stuff, where you'd have these hairy controllers and models that might not tie up well together as one common domain (think micro domains, or whatever you'd wanna call them).

But I kinda abuse the MVC pattern and presenter term, since my "presenters" usually are more about mini-domain things, e.g. a single goal/problem I want to wrap up in an object, for instance a complex account registration that's easier to test without going through a (rails) functional test, and that, as an object, represents what it's actually doing as a whole. Just don't go overboard with it for small things (yagni)
Check out Jay Fields' blog for his presenter posts, good stuff there
August 23rd, 2007 at 09:13 AM
What's wrong with helpers?
August 23rd, 2007 at 09:23 AM
I've been reading various things about presenters over the last year and I still don't get it.
It feels like added complexity and confusion with extremely little benefit.
August 23rd, 2007 at 09:25 AM
I think this way of looking at things is actually very useful. The Ruby Reports system, ruport, has some presentation logic that is very similar to this -- allowing you to filter presentation before outputting to HTML, PDF, CSV, or whatever your needs are.
I can't count how many times I've written field beautification methods in my model primarily for view (or formatting for an email) purposes and felt gross while doing it. Any attempts to factor it out challenged understandability and readability by being too clever.
But this presentation logic seems like a rightful step in evolving view logic, that could complement whatever end result view you have, just like Ruport.
The end result of what I've found with any re-imagining of views from the traditional HTML and erb perspective is that I end up trying it but not liking it, and falling back to the convention, knowing that someone who follows me will also grok it.
Eye open on this though.
August 23rd, 2007 at 09:44 AM
Thanks for the article. The presenter pattern is useful to know, but I don't think this example qualifies for it. The is_active? method should go in the model, it is acting on the model's data and doesn't have any view logic IMO. The other methods in the given presenter look like they should either go in the helper or back in the view. No sense in making things more complicated than they need to be.
August 23rd, 2007 at 10:55 AM
Ok, so the presenter pattern is cool, but how is it different from what I can do in vanilla view helpers?
August 23rd, 2007 at 12:29 PM
I've used this basic pattern in the past, however I felt that the presenter should be called a little more explicitly in the view:
The present helper method would instantiate the presenter object on the model object. Also, because this is really view logic, I feel it's best to keep it away from the controller. You don't want to load up your HTML presenter when your output format is CSV, for example.
August 23rd, 2007 at 04:17 PM
I agree with johan-s. Presenters become really useful when dealing with multiple models at the same time by giving the presenter a save method that handles all of the associations etc. before the save.
August 23rd, 2007 at 04:22 PM
I was just thinking about the proper placement of business logic in helpers earlier this week.
I had a similar problem where I put the logic into the helper, but I thought it was odd to have HTML and business logic both in the helper. So, I was considering putting it in the model, but hesitated for similar reasons.
I'm thinking now of just having two helpers--a helper for UI and a helper for view logic--which is sort of what the Presenter does.
I'll be interested to hear about what you decide to do.
August 23rd, 2007 at 08:41 PM
I worked on a Java project where the prior team essentially built a bunch of Presenters to wrap Java objects. Major problems 2 years out:
as Morgan intuited, you add a ton of complexity for minimal gain. When something breaks, you have to go traipsing all over the place to fix it. (BTW the natural feature creep here is that there will be a new hierarchy of Presenters.) One thing I really like about Rails is that in general I can understand a page by looking at its controller and its helper and then I'm done.
the Presenter pattern isn't so hot if you're working with semi-technical designers (core expertise in HTML/CSS/Photoshop, but not afraid of PHP-like languages like Erb). In our project, design decisions that I believe are endemic to this pattern (or anti-pattern) actually made the view more brittle. See, front-end designers are NEVER going to modify the Presenter code, where they would have no problem writing simple loops etc. in Erb.
Also, look at the lovely diagrams in the post. You've basically swapped the Presenter for the Helper (missing from the latter diagram). So you create these problems and end up with the same number of stages in the pipeline. Not to mention, your Rails codebase is now a bastard that needs explaining, unlike standards-compliant code. Finally, you lose the ability to piggyback on future rails upgrades of e.g. Helpers and ActionView.
Maybe they need this in Java, but IMHO Rails got it right already.
August 24th, 2007 at 01:41 AM
I remember Bertrand Meyer saying to me once (and he probably said it in his books as well): "if you need to use an if, you need to use one".
Abstraction is there to ease understanding. You stick in a new layer only when you get a genuine structural simplification. Just hiding an if statement isn't enough.
Layers are just such an abstraction. You don't get struck down by lightning because your layers aren't 'pure'.
You only get struck down if somebody else can't understand your code. :-)
August 24th, 2007 at 07:25 AM
In your controller, you will need to check to see if the item is_active? to protect against rogue data being submitted, and this is properly done in the model.
August 24th, 2007 at 11:45 AM
"Not to mention, your Rails codebase is now a bastard that needs explaining, unlike standards-compliant code. "
To be fair, the core team have expressed the possibility of including presenters as part of the framework in the future. Good ideas should not be held back only because it hasn't officially been implemented.
August 26th, 2007 at 01:19 AM
Is the 'is_active?' entirely view logic? Wouldn't the business want to know what cart item was actually active?
The abstraction seems to be tightly coupled with HTML (maybe that's the point of the presenter). But if you have an XML view, would you have to duplicate the logic to convey the same meaning to the user? It seems like the logic that captures whether the variation is nil or the product of a variation was deleted would best reside in the is_active method of the cart item.
Also, the abstraction is less intention revealing: product_link may or may not return a link, but the method name says it'll return a link.
August 26th, 2007 at 04:47 AM
It would make everything more testable, but in an everyday environment HTML/CSS guys would be afraid of it; helpers work very well.
August 26th, 2007 at 09:26 PM
I'm big on the presenter pattern, specifically for testability. I think the key to making the presenter pattern successful in Rails is to never call any standard Rails helpers (or markaby, etc.) in a presenter; only call custom helpers. An example is the following:
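The code sample for this comment did not survive extraction. A self-contained sketch of the idea (every name here is hypothetical, modeled on the test line quoted further down) might look like:

```ruby
# Hypothetical sketch: the presenter-style method dispatches only to
# *custom* helpers, never to Rails' link_to directly, so a test can
# compare method results instead of matching generated HTML with regexps.
module FriendshipHelper
  # Custom helpers; in a real app these would wrap link_to.
  def create_friendship_link(friend)
    %(<a href="/friendships?friend=#{friend}">Add friend</a>)
  end

  def destroy_friendship_link(friend)
    %(<a href="/friendships/#{friend}">Remove friend</a>)
  end

  # The method under test never touches link_to itself.
  def create_or_destroy_friendship(friend, already_friends = false)
    already_friends ? destroy_friendship_link(friend) : create_friendship_link(friend)
  end
end
```

Because the method only calls custom helpers, a spec can compare its return value against those helpers directly.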
Notice that I'm not calling link_to directly. I can then test this code by:

create_or_destroy_friendship(a_true_friend).should == destroy_friendship_link(a_true_friend)
The alternative way of testing this often involves regexps, yuck!
This is the "presenter" pattern implemented without an actual class. The class comes when things get more complex.
September 16th, 2007 at 08:58 PM
On Grabb.it we use the presenter pattern to shield our designer-selves from the possibility of triggering horrible queries in the views. Each track has depths and depths of metadata, and rebuilding its presentation can take too long for a request. So we standardized on a JSON layer that is used both in our views and in our Javascript application. Maybe someday we'll be storing it in CouchDB but for now we're just happy to have a clean caching solution. It's nice to have sweep-rebuild caches, but nicer still if you can modify them in place and avoid the whole rebuild cycle.
Another upshot of keeping the data model for the views in a JSON-style hash of strings is that we can easily reuse our templates for both client and server side view generation. This means google and noscript users see the same interface normal browsers see, and I do only a little work to translate the Javascript templates to erb.
September 17th, 2007 at 10:09 AM
I prefer the approach posted by Jay Fields about the use of the presenter pattern. The main idea he proposes is to merge more than one model into the presenter and use it in the view as a single piece for the sake of clarity and testing reliability, rather than simply substituting model methods or helpers. Methods like is_active? are clearly part of the model.
September 19th, 2007 at 01:27 PM
One thing no one has mentioned is that presenters are often used so that the views don't modify the objects they're displaying. In this sense, Presenters give you a read-only state bag which the view merely displays in some representation.
One thing you don't want to see in your views is @cart_item.save, for instance. Presenters prevent this nonsense by exposing immutable objects.
Other things like sorting, mapping, and reducing go on behind the covers here as well. How many times have we seen stuff like this in the view:
<% @tags.collect {|t| t.name}.sort.each do |tag| %>
  <%= content_tag('h3', tag) %>
<% end %>
Developing with Performance in Mind
While testing is important, building your next project while keeping performance in mind makes subsequent testing that much more efficient. When developers are faced with building a solution, they focus on "making it work." What they sometimes forget is the importance of application performance. Whether it's web, desktop, or mobile device-based programming, your users want an application that responds quickly. They don't want a desktop application that hangs or a web application that takes too long to load. When you approach a development project, it's just as important to focus on performance in addition to creating a bug-free application.
Performance and Revenue
In development, businesses talk in terms of revenue. Prioritization of projects, application goals, and customer engagement are all tied with revenue. Performance metrics also tie into overall revenue for a business. Just a four-second delay in load times results in a 25% increased bounce rate on a website. This means that if the business is making, for instance, $2 million a year in revenue with 10,000 customers, poor performance could be costing them $500,000 each year.
Desktop development may not rely on web page load times, but a hanging, buggy application results in uninstalls. Installation and downloads are the two metrics that mobile device and desktop application developers watch. When we say a "hanging" app, it's one that interferes with desktop performance and overwhelms CPU and memory resources. For a user, this means that other applications and the computer or mobile device itself runs slowly. The user can identify the culprit on their machine and eventually remove it. For a business, it means the loss of a customer and revenue.
Coding for Performance
One of the traditional statements in coding is that you only need to learn one language (preferably a C-style language), and you know them all. Learning syntax for different languages is easy. Understanding best practices for engineering and building entire platforms on any language is the difficult part.
Even if you don't know a specific language, most of them work with the same coding structures — loops, variables, functions, pointers, data types, and arrays are all examples of common structures in any language. Some languages handle memory resources and storage differently, but most of the basic performance-enhancing techniques in one language will carry over to another. For this reason, the languages used in the following examples can be applied to your own coding language of choice.
Stringbuilder vs. String for Concatenation
The StringBuilder class and object type is available in C# and Java. It's common for a developer to work with the basic string data type and build a string for output based on concatenation procedures. StringBuilder is built for better string concatenation performance and should be used in place of traditional strings when you want to manipulate values.
C#
public class BadStringConcatenation
{
    static void Main()
    {
        string myString = "";
        for (int i = 0; i < 100000; i++)
        {
            myString += " Hello ";
        }
        Console.WriteLine("String value: {0}", myString);
    }
}
Above is an example of using the string data type for concatenation. The performance-killing issue with this code is that the compiler doesn't just keep adding "Hello" to the myString variable. Instead, it takes a copy of the string, recreates the new string, and then adds the new value to myString again. The compiler has to create an entirely new string with each iteration. Since we have a 100,000-iteration loop, this function isn't optimal.
Instead, you should use StringBuilder.
public class GoodStringConcatenation
{
    static void Main()
    {
        StringBuilder myString = new StringBuilder();
        for (int i = 0; i < 100000; i++)
        {
            myString.Append(" Hello ");
        }
        Console.WriteLine("String value: {0}", myString.ToString());
    }
}
With StringBuilder, you no longer take copies of large string variables during each iteration. Instead, you only copy the appended string and add it to the existing one. Application speed increases, and your users aren't waiting several seconds for calculations.
Use Primitive Data Types Instead of Objects
The previous example made a case for using the StringBuilder object, but with most procedures you want to stick with primitive data types. We'll use the Java language to illustrate why. Java has eight defined primitive types: int, double, boolean, byte, short, long, char, and float. When you can use these primitive data types, use them instead of their object data type counterparts.

The following code has two statements. The first one uses the primitive Java data type int. The second one uses the object Integer.
Java
int myPrimitiveInt = 100;
Integer myObjectNumber = new Integer(100);
Both of these statements create an integer variable that contains the value 100. The first statement is preferred due to performance. The performance cost in the second statement stems from the way the compiler stores both of these values. Primitive data types are stored in a section of memory called the stack. Objects such as the Integer object in the second statement live in a second section of memory called the heap; the stack holds only a reference to them.
[Figure: stack vs. heap memory layout (source: Orange Coast College)]
When the compiler needs to retrieve the primitive int variable value, it grabs it directly from the stack. However, when the compiler retrieves the second Integer object value, it grabs a reference to the heap. It then needs to go to the memory instance where the object is allocated and perform a lookup for the value. This might seem like a subtle difference, but it plays a role in performance. When you are doing low-level programming such as gaming software, these small changes make a difference in the user's experience and your application's speed.
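That stack-versus-heap lookup, together with the allocations that boxing causes, can be made visible with an informal micro-benchmark. The sketch below is illustrative (the class and method names are invented, and the printed timings are only indicative; a harness such as JMH is the right tool for real measurements):

```java
public class BoxingCost {
    // Accumulates with a primitive long: the value stays on the stack.
    static long sumPrimitive(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Accumulates with java.lang.Long: each += unboxes the current value,
    // adds, and allocates a fresh Long object on the heap.
    static Long sumBoxed(int n) {
        Long sum = 0L;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 5_000_000;
        long t0 = System.nanoTime();
        long a = sumPrimitive(n);
        long t1 = System.nanoTime();
        Long b = sumBoxed(n);
        long t2 = System.nanoTime();
        System.out.println("primitive sum " + a + " took " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("boxed sum     " + b + " took " + (t2 - t1) / 1_000_000 + " ms");
        // The boxed version is typically several times slower.
    }
}
```

Both methods return the same value; only the representation differs, which is exactly why the boxed cost is easy to overlook.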
Precision Costs You Performance and Memory
At a high level, you might think rounding at the 10th decimal point is not a big deal, but some applications require precision, particularly financial, engineering, or science applications. A fraction of a milligram can affect someone's health. Rounding to the wrong number could affect the structural integrity of a building. Precision and rounding are critical to these types of applications. However, precision comes at a price: performance.
A float or double data type should be used when precision isn't important. A float is only 32 bits, and a double is 64 bits. Float is notorious for inaccurate results, so only use floats when you are exactly sure of what type of precision you need. This leaves double as the default for most programmers, but always consider memory usage before you determine which one to use.
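The difference is easy to demonstrate. The sketch below (an illustrative example; the class and method names are invented) adds 0.1 one thousand times: the float drifts visibly away from 100, the double is off by only a tiny amount, and BigDecimal, the usual choice when exact decimal results matter (money, dosages), is exact:

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    static float floatSum(int n) {
        float sum = 0f;
        for (int i = 0; i < n; i++) {
            sum += 0.1f;  // 0.1 has no exact binary representation
        }
        return sum;
    }

    static double doubleSum(int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += 0.1;
        }
        return sum;
    }

    static BigDecimal decimalSum(int n) {
        BigDecimal step = new BigDecimal("0.1");  // exact decimal value
        BigDecimal sum = BigDecimal.ZERO;
        for (int i = 0; i < n; i++) {
            sum = sum.add(step);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("float:      " + floatSum(1000));   // visibly off from 100
        System.out.println("double:     " + doubleSum(1000));  // off by a tiny fraction
        System.out.println("BigDecimal: " + decimalSum(1000)); // exactly 100.0
    }
}
```

The trade-off is that BigDecimal arithmetic allocates objects and is far slower than primitive float or double math, which is the performance cost the paragraph above describes.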
Use FOR Instead of FOREACH
Similar to using StringBuilder instead of string for concatenation, using for instead of foreach also has a performance benefit. This usually becomes an issue when you have a generic list and need to iterate over each value.
C#
public class ListIteration
{
    static void Main()
    {
        List<Customer> customers = new List<Customer>();
        foreach (var customer in customers)
        {
            Console.WriteLine("Customer name: {0}", customer.Name);
        }
    }
}
public class ListIteration
{
    static void Main()
    {
        List<Customer> customers = new List<Customer>();
        for (int i = 0; i < customers.Count; i++)
        {
            Console.WriteLine("Customer name: {0}", customers[i].Name);
        }
    }
}
We changed the foreach loop to a for loop, and if you test the two methods you'll see that the for loop executes much faster. With a for loop, only one get_Item method is called, which reduces the time it takes to execute this function.
Wrap Up
Take these examples and try them out yourself. You'll notice a huge difference in execution time. Although the gains might seem small when working with small amounts of data, when you are coding for businesses that rely on heavy data transactions, these optimizations make a real difference in productivity and revenue.
This article is featured in the new DZone Guide to Performance: Testing and Tuning. Get your free copy for more insightful articles, industry statistics, and more!
ATEXIT(3) BSD Programmer's Manual ATEXIT(3)
NAME
     atexit - register a function to be called on exit
SYNOPSIS
     #include <stdlib.h>

     int atexit(void (*function)(void));

DESCRIPTION
     The atexit() function registers the given function to be called at program exit, whether via exit(3) or via return from the program's main(). Functions so registered are called in reverse order; no arguments are passed. At least 32 functions can always be registered, and more are allowed as long as sufficient memory can be allocated.

     atexit() is very difficult to use correctly without creating exit(3)-time races. Unless absolutely necessary, please avoid using it.
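A minimal usage sketch (illustrative only, not part of the original manual page; the handler names are invented):

```c
#include <stdio.h>
#include <stdlib.h>

/* Handlers run in reverse order of registration when the process
 * exits via exit(3) or by returning from main(). */
static void flush_logs(void)  { puts("flushing logs"); }
static void close_files(void) { puts("closing files"); }

/* Returns 0 on success, -1 if a registration failed (e.g. ENOMEM). */
int install_exit_handlers(void)
{
    if (atexit(flush_logs) != 0)
        return -1;
    if (atexit(close_files) != 0)
        return -1;
    /* At exit, close_files() runs first, then flush_logs(). */
    return 0;
}
```

Because the handlers run in reverse order of registration, resources can be released in the opposite order from which they were set up.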
RETURN VALUES
     The atexit() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
ERRORS
     [ENOMEM]  No memory was available to add the function to the list. The existing list of functions is unmodified.
SEE ALSO
     exit(3)
STANDARDS
     The atexit() function conforms to ANSI X3.159-1989 ("ANSI C").
Details
- Type:
Bug
- Status:
Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 1.5.5
- Fix Version/s: 1.5.6, 1.6-beta-1
- Component/s: None
- Labels: None
Description
several methods were added named eachLine and splitEachLine. But on InputStream no encoding is given, thus an InputStreamReader created there uses the default encoding, which might be wrong in many cases. I personally would add the methods with the encoding variant and remove the ones without encoding for InputStream and URL. Not using an encoding makes sense for String, because the encoding is already done then; in fact, using an encoding there could be seen as wrong. But not using an encoding on InputStream is wrong too. It is not portable and will kick you later. This means:
eachLine:
InputStream.eachLine(Closure) --> should be removed
InputStream.eachLine(String encoding, Closure) --> should be added
URL.eachLine(Closure) --> should be removed
URL.eachLine(String encoding, Closure) --> could be added, but I would prefer:
URL.toInputStream() ---> I prefer that, because then eachLine from InputStream can be used
splitEachLine:
InputStream.splitEachLine(String sep, Closure closure) --> should be removed
InputStream.splitEachLine(String encoding, String sep, Closure closure) --> should be added
URL.splitEachLine(String sep, Closure closure) --> should be removed
URL.splitEachLine(String encoding, String sep, Closure closure) --> could be added, again I prefer URL.toInputStream().. or URL.input
then all these each methods should not return void; instead they should return what the closure returns the last time it was called. This allows writing a collecting closure, which has to be added... something like this:
class Collector extends Closure {
    def closure
    def values = []
    def doCall(Object[] args) {
        if (closure != null) closure(*args)
        values << args
        return values
    }
}
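Returning to the encoding point in the description above: the portability problem is easy to reproduce in plain Java (an illustrative sketch; the class name and sample text are invented):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class EncodingPitfall {
    // Decodes the stream with an explicit charset: portable across systems.
    static String readLine(byte[] bytes, String charset) throws IOException {
        BufferedReader r = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(bytes), charset));
        return r.readLine();
    }

    public static void main(String[] args) throws IOException {
        byte[] utf8 = "Grüße".getBytes(StandardCharsets.UTF_8);
        // With the matching charset the text round-trips correctly...
        System.out.println(readLine(utf8, "UTF-8"));
        // ...whereas new InputStreamReader(in) without a charset uses the
        // platform default encoding and may garble the text on other systems,
        // which is exactly the non-portability this issue describes.
    }
}
```

Decoding the same bytes with the wrong charset (say ISO-8859-1) silently produces mojibake rather than an error, which is why the issue argues for explicit-encoding variants of eachLine and splitEachLine.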
Activity
I'll add back in the non-encoding versions this evening. I won't default them to UTF-8, I'll just put them back the way they were.
This is/was a breaking change and should not have appeared in 1.5.5.
Yes, it was an oversight that they were removed. I made some additions which should have caused no problems; then, at the last minute, we refactored some of the additions and two existing methods were caught up in that by accident. Jochen probably didn't realise that the methods already existed when he asked me to remove/refactor them (I moved all of the eachLine methods together in the source file which no doubt made that harder to detect). I should have realised this too when I did the change. The last minute nature of this change didn't help.
I don't see this as a major catastrophe. Just one of those things that happen sometimes. My take on this is that we really do need to have formal release candidates even on minor releases (though they could be over much shorter durations than we do for major releases).
The other lesson for me is that we need to up the coverage of our tests. If this change had caused a test to fail, it would have been spotted immediately and not have slipped through undetected.
Agreed on all points
So I guess the question is whether:
def process = command.execute ( ) process.in.eachLine ( doSomething )
should work or not.
Despite the discussions on why System.in.readLine() was deprecated a while ago, perhaps we could also revisit this.
By default, System.in.readLine() could be restored and use the default system encoding, and we could add variants which also explicitly take an encoding parameter.
System.in.readLine() is pretty handy.
OK, added back in the two accidentally refactored-out-of-existence methods.
For good measure, I also added InputStream.splitEachLine(String sep, Closure closure) back in as it was now a hole in the set of available methods. Caveat Emptor as for all the other methods without the charset.
I also added in these as they can help deal with the issues around StreamDecoder usage with InputStreamReader:
withReader(URL url, String charset, Closure closure)
withReader(InputStream in, String charset, Closure closure)
newReader(InputStream self, String charset)
I am not sure how to handle System.in.readLine(). In hindsight, if I was starting from scratch with such methods in Groovy, I would probably get rid of all line processing methods on InputStreams and URLs and just have ways to get Readers from them and have all line processing in terms of readers.
File and URL have an eachLine method. So I wonder why File and URL do not have an encoding property.
This would be quite natural, in my opinion, since Groovy extends the java.io.File semantics to also cover the file contents.
For File this could probably be done by implementing groovy.io.File extends java.io.File.
I guess it might be possible but eachLine etc. sets up a reader under the covers and it is that reader that has an encoding. You could just as easily do an eachByte on a File or URL in which case no encoding would be involved.
Yes, the reader is being constructed from the File via the CharsetToolkit.
If I understand the code correctly, CharsetToolkit tries to determine the encoding from byte order marks or the file character data. This is a good approach for text/plain files even though it may fail for non-unicode charsets. If Groovy would provide File with a charset property (by deriving an own File class from java.io.File or by a generic property extension mechanism) this, if present, could override the charset chosen by CharsetToolkit.
I do not know if the benefits of my proposal would outweigh the drawbacks of increased complexity in the Groovy runtime. So I think it needs some more discussion.
How would such a Groovy specific file class be better than the encoding property java uses to set the file default encoding?
For the same reason as InputStreamReader has constructors with charsets.
The default file encoding is a per-VM property whereas the files a VM processes may have many different encodings.
I guess I did not explain myself very well... there are different concepts of files, and in Java a file is more like a reference to something that might exist on the file system. For the contents of the file you have streams and readers. Therefore the encoding is something the reader and/or stream have to handle, not the file itself. There is a property to support a default encoding, so the lazy programmer does not have to use the encoding all over the place. I must say, I personally prefer the explicit usage of the encoding, because I am often on systems using multiple encodings, and relying on the default encoding has often been proven to be a bad idea. To say it more strictly, I generally dislike the usage of a default encoding of any kind... but well, it is not really my job to educate other people here. Only, when adding a new concept to Groovy I would of course prefer one that does not have this kind of thinking error built in. Now your "files" differ from the Java concept. They are a concept of their own, and it seems you want to use them instead of normal files. But in this case it is nothing we have to add to java.io.File. No, not right... you want to use it in places where File is used. Right? But we can not change the bytecode of, for example, InputStreamReader so that if it uses a FileInputStream that uses "our" files, not the default encoding for Java is used but the encoding you set on the file. So this would only work well in the Groovy sphere... and in that case, why use java.io.File directly? It would be easy to write a small file handling API for reading text files on its own.
So to answer your question why File and URL have no encoding property: because they are references to things that may or may not exist, and that may or may not be text data in a certain encoding. They could be binary data as well. And not to forget, that encoded text might be read as binary data too!
From the technical side we have the problem, that we can not attach foreign per instance properties. We can also not change the bytecode of libs, just to support an idea of files that conflicts with the original idea of files in Java.
I fully agree with what you say about Java file objects as references (names) and the explicit usage of encodings with streams.
As said above, methods like File.eachLine, File.append, and so on, seemingly extend the file semantics to also cover the contents of the file. This was the reason to think about file properties such as charset that might complete the notion of file = name + metadata + content.
Technically an extension to java.io.File (and other classes) could be achieved e.g. by providing a class that extends java.io.File, or through a more general approach, by aggregating per instance properties to arbitrary objects. The latter could be implemented without any changes to bytecode, e.g. using weak hash maps with the object instances as keys. However, as already said, I'm not quite sure if one would like to have such constructs in the implementation of a programming language since they increase complexity and lead to additional synchronization overhead.
Changes made as requested. It does beg the question though, we sometimes are going to return the last closure result. Other times we return the self object to allow chaining. It would be nice to have a consistent rule for this. | http://jira.codehaus.org/browse/GROOVY-2749 | crawl-003 | refinedweb | 1,626 | 65.22 |
OpenWhisk is an event-driven compute platform also referred to as Serverless computing or as Function as a Service (FaaS) that runs code in response to events or direct invocations. The following figure shows the high-level OpenWhisk architecture.
Examples of events include changes to database records, IoT sensor readings that exceed a certain temperature, new code commits to a GitHub repository, or simple HTTP requests from web or mobile apps. Events from external and internal event sources are channeled through a trigger, and rules allow actions to react to these events.
Actions can be small snippets of code (JavaScript, Swift and many other languages are supported), or custom binary code embedded in a Docker container. Actions in OpenWhisk are instantly deployed and executed whenever a trigger fires. The more triggers fire, the more actions get invoked. If no trigger fires, no action code is running, so there is no cost.
In addition to associating actions with triggers, it is possible to directly invoke an action by using the OpenWhisk API, CLI, or iOS SDK. A set of actions can also be chained without having to write any code. Each action in the chain is invoked in sequence with the output of one action passed as input to the next in the sequence.
With traditional long-running virtual machines or containers, it is common practice to deploy multiple VMs or containers to be resilient against outages of a single instance. However, OpenWhisk offers an alternative model with no resiliency-related cost overhead. The on-demand execution of actions provides inherent scalability and optimal utilization as the number of running actions always matches the trigger rate. Additionally, the developer now only focuses on code and does not worry about monitoring, patching, and securing the underlying server, storage, network, and operating system infrastructure.
Integrations with additional services and event providers can be added with packages. A package is a bundle of feeds and actions. A feed is a piece of code that configures an external event source to fire trigger events. For example, a trigger that is created with a Cloudant change feed will configure a service to fire the trigger every time a document is modified or added to a Cloudant database. Actions in packages represent reusable logic that a service provider can make available so that developers not only can use the service as an event source, but also can invoke APIs of that service.
An existing catalog of packages offers a quick way to enhance applications with useful capabilities, and to access external services in the ecosystem. Examples of external services that are OpenWhisk-enabled include Cloudant, The Weather Company, Slack, and GitHub.
Being an open-source project, OpenWhisk stands on the shoulders of giants, including Nginx, Kafka, Docker, CouchDB. All of these components come together to form a “serverless event-based programming service”. To explain all the components in more detail, lets trace an invocation of an action through the system as it happens. An invocation in OpenWhisk is the core thing a serverless-engine does: Execute the code the user has fed into the system and return the results of that execution.
To give the explanation a little bit of context, let’s create an action in the system first. We will use that action to explain the concepts later on while tracing through the system. The following commands assume that the OpenWhisk CLI is setup properly.
First, we’ll create a file action.js containing the following code which will print “Hello World” to stdout and return a JSON object containing “world” under the key “hello”.
function main() { console.log('Hello World'); return { hello: 'world' }; }
We create that action using.
wsk action create myAction action.js
Done. Now we actually want to invoke that action:
wsk action invoke myAction --result
What actually happens behind the scenes in OpenWhisk?
First: OpenWhisk’s user-facing API is completely HTTP based and follows a RESTful design. As a consequence, the command sent via the
wsk CLI is essentially an HTTP request against the OpenWhisk system. The specific command above translates roughly to:
Note the $userNamespace variable here. A user has access to at least one namespace. For simplicity, let’s assume that the user owns the namespace where myAction is put into.
The first entry point into the system is through nginx, “an HTTP and reverse proxy server”. It is mainly used for SSL termination and forwarding appropriate HTTP calls to the next component.
Not having done much to our HTTP request, nginx forwards it to the Controller, the next component on our trip through OpenWhisk. It is a Scala-based implementation of the actual REST API (based on Akka and Spray) and thus serves as the interface for everything a user can do, including CRUD requests for your entities in OpenWhisk and invocation of actions (which is what we’re doing right now).
The Controller first disambiguates what the user is trying to do. It does so based on the HTTP method you use in your HTTP request. As per translation above, the user is issuing a POST request to an existing action, which the Controller translates to an invocation of an action.
Given the central role of the Controller (hence the name), the following steps will all involve it to a certain extent.
Now the Controller verifies who you are (Authentication) and if you have the privilege to do what you want to do with that entity (Authorization). The credentials included in the request are verified against the so-called subjects database in a CouchDB instance.
In this case, it is checked that the user exists in OpenWhisk’s database and that it has the privilege to invoke the action myAction, which we assumed is an action in a namespace the user owns. The latter effectively gives the user the privilege to invoke the action, which is what he wishes to do.
As everything is sound, the gate opens for the next stage of processing.
As the Controller is now sure the user is allowed in and has the privileges to invoke his action, it actually loads this action (in this case myAction) from the whisks database in CouchDB.
The record of the action contains mainly the code to execute (shown above) and default parameters that you want to pass to your action, merged with the parameters you included in the actual invoke request. It also contains the resource restrictions imposed on it in execution, such as the memory it is allowed to consume.
In this particular case, our action doesn’t take any parameters (the function’s parameter definition is an empty list), thus we assume we haven’t set any default parameters and haven’t sent any specific parameters to the action, making for the most trivial case from this point-of-view.
The Load Balancer, which is part of the Controller, has a global view of the executors available in the system by checking their health status continuously. Those executors are called Invokers. The Load Balancer, knowing which Invokers are available, chooses one of them to invoke the action requested.
From now on, mainly two bad things can happen to the invocation request you sent in:
The answer to both is Kafka, “a high-throughput, distributed, publish-subscribe messaging system”. Controller and Invoker solely communicate through messages buffered and persisted by Kafka. That lifts the burden of buffering in memory, risking an OutOfMemoryException, off of both the Controller and the Invoker while also making sure that messages are not lost in case the system crashes.
To get the action invoked then, the Controller publishes a message to Kafka, which contains the action to invoke and the parameters to pass to that action (in this case none). This message is addressed to the Invoker which the Controller chose above from the list of available invokers.
Once Kafka has confirmed that it got the message, the HTTP request to the user is responded to with an ActivationId. The user will use that later on, to get access to the results of this specific invocation. Note that this is an asynchronous invocation model, where the HTTP request terminates once the system has accepted the request to invoke an action. A synchronous model (called blocking invocation) is available, but not covered by this article.
The Invoker is the heart of OpenWhisk. The Invoker’s duty is to invoke an action. It is also implemented in Scala. But there’s much more to it. To execute actions in an isolated and safe way it uses Docker.
Docker is used to setup a new self-encapsulated environment (called container) for each action that we invoke in a fast, isolated and controlled way. In a nutshell, for each action invocation a Docker container is spawned, the action code gets injected, it gets executed using the parameters passed to it, the result is obtained, the container gets destroyed. This is also the place where a lot of performance optimization is done to reduce overhead and make low response times possible.
In our specific case, as we’re having a Node.js based action at hand, the Invoker will start a Node.js container, inject the code from myAction, run it with no parameters, extract the result, save the logs and destroy the Node.js container again.
As the result is obtained by the Invoker, it is stored into the activations database as an activation under the ActivationId mentioned further above. The activations database lives in CouchDB.
In our specific case, the Invoker gets the resulting JSON object back from the action, grabs the log written by Docker, puts them all into the activation record and stores it into the database. It will look roughly like this:
{ "activationId": "31809ddca6f64cfc9de2937ebd44fbb9", "response": { "statusCode": 0, "result": { "hello": "world" } }, "end": 1474459415621, "logs": [ "2016-09-21T12:03:35.619234386Z stdout: Hello World" ], "start": 1474459415595, }
Note how the record contains both the returned result and the logs written. It also contains the start and end time of the invocation of the action. There are more fields in an activation record, this is a stripped down version for simplicity.
Now you can use the REST API again (start from step 1 again) to obtain your activation and thus the result of your action. To do so you’d use:
wsk activation get 31809ddca6f64cfc9de2937ebd44fbb9
We’ve seen how a simple wsk action invoke myAction passes through different stages of the OpenWhisk system. The system itself mainly consists of only two custom components, the Controller and the Invoker. Everything else is already there, developed by so many people out there in the open-source community.
You can find additional information about OpenWhisk in the following topics: | https://apache.googlesource.com/openwhisk/+/HEAD/docs/about.md | CC-MAIN-2020-45 | refinedweb | 1,784 | 52.49 |
In the context of the F# language, a module is a grouping of F# code, such as values, types, and function values, in an F# program. Grouping code in modules helps keep related code together and helps avoid name conflicts in your program.
Syntax
// Top-level module declaration. module [accessibility-modifier] [qualified-namespace.]module-name declarations // Local module declaration. module [accessibility-modifier] module-name = declarations
Remarks
An F# module is a grouping of F# code constructs such as types, values, function values, and code in
do bindings. It is implemented as a common language runtime (CLR) class that has only static members. There are two types of module declarations, depending on whether the whole file is included in the module: a top-level module declaration and a local module declaration. A top-level module declaration includes the whole file in the module. A top-level module declaration can appear only as the first declaration in a file.
In the syntax for the top-level module declaration, the optional qualified-namespace is the sequence of nested namespace names that contains the module. The qualified namespace does not have to be previously declared.
You do not have to indent declarations in a top-level module. You do have to indent all declarations in local modules. In a local module declaration, only the declarations that are indented under that module declaration are part of the module.
If a code file does not begin with a top-level module declaration or a namespace declaration, the whole contents of the file, including any local modules, becomes part of an implicitly created top-level module that has the same name as the file, without the extension, with the first letter converted to uppercase. For example, consider the following file.
// In the file program.fs. let x = 40
This file would be compiled as if it were written in this manner:
module Program let x = 40
If you have multiple modules in a file, you must use a local module declaration for each module. If an enclosing namespace is declared, these modules are part of the enclosing namespace. If an enclosing namespace is not declared, the modules become part of the implicitly created top-level module. The following code example shows a code file that contains multiple modules. The compiler implicitly creates a top-level module named
Multiplemodules, and
MyModule1 and
MyModule2 are nested in that top-level module.
// In the file multiplemodules.fs. // MyModule1 module MyModule1 = // Indent all program elements within modules that are declared with an equal sign. let module1Value = 100 let module1Function x = x + 10 // MyModule2 module MyModule2 = let module2Value = 121 // Use a qualified name to access the function. // from MyModule1. let module2Function x = x * (MyModule1.module1Function module2Value)
If you have multiple files in a project or in a single compilation, or if you are building a library, you must include a namespace declaration or module declaration at the top of the file. The F# compiler only determines a module name implicitly when there is only one file in a project or compilation command line, and you are creating an application.
The accessibility-modifier can be one of the following:
public,
private,
internal. For more information, see Access Control. The default is public.
Referencing Code in Modules
When you reference functions, types, and values from another module, you must either use a qualified name or open the module. If you use a qualified name, you must specify the namespaces, the module, and the identifier for the program element you want. You separate each part of the qualified path with a dot (.), as follows.
Namespace1.Namespace2.ModuleName.Identifier
You can open the module or one or more of the namespaces to simplify the code. For more information about opening namespaces and modules, see Import Declarations: The
open Keyword.
The following code example shows a top-level module that contains all the code up to the end of the file.
module Arithmetic let add x y = x + y let sub x y = x - y
To use this code from another file in the same project, you either use qualified names or you open the module before you use the functions, as shown in the following examples.
// Fully qualify the function name. let result1 = Arithmetic.add 5 9 // Open the module. open Arithmetic let result2 = add 5 9
Nested Modules
Modules can be nested. Inner modules must be indented as far as outer module declarations to indicate that they are inner modules, not new modules. For example, compare the following two examples. Module
Z is an inner module in the following code.
module Y = let x = 1 module Z = let z = 5
But module
Z is a sibling to module
Y in the following code.
module Y = let x = 1 module Z = let z = 5
Module
Z is also a sibling module in the following code, because it is not indented as far as other declarations in module
Y.
module Y = let x = 1 module Z = let z = 5
Finally, if the outer module has no declarations and is followed immediately by another module declaration, the new module declaration is assumed to be an inner module, but the compiler will warn you if the second module definition is not indented farther than the first.
// This code produces a warning, but treats Z as a inner module. module Y = module Z = let z = 5
To eliminate the warning, indent the inner module.
module Y = module Z = let z = 5
If you want all the code in a file to be in a single outer module and you want inner modules, the outer module does not require the equal sign, and the declarations, including any inner module declarations, that will go in the outer module do not have to be indented. Declarations inside the inner module declarations do have to be indented. The following code shows this case.
// The top-level module declaration can be omitted if the file is named // TopLevel.fs or topLevel.fs, and the file is the only file in an // application. module TopLevel let topLevelX = 5 module Inner1 = let inner1X = 1 module Inner2 = let inner2X = 5 | https://docs.microsoft.com/en-us/dotnet/articles/fsharp/language-reference/modules | CC-MAIN-2017-22 | refinedweb | 1,022 | 54.12 |
my name is glen lacock
I a student at uop and a writer
My first book has been writing and is on B and N nook.
I here for some help with my code
I have one erro that I can not find
Please any help would be great
glen
/*Programmer: GLen Lacock * Date: 3/22/2011 * Filename: helloworld * Purpose: This program to show hello world and jlabel */ /* java libraries*/ import java.awt.*; import javax.swing.*; public class Helloworld extends JFrame { public static void main(String argv[]) { new SwingHelloWorld("I am Student at UOP"); } } public HelloWorld(String name) { super(name); this.setDefaultCloseOperation(EXIT_ON_CLOSE); JLabel label = new JLabel("Hello, world!"); label.setPreferredSize(new Dimension(210, 80)); this.getContentPane().add(label); this.pack(); this.setVisible(true); } | http://www.javaprogrammingforums.com/awt-java-swing/8096-hi-java-swing-errors.html | CC-MAIN-2015-40 | refinedweb | 124 | 50.36 |
Ruby is an open source, general programming language that has been influenced by Perl, Smalltalk, Eiffel, Ada and Lisp. It has an elegant syntax, and is focused on simplicity and productivity. Ruby is easy to learn and works on the principle of ‘writing more with less’.
Programming languages and spoken languages are quite similar; both have one or more categories. A few programming language categories you might have heard of include imperative, object-oriented, functional or logic-based. Ruby is a powerful and dynamic open source, object-oriented language created by a developer known as Matz, and runs on almost all platforms such as Linux, Windows, MacOS, etc. Every programmer’s first computer program is usually the ‘Hello World Program’, as shown below.
Are you, now, more interested in Ruby? Ruby is one of the easiest languages to learn as it focuses on the productivity of program development. It is a server-side scripting language and has features that are similar to those of Smalltalk, Perl and Python.
Ruby and Ruby on Rails
Ruby is a programming language while Ruby on Rails, or simply Rails, is a software library that extends the Ruby programming language. Rails is a software framework dependent on the Ruby programming language for creating Web applications.
Web applications can also be written in Ruby, but writing a fully functional Web application from scratch in Ruby is a daunting task. Rails is a collection of pre-written code that makes writing Web applications easier and helps make them simpler to maintain.
Still confused? Think of how a pizza is made. You simply spread the tomato sauce on the pizza base, top it with the veggies and spread the grated cheese. But where did the pizza base come from? It’s easier to get it from the grocery store instead of baking your own using flour and water. In this case, the Ruby programming language is the flour and water. By learning Ruby, you are a step closer to Rails and can create Web applications like Twitter one day.
Application domains
Text processing: Ruby can be embedded into HTML, and has a clean and easy syntax that allows a new developer to learn it very quickly and easily.
CGI programming: Ruby can be used to write Common Gateway Interface (CGI) scripts. It can easily be connected to databases like DB2, MySQL, Oracle and Sybase.
Network programming: Network programming can be fun with Ruby’s well-designed socket classes.
GUI programming: Ruby supports many GUI tools such as Tcl/Tk, GTK and OpenGL.
XML programming: Ruby has a rich set of built-in functions, which can be used directly into XML programming.
Prototyping: Ruby is often used to make prototypes that sometimes become production systems by replacing the bottlenecks with extensions written in C.
Programming fundamentals
1. Instructions and interpreters
Ruby is an ‘interpreted’ programming language, which means it can’t run on your processor directly, but has to be fed into a middleman called the ‘virtual machine’ or VM. The VM takes in Ruby code on one side, and speaks natively to the operating system and the processor on the other. The benefit of this approach is that you can write Ruby code once and, typically, execute it on many different operating systems and hardware platforms. A Ruby program can’t run on its own; you need to load the VM. There are two ways of executing Ruby with the VM — through IRB and through the command line.
Running Ruby from the command line: This is the durable way of writing Ruby code, because you save your instructions into a file. This file can then be backed up, transferred, added to source control, etc.
We might create a file named my_program.rb like this:
class Sample def hello puts “Hello World” end end s=Sample.new s.hello
Then we could run the program in the terminal like this:
$ruby my_program.rb Hello World
When you run ruby my_program.rb you’re actually loading the Ruby virtual machine, which in turn loads your my_program.rb.
Running Ruby from IRB: IRB stands for ‘interactive Ruby’ and is a tool you can use to interactively execute Ruby expressions read from the standard input.
The irb command from your shell will start the interpreter.
2. Variables
Programming is all about creating abstractions, and in order to create an abstraction we must be able to assign names to things. Variables are a way of creating a name for a piece of data. In some languages you need to specify what type of data (like a number, word, etc) can go in a certain variable. Ruby, however, has a flexible type system where any variable can hold any kind of data.
Creating and assigning a variable: In some languages you need to ‘declare’ a variable before you assign a value to it. Ruby variables are automatically created when you assign a value to them. Let’s try an example:
C:\Users\Administrator>irb irb(main):001:0> a = 5 => 5 irb(main):002:0> a => 5
The line a = 5 creates the variable named ‘a’ and stores the value ‘5’ into it.
Right side first: In English, we read left to right, so it’s natural to read code left to right. But when evaluating an assignment using the single equals (=), Ruby actually evaluates the right side first. Take the following example:
C:\Users\Administrator>irb irb(main):001:0> b = 10 + 5 b => 15
The 10 + 5 is evaluated first, and the result is given the name ‘b’.
Flexible typing: Ruby’s variables can hold any kind of data, and can even change the type of data they hold. For instance:
C:\Users\Administrator>irb irb(main):001:0> c = 20 => 20 irb(main):002:0> c = “hello” => “hello”
The first assignment gave the name ‘c’ to the number 20. The second assignment changed ‘c’ to the value ‘hello’.
3. Strings
In the real world, strings tie things up. Programming strings have nothing to do with real-world strings. Programming strings are used to store collections of letters and numbers. That could be a single letter like ‘a’, a word like ‘hi’, or a sentence like ‘Hello my friends’.
Writing a string: A Ruby string is defined as a quote sign (“) followed by a zero or more letters, numbers or symbols, which are followed by a closing quote sign (”). The shortest possible string is called the empty string: “”. It’s not uncommon for a single string to contain paragraphs or even pages of text.
Common string methods: Let’s experiment with strings and some common methods in IRB.
The .length method tells you how many characters (including spaces) are in the string:
C:\Users\Administrator>irb irb(main):001:0> greeting = “hello everyone!” => “hello everyone!” irb(main):002:0> greeting.length => 15
Often, you’ll have a string that you want to break into parts. For instance, imagine you have a sentence stored in a string and want to break it into words:
C:\Users\Administrator>irb irb(main):001:0> sentence = “this is my sample sentence” => “this is my sample sentence” irb(main):002:0> sentence.split => [“this”, “is”, “my”, “sample”, “sentence”]
The .split method gives you back an array, which we’ll learn about later in this article. It cuts the string wherever it encounters a space (“ ’’) character.
Combining strings and variables: Very often, we want to combine the value of a variable with a string. For instance, let’s start with this example string:
“Good morning, Frank!”
When we put that into IRB, it just spits back the same string. If we were writing a proper program, we’d want it to greet users with their name rather than ‘Frank’. What we need to do is combine a variable with the string. There are two ways of doing this.
String concatenation: The simplistic approach is called ‘string concatenation’, which is joining strings together with the plus sign:
C:\Users\Administrator>irb irb(main):001:0> name = “Frank” irb(main):002:0> puts “Good morning, ” + name + “!”
In the first line, we set up a variable to hold the name. In the second line, we print the string “Good morning”, combined with the value of the variable name and the string “!”.
String interpolation: The second approach is to use string interpolation, where we stick data into the middle of a string. String interpolation only works on a double-quoted string. Within the string we use the interpolation marker #{}. Inside those brackets, we can put any variable or Ruby code, which will be evaluated, converted to a string, and output in that spot of the outer string. Our previous example could be rewritten like this:
C:\Users\Administrator>irb irb(main):001:0> name = “Frank” irb(main):002:0> puts “Good morning, #{name}!”
If you compare the outputs, you’ll see that they give the same results. The interpolation style tends to have fewer characters to type, and fewer open/close quotes and plus signs to forget.
4. Numbers
There are two basic kinds of numbers: integers (whole numbers) and floats (have a decimal point). Integers are much easier for both you and the computer to work with. You can use normal math operations with integers including +, -, /, and *. Integers have a bunch of methods to help you do math-related things, which you can see by calling 5.methods.
Repeating instructions: A common pattern in other languages is the ‘for’ loop, used to repeat an instruction a set number of times. For example, in JavaScript you might write:
for(var i = 0; i < 5; i++) { console.log(“Hello, World”); }
‘For’ loops are common, but they’re not very readable. Because Ruby’s integers are objects, they have methods. One of these is the .times method to repeat an instruction a set number of times.
You can rewrite the above loop in Ruby, as follows:
5.times do puts “Hello, World!” end
In this example, we’re using both the .times method and what’s called a block.
5. Blocks
Blocks are a powerful concept used frequently in Ruby. Think of them as a way of bundling up a set of instructions for use elsewhere.
Starting and ending blocks: You just saw a block used with the .times method on an integer. The block starts with the keyword ‘do’ and ends with the keyword ‘end’. The do/end style is always acceptable.
Bracket blocks: When a block contains just a single instruction, though, we often use the alternate markers ‘{’ and ‘}’ to begin and end the block:
5.times{ puts “Hello, World!” }
6. Arrays
Usually, when we’re writing a program it’s because we need to deal with a collection of data. There are lots of cool things to do with an array. Here are a few examples.
.sort: The sort method will return a new array where the elements are sorted. If the elements are strings, they’ll come back in alphabetical order. If they’re numbers, they’ll come back in ascending value order. Try these:
C:\Users\Administrator>irb irb(main):001:0> one = [“this”, “is”, “an”, “array”] irb(main):002:0> one.sort irb(main):003:0> one
You can rearrange the order of the elements using the sort method. You can iterate through each element using the each method. You can mash them together into one string using the join method. You can find the address of a specific element by using the index method. You can ask an array if an element is present with the include method. We use arrays whenever we need a list in which the elements are in a specific order.
7. Hashes
A hash is a collection of data, in which each element of the data is addressed by a name. A hash is an unordered collection, where the data is organised into ‘key/value pairs’. Hashes have a more complicated syntax that takes some getting used to:
C:\Users\Administrator>irb irb(main):001:0> produce = {“apples” => 3, “kiwi” => 1} irb(main):002:0> puts “There are #{produce[‘apples’]} apples in the fridge.”
Simplified hash syntax: We commonly use symbols as the keys of a hash. When all the keys are symbols, then there is a shorthand syntax that can be used:
C:\Users\Administrator>irb irb(main):001:0> produce = {“apples” => 3, “kiwi” => 1} irb(main):002:0> puts “There are #{produce[:apples]} apples in the fridge.”
Notice that the keys end with a colon rather than beginning with one, even though these are symbols.
8. Conditionals
Conditional statements evaluate to true or false. The most common conditional operators are == (equal), > (greater than), >= (greater than or equal to), < (less than), and <= (less than or equal to).
Conditional branching/instructions: Why do we have conditional statements? Most often, it’s to control conditional instructions, especially if/elseif/else structures. Let’s write an example by adding a method like this in IRB:
def water_status(minutes) if minutes < 7 puts “The water is not boiling yet” elsif minutes == 7 puts “It’s just barely boiling” elsif minutes == 8 puts “It’s boiling!” else puts “Hot! Hot! Hot!” end end
Try running the method with water_status(5), water_status(7), water_status(8) and water_status(9).
Understanding the execution flow: When the minutes are 5, here is how the execution goes: “Is it true that 5 is less than 7? Yes, it is; so print the line The water is not boiling yet”.
When the minutes are 7, it goes like this: “Is it true that 7 is less than 7? No. Next, is it true that 7 is equal to 7? Yes, it is; so print the line It’s just barely boiling”.
When the minutes are 8, it goes like this: “Is it true that 8 is less than 7? No. Next, is it true that 8 is equal to 7? No. Next, is it true that 8 is equal to 8? Yes, it is; so print the line It’s boiling!”.
Lastly, when the total is 9, the execution goes like this: “Is it true that 9 is less than 7? No. Next, is it true that 9 is equal to 7? No. Next, is it true that 9 is equal to 8? No. Since none of these are true, execute the else and print the line Hot! Hot! Hot!.
Connect With Us | http://opensourceforu.com/2016/10/ruby-programming-fundamentals/ | CC-MAIN-2017-09 | refinedweb | 2,401 | 64.51 |
Here is what I have so far
from tkinter import *
import random

class Histogram:
    def __init__(self):
        self.height = 300
        self.width = 600

        # Create window
        window = Tk()
        window.title("Histogram")  # title() is a method call, not an attribute
        self.canvas = Canvas(window, bg="white", width=self.width, height=self.height)
        self.canvas.pack()

        # Create frame
        frame = Frame(window)
        frame.pack()

        # Create buttons
        buttonSort = Button(frame, text="Sort", command=self.sort)
        buttonSort.pack(side=LEFT)
        buttonreset = Button(frame, text="Reset", command=self.reset)
        buttonreset.pack(side=LEFT)
        buttonCreateList = Button(frame, text="Create List", command=self.createList)
        buttonCreateList.pack(side=LEFT)

        window.mainloop()

    def createList(self):
        self.lst = [x for x in range(1, 21)]
        random.shuffle(self.lst)
        #for i in range(len(self.lst)):
        #    self.canvas.create_rectangle(self.lst[i], 10, self.lst[i], 10)
        print(self.lst)

    def sort(self):
        self.lst.sort()
        print("List sorted")
        print(self.lst)

    def reset(self):
        self.lst = []
        print("List reset")
        print(self.lst)

Histogram()
I'm having some serious issues when it comes to drawing the rectangles on the screen. How can I draw them so that each rectangle drawn is the length of the corresponding integer?
So a random list might look like this
[1, 3, 5, 7, 9, 2, 4, 6, 8, ...]
So rectangle 1 would be length 1, width 0.5,
rectangle 2 would be length 3, width 0.5,
and so on.
Then when I press the sort button, it should sort them in size order and repaint them. This is probably out of my depth, but I thought I would have a bash at the problem anyway.
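Here's a sketch of one way to tackle the drawing (the per-bar slot width, the 2-pixel gap, and `max_value=20` are assumptions to tune, not anything from the original code): first compute each value's rectangle corners, then feed them to `canvas.create_rectangle`. Note that canvas y-coordinates grow downward, so a bar of pixel height `h` spans from `height - h` down to `height`.

```python
def bar_coords(values, width, height, max_value=20):
    """Compute (x0, y0, x1, y1) canvas coordinates for each value.

    Bars sit on the bottom edge of the canvas; a value equal to
    max_value fills the full canvas height.
    """
    bar_width = width / len(values)          # horizontal slot per bar
    coords = []
    for i, v in enumerate(values):
        x0 = i * bar_width + 2               # 2-pixel gap between bars
        x1 = (i + 1) * bar_width - 2
        bar_height = v / max_value * height  # scale value to pixels
        y0 = height - bar_height             # canvas y grows downward
        y1 = height
        coords.append((x0, y0, x1, y1))
    return coords
```

Inside `createList` you could then call `self.canvas.delete("all")` followed by `self.canvas.create_rectangle(*c, fill="blue")` for each `c` in `bar_coords(self.lst, self.width, self.height)` — and do the same at the end of `sort` and `reset`, so the bars are redrawn whenever the list changes.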
This post has been edited by The Chief: 01 February 2015 - 07:44 PM | http://www.dreamincode.net/forums/topic/369576-tkinter-drawing-rectangles/ | CC-MAIN-2018-22 | refinedweb | 273 | 63.96 |
Four HTTP Request Body Interfaces
February 19, 2010
Michael Snoyman
Sorry for the long delay since my last post, I've actually been working on a project recently. It should be released last week, but since it's a web application programmed in Haskell, it's given me a chance to do some real-world testing of WAI, wai-extra (my collection of handlers and middlewares) and Yesod (my web framework). None of these have been released yet, though that date is fast approaching.
Anyway, back to the topic at hand: request body interface. I'm going to skip over the response body for now because, frankly, I think it's less contraversial: enumerators seem to be a good fit. What flavor of enumerator could be a good question, but I'd rather figure out what works best on the request side and then choose something that matches nicely.
I've been evaluating the choices in order to decide what to use in the WAI. In order to get a good comparison of the options, let's start off by stating our goals:
- Performant. The main goal of the WAI is not user-friendliness, but to be the most efficient abstraction over different servers possible.
- Safe. You'll see below some examples of being unsafe.
- Determinstic. We want to make sure that we are never forced to use more than a certain amount of memory.
- Early termination. We shouldn't be forced to read the entire contents of a long body, as this could open up DoS attacks.
- Simple. Although user-friendliness isn't the first goal, it's still something to consider.
- Convertible. In particular, many people will be most comfortable (application-side) using cursors and lazy bytestrings, so we'd like an interface that can be converted to those.
One other point: we're not going to bother considering anything but bytestrings here. I think the reasons for this are obvious.
Lazy bytestring
This is the approach currently used by Hack, which is pretty well used and accepted by the community (including myself).
Pros
- Allows both the server and client to write simple code.
- Lots of tools to support it in standard libraries.
- Mostly space efficient.
Cons
- I said mostly space efficient, because you only get the space efficiency if you use lazy I/O. Lazy I/O is also known as unsafeInterleaveIO. Remember that concern about safety I mentioned above? This is it.
- Besides that, lazy I/O is non-deterministic.
In fact, avoiding lazy I/O is the main impetus for writing the WAI. I don't consider this a possible solution.
Source
The inspiration for this approach is- frankly- every imperative IO library on the planet. Think of Handle: you have functions to open the handle, close it, test if there's more data (isEOF) and to get more data. In our case, there's no need for the first two (the server performs them before and after calling the application, respectively), so we can actually get away with this definition:
type Source = IO (Maybe ByteString) -- a strict bytestring
Each time you call Source, it will return the next bytestring in the request, until the end, where a Nothing is returned.
Pros
- Simple and standard.
- Deterministic.
- Space efficient.
Cons
- This makes the server the callee, not the caller. In general, it is more difficult to write callees, though in the particular case of a server I'm not certain how much more difficult it really is.
- This provides no mechanism for the server to keep state (eg, bytes read so far).
Overall, this is a pretty good approach nonetheless. Also, at the cost of complicating things a bit, we could redefine Source as:
type Source a = a -> IO (Maybe (a, ByteString))
This would solve the second of the problems above by forcing the application to thread the state through.
Recursive enumerator
The idea for the recursive enumerator comes from a few sources, but I'll cite Hyena for the moment. The idea takes a little bit of time to wrap your mind around, and it doesn't help that there are many definitions of enumerators and iteratees with slightly different definitions. Here I will present a very specialized version of an enumerator, which should hopefully be easier to follow.
You might be wondering: what's a recursive enumerator? Just ignore the word for now, it will make sense when we discuss the non-recursive variant below.
Anyway, let's dive right in:
-- Note: this is a strict byte string type Enumerator a = (a -> ByteString -> IO (Either a a)) -> a -> IO (Either a a)
I appologize in advance for having slightly complicated this type from its usual form by making the return type IO (Either a a) instead of IO a, but it has some real world uses. I know it's possible to achieve the same result with the latter definition, but it's slightly more work. I'm not opposed to switching back to the former if there's enough desire.
So what exactly does this mean? An Enumerator is a data producer. When you call the enumerator, it's going to start handing off one bytestring at a time to the iteratee.
The iteratee is the first argument to the enumerator. It is a data consumer. To put it more directly: the application will be writing an iteratee which receives the raw request body and generates something with it, most likely a list of POST parameters.
So what's that a? It has a few names: accumulator, seed, or state. That's the way the iteratee is able to keep track of what it's doing. Each step along the way, the enumerator will collect the result of the iteratee and pass it in next time around.
And finally, what's going on with that Either? That's what allows us to have early termination. If the iteratee returns a Left value, it's a signal to the enumerator to stop processing data. A Right means to keep going. Similarly, when the enumerator finishes, it returns a Left to indicate that the iteratee requested early termination, and Right to indicate that all input was consumed.
To give a motivating example, here's a function that converts an enumerator into a lazy bytestring. Two things: firstly, this function is not written efficiently, it's meant to be easy to follow. More importantly, this lazy bytestring is not exactly lazy: the entire value must be read into memory. If we were two convert this in reality to a lazy bytestring, we would want to use lazy IO so reduce memory footprint. However, as Nicolas Pouillard pointed out to me, the only way to do this involes forkIO.
import Network.Wai import qualified Data.ByteString as S import qualified Data.ByteString.Lazy as L import Control.Applicative type Iteratee a = a -> S.ByteString -> IO (Either a a) toLBS :: Enumerator [S.ByteString] -> IO L.ByteString toLBS e = L.fromChunks . reverse . either id id <$>
As this post is already longer than I'd hoped for, I'll skip an explanation and to pros/cons:
Pros
- Space efficient and deterministic.
- Server is the caller, makes it easier to write.
- No need for IORef/MVar at all.
Cons
- Application is the callee, which is more difficult to write. However, this can be mitigated by having a single package which does POST parsing from an enumerator.
- Cannot be (simply) translated into a source or lazy bytestring. Unless someone can show otherwise, you need to start the enumerator is a separate thread and then use MVars or Chans to pass the information back. On top of that, you then need to be certain to use up all input, or else you will have a permanently locked thread.
While I think this is a great interface for the response body, and I've already implemented working code on top of this, I'm beginning to think we should reconsider going this route.
Non-recursive enumerator
The inspiration for this approach comes directly from a paper by Oleg. I found it easier to understand what was going on once I specialized the types Oleg presents, so I will be doing the same here. I will also do a little bit of renaming, so appologies in advance.
The basic distinction between this and a recursive enumerator is that the latter calls itself after calling the iteratee, while the former is given a function to call.
I'm not going to go into a full discussion of this here, but I hope to make another post soon explaining exactly what's going on (and perhaps deal with some of the cons).
type Enumerator a = RecEnumerator a -> RecEnumerator a type RecEnumerator a = Iteratee a -> a -> IO (Either a a) type Iteratee a = a -> B.ByteString -> IO (Either a a)
Pros
- Allows creation of the source (Oleg calls it a cursor) interface- and thus lazy byte string- without forkIO.
- Space efficient and deterministic.
Cons
- I think it's significantly more complicated than the other approaches, though that could just be the novelty of it.
- It still requires use of an IORef/MVar to track state. I have an idea of how to implement this without that, but it is significantly more complex.
Conclusion
Well, the conclusion for me is I'm beginning to lean back towards the Source interface. It's especially tempting to try out the source variant I mention, since that would eliminate the need for IORef/MVar. I'd be interested to hear what others have to say though. | http://www.yesodweb.com/blog/2010/02/request-body-interfaces | CC-MAIN-2013-48 | refinedweb | 1,594 | 63.49 |
New Book Content
September 23, 2011
Michael Snoyman
Joins
As we said in the introduction to this chapter, Persistent is non-relational: it works perfectly well with either SQL or non-SQL, and there is nothing inherent in its design which requires relations. On the other hand, this very chapter gave advice on modeling relations. What gives?
Both statements are true: you're not tied to relations, but they're available if you want to use them. And when the time comes, Persistent provides you with the tools to easily create efficient relational queries, or in SQL terms: table joins.
To play along with our existing no-SQL slant, the basic approach to doing joins in Persistent does not actually use any joins in SQL. This means that a Persistent join will work perfectly well with MongoDB, even though Mongo doesn't natively support joins. However, when dealing with a SQL database, most of the time you'll want to use the database's join abilities. And for this, Persistent provides an alternate module that works just that way. (Obviously, that module is incompatible with Mongo.)
The best part? These two modules have an identical API. All you have to do is swap out which runJoin function you import, and the behavior changes as well.
So how does this joining work? Let's look at a one-to-many relationship, such as a car/owner example above. Every car has an owner, and every owner has zero or more cars. The Database.Persist.Query.Join module provides a datatype, SelectOneMany, that contains a bunch of join settings, such as how to sort the owners (somOrderOne) and how to filter the cars (somFitlterMany).
In addition, there is a selectOneMany function, which will fill in defaults for all the settings except two. This function needs to be told how to filter the cars based on an owner, and how to determine the owner from a car value.
When you run a SelectOneMany, it will return something with a bit of a crazy type signature:
[((PersonId, Person), [(CarId, Car)])]. This might look intimidating, but lets
simplify it just a bit:
type PersonPair = (PersonId, Person) type CarPair = (CarId, Car) type Result = [(PersonPair, [CarPair])]
In other words, all this means is a grouped list of people to their cars.
What happens if a person doesn't have a car? By default, they won't show up in the output, though you can override this with the somIncludeNoMatch record. The default behavior matches the behavior of a SQL inner join. Overriding this matches the behavior of a SQL left join.
One other note: while the somOrderOne field is optional, you'll almost always want to provide it. Without it, there is no guarantee that the cars will be grouped appropriately. You might end up with multiple records for a single person.
{-# LANGUAGE TypeFamilies, TemplateHaskell, MultiParamTypeClasses, GADTs, QuasiQuotes, OverloadedStrings, FlexibleContexts #-} import Database.Persist import Database.Persist.Sqlite import Database.Persist.TH import Database.Persist.Query.Join (SelectOneMany (..), selectOneMany) import Control.Monad.IO.Class (liftIO) -- We'll use the SQL-enhanced joins. If you want the in-application join -- behavior instead, just import runJoin from Database.Persist.Query.Join import Database.Persist.Query.Join.Sql (runJoin) share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persist| Person name String Car owner PersonId name String |] main :: IO () main = withSqliteConn ":memory:" $ runSqlConn $ do runMigration migrateAll bruce <- insert $ Person "Bruce Wayne" insert $ Car bruce "Bat Mobile" insert $ Car bruce "Porsche" -- this could go on a while peter <- insert $ Person "Peter Parker" -- poor Spidey, no car logan <- insert $ Person "James Logan" -- Wolverine insert $ Car logan "Harley" britt <- insert $ Person "Britt Reid" -- The Green Hornet insert $ Car britt "The Black Beauty" results <- runJoin (selectOneMany (CarOwner <-.) carOwner) { somOrderOne = [Asc PersonName] } liftIO $ printResults results printResults :: [(Entity Person, [Entity Car])] -> IO () printResults = mapM_ goPerson where goPerson :: (Entity Person, [Entity Car]) -> IO () goPerson ((Entity _personid person), cars) = do putStrLn $ personName person mapM_ goCar cars putStrLn "" goCar :: (Entity Car) -> IO () goCar (Entity _carid car) = putStrLn $ " " ++ carName car
Monadic Forms
Often times, a simple form layout is adequate, and applicative forms excel at this approach. Sometimes, however, you'll want to have a more customized look to your form.
For these use cases, monadic forms fit the bill. They are a bit more verbose than their applicative cousins, but this verbosity allows you to have complete control over what the form will look like. In order to generate the form above, we could code something like this.
{-# LANGUAGE OverloadedStrings, TypeFamilies, QuasiQuotes, TemplateHaskell, MultiParamTypeClasses #-} import Yesod import Control.Applicative import Data.Text (Text) data Monadic = Monadic mkYesod "Monadic" [parseRoutes| / RootR GET |] instance Yesod Monadic instance RenderMessage Monadic FormMessage where renderMessage _ _ = defaultFormMessage data Person = Person { personName :: Text, personAge :: Int } deriving Show personForm :: Html -> MForm Monadic Monadic (FormResult Person, Widget) personForm extra = do (nameRes, nameView) <- mreq textField "this is not used" Nothing (ageRes, ageView) <- mreq intField "neither is this" Nothing let personRes = Person <$> nameRes <*> ageRes let widget = do toWidget [lucius| ##{fvId ageView} { width: 3em; } |] [whamlet| #{extra} <p> Hello, my name is # ^{fvInput nameView} \ and I am # ^{fvInput ageView} \ years old. # <input type=submit |] return (personRes, widget) getRootR :: Handler RepHtml getRootR = do ((res, widget), enctype) <- runFormGet personForm defaultLayout [whamlet| <p>Result: #{show res} <form enctype=#{enctype}> ^{widget} |] main :: IO () main = warpDebug 3000 Monadic
Similar to the applicative
areq, we use
mreq for monadic forms. (And yes, there's also
mopt for optional fields.) But there's a big difference:
mreq gives us back a pair of values. Instead of hiding away the
FieldView value and
automatically inserting it into a widget, we get the control to insert it as we see
fit.
FieldView has a number of pieces of information. The most
important is
fvInput, which is the actual form field. In this example,
we also use
fvId, which gives us back the HTML
id
attribute of the input tag. In our example, we use that to specify the width of the
field.
You might be wondering what the story is with the "this is not used" and
"neither is this" values.
mreq takes a
FieldSettings as its second argument. Since
FieldSettings
provides an
IsString instance, the strings are essentially expanded by
the compiler
to:
In the case of applicative forms, theIn the case of applicative forms, thefromString "this is not used" == FieldSettings { fsLabel = "this is not used" , fsTooltip = Nothing , fsId = Nothing , fsName = Nothing , fsClass = [] }
fsLabeland
fsTooltipvalues are used when constructing your HTML. In the case of monadic forms, Yesod does not generate any of the "wrapper" HTML for you, and therefore these values are ignored.
The other interesting bit is the
extra value.
GET forms include an extra field to indicate that they have been
submitted, and
POST forms include a security tokens to prevent CSRF
attacks. If you don't include this extra hidden field in your form, Yesod will not
accept it.
Other than that, things are pretty straight-forward. We create our
personRes value by combining together the
nameRes
and
ageRes values, and then return a tuple of the person and the
widget. And in the
getRootR function, everything looks just like an
applicative form. In fact, you could swap out our monadic form with an applicative one
and the code would still work.
Input forms
Applicative and monadic forms handle both the generation of your HTML code and the parsing of user input. Sometimes, you only want to do the latter, such as when there's an already-existing form in HTML somewhere, or if you want to generate a form dynamically using Javascript. In such a case, you'll want input forms.
These work mostly the same as applicative and monadic forms, with some differences:
- You use
runInputPostand
runInputGet.
- You use
ireqand
iopt. These functions now only take two arguments: the field type and the name (i.e., HTML
nameattribute) of the field in question.
- After running a form, it returns the value. It doesn't return a widget or an encoding type.
- If there are any validation errors, the page returns an "invalid arguments" error page.
You can use input forms to recreate the previous example. Note, however, that the input version is less user friendly. If you make a mistake in an applicative or monadic form, you will be brought back to the same page, with your previously entered values in the form, and an error message explaning what you need to correct. With input forms, the user simply gets an error message.
{-# LANGUAGE OverloadedStrings, TypeFamilies, QuasiQuotes, TemplateHaskell, MultiParamTypeClasses #-} import Yesod import Control.Applicative import Data.Text (Text) data Input = Input mkYesod "Input" [parseRoutes| / RootR GET /input InputR GET |] instance Yesod Input instance RenderMessage Input FormMessage where renderMessage _ _ = defaultFormMessage data Person = Person { personName :: Text, personAge :: Int } deriving Show getRootR :: Handler RepHtml getRootR = defaultLayout [whamlet| <form action=@{InputR}> <p> My name is # <input type=text name=name> \ and I am # <input type=text name=age> \ years old. # <input type=submit |] getInputR :: Handler RepHtml getInputR = do person <- runInputGet $ Person <$> ireq textField "name" <*> ireq intField "age" defaultLayout [whamlet|<p>#{show person}|] main :: IO () main = warpDebug 3000 Input
Custom fields
The fields that come built-in with Yesod will likely cover the vast majority of your
form needs. But occasionally, you'll need something more specialized. Fortunately, you can
create new forms in Yesod yourself. The
Field datatype has two records:
fieldParse takes a list of values submitted by the user and returns one of
three results:
- An error message saying validation failed.
- The parsed value.
- Nothing, indicating that no data was supplied.
That last case might sound surprising: shouldn't Yesod automatically know that no information is supplied when the input list is empty? Well, no actually. Checkboxes, for instance, indicate an unchecked state by sending in an empty list.
Also, what's up with the list? Shouldn't it be a
Maybe? Well, that's
also not the case. With grouped checkboxes and multi-select lists, you'll have multiple widgets
with the same name. We also use this trick in our example below.
The second record is
fieldView, and it renders a widget to display to
the user. This function has four arguments: the
id attribute, the
name attribute, the result and a
Bool indicating if the field
is required.
What did I mean by result? It's actually an
Either, giving either
the unparsed input (when parsing failed) or the successfully parsed value.
intField is a great example of how this works. If you type in 42, the value of result will be
Right 42. But
if you type in turtle, the result will be
Left
"turtle". This lets you put in a value attribute on your input tag that will give the
user a consistent experience.
As a small example, we'll create a new field type that is a password confirm field. This field has two text inputs- both with the same name attribute- and returns an error message if the values don't match. Note that, unlike most fields, it does not provide a value attribute on the input tags, as you don't want to send back a user-entered password in your HTML ever.
passwordConfirmField :: Field sub master Text passwordConfirmField = Field { fieldParse = \rawVals -> case rawVals of [a, b] | a == b -> return $ Right $ Just a | otherwise -> return $ Left "Passwords don't match" [] -> return $ Right Nothing _ -> return $ Left "You must enter two values" , fieldView = \idAttr nameAttr _ eResult isReq -> [whamlet| <input id=#{idAttr} name=#{nameAttr} type=password> <div>Confirm: <input id=#{idAttr}-confirm name=#{nameAttr} type=password> |] } getRootR :: Handler RepHtml getRootR = do ((res, widget), enctype) <- runFormGet $ renderDivs $ areq passwordConfirmField "Password" Nothing defaultLayout [whamlet| <p>Result: #{show res} <form enctype=#{enctype}> ^{widget} <input type=submit |] | http://www.yesodweb.com/blog/2011/09/new-book-content | CC-MAIN-2015-27 | refinedweb | 1,943 | 53.71 |
# What's new in C# 10: overview
This article covers the new version of the C# language - C# 10. Compared to C# 9, C# 10 includes a short list of enhancements. Below we described the enhancements and added explanatory code fragments. Let's look at them.
### Enhancements of structure types
#### Initialization of field structure
Now you can set initialization of fields and properties in structures:
```
public struct User
{
public User(string name, int age)
{
Name = name;
Age = age;
}
string Name { get; set; } = string.Empty;
int Age { get; set; } = 18;
}
```
#### Parameterless constructor declaration in a structure type
Beginning with C# 10, you can declare a parameterless constructor in structures:
```
public struct User
{
public User()
{
}
public User(string name, int age)
{
Name = name;
Age = age;
}
string Name { get; set; } = string.Empty;
int Age { get; set; } = 18;
}
```
**Important.** You can use parameterless constructors only if all fields and/or properties have initializers. For example, if you do not set the *Age* initializer, a compiler will issue an error:
*Error CS0843: Auto-implemented property 'User.Age' must be fully assigned before control is returned to the caller.*
### Applying the with expression to a structure
Before, you could use the *with* expression with records. With C#10, you can use this expression with structures. Example:
```
public struct User
{
public User()
{
}
public User(string name, int age)
{
Name = name;
Age = age;
}
public string Name { get; set; } = string.Empty;
public int Age { get; set; } = 18;
}
User myUser = new("Chris", 21);
User otherUser = myUser with { Name = "David" };
```
It is clear that the property that we are changing (in this case, the *Name* field) must have a public access modifier.
### Global using
Beginning with C# 10, you can use the *using* directive across an entire project. Add the *global* keyword before the *using* phrase:
```
global using "Library name"
```
Thus, the *using* directive allows you not to duplicate the same namespaces in different files.
**Important.** Use *global using* construction BEFORE code lines that include *using* without *global* keyword. Example:
```
global using System.Text;
using System;
using System.Linq;
using System.Threading.Tasks;
// Correct code fragment
```
Otherwise:
```
using System;
using System.Linq;
using System.Threading.Tasks;
global using System.Text;
// Error CS8915
// A global using directive must precede
// all non-global using directives.
```
If you wrote the namespace that was previously written with the *global* keyword, the IDE will warn you (*IDE: 0005: Using directive is unnecessary*).
### File-scoped namespace
Sometimes you need to use the namespace within the entire file. This action may shift the tabs to the right. To avoid this problem, you can now use the *namespace* keyword. Write the *namespace* keyword without braces:
```
using System;
using System.Linq;
using System.Threading.Tasks;
namespace TestC10;
public class TestClass
{
....
}
```
Before C# 10, it was necessary to keep *namespace* braces open on the entire file:
```
using System;
using System.Linq;
using System.Threading.Tasks;
namespace TestC10
{
public class TestClass
{
....
}
}
```
Clearly, you can declare only one *namespace* in the file. Accordingly, the following code fragment is incorrect:
```
namespace TestC10;
namespace MyDir;
// Error CS8954
// Source file can only contain
// one file-scoped namespace declaration
```
as well as the following piece of code:
```
namespace TestC10;
namespace MyDir
{
....
}
// Error CS8955
// Source file can not contain both
// file-scoped and normal namespace declarations.
```
### Record enhancements
#### The class keyword
C# 10.0 introduces the optional keyword - *class*. The class keyword helps understand whether a record is of a reference type.
Therefore, the two following records are identical:
```
public record class Test(string Name, string Surname);
public record Test(string Name, string Surname);
```
#### Record structs
Now it's possible to create record structs:
```
record struct Test(string Name, string Surname)
```
By default, the properties of the *record struct* are mutable, unlike the standard *record* that have *init* modifier.
```
string Name { get; set; }
string Surname { get; set; }
```
We can set the *readonly* property to the record struct. Then access to the fields will be equivalent to the standard record:
```
readonly record struct Test(string Name, string Surname);
```
where the properties are written as:
```
string Name { get; init; }
string Surname { get; init; }
```
The equality of two record struct objects is similar to the equality of two structs. Equality is true if these two objects store the same values:
```
var firstRecord = new Person("Nick", "Smith");
var secondRecord = new Person("Robert", "Smith");
var thirdRecord = new Person("Nick", "Smith");
Console.WriteLine(firstRecord == secondRecord);
// False
Console.WriteLine(firstRecord == thirdRecord);
// True
```
Note that the compiler doesn't synthesize a copy constructor for record struct types. If we create a copy constructor and use the *with* keyword when initializing a new object, then the assignment operator will be called instead of the copy constructor (as it happens when working with the *record class*).
#### Seal the ToString() method on records
As my colleague wrote in the article on the [enhancements for C# 9](https://pvs-studio.com/en/blog/posts/csharp/0860/) , records have the overridden *toString* method. There is an interesting point about inheritance as related to this method. The child objects cannot inherit the overriden *toString* method from the parent record. C# 10 introduces the *sealed* keyword so that the child objects can inherit the *ToString* method. This keyword prevents the compiler from synthesizing the *ToString* implementation for any derived records. Use the following keyword to override the *ToString* method:
```
public sealed override string ToString()
{
....
}
```
Let's create a record that tries to override the *toString* method:
```
public record TestRec(string name, string surname)
{
public override string ToString()
{
return $"{name} {surname}";
}
}
```
Now let's inherit the second record:
```
public record InheritedRecord : TestRec
{
public InheritedRecord(string name, string surname)
:base(name, surname)
{
}
}
```
Now let's create an instance of each record and type the result to the console:
```
TestRec myObj = new("Alex", "Johnson");
Console.WriteLine(myObj.ToString());
// Alex Johnson
InheritedRecord mySecObj = new("Thomas", "Brown");
Console.WriteLine(mySecObj.ToString());
// inheritedRecord { name = Thomas, surname = Brown}
```
As we can see, the *InheritedRecord* did not inherit the *toString* method.
Let's slightly change the *TestRec* record and add the *sealed* keyword:
```
public record TestRec(string name, string surname)
{
public sealed override string ToString()
{
return $"{name} {surname}";
}
}
```
Now let's re-create two instances of the records and type the result to the console:
```
TestRec myObj = new("Alex", "Johnson");
Console.WriteLine(myObj.ToString());
// Alex Johnson
InheritedRecord mySecObj = new("Thomas", "Brown");
Console.WriteLine(mySecObj.ToString());
// Thomas Brown
```
And.. woohoo! The *InheritedRecord* inherited the *toString* method from the *TestRec*.
### Easier access to nested fields and properties of property patterns
C# 8.0 introduced the property pattern that allows you to easily match on fields and/or properties of an object with the necessary expressions.
Before, if you needed to check any nested property, the code could look too cumbersome:
```
....{property: {subProperty: pattern}}....
```
With C#10, you just need to add the dots between the properties:
```
....{property.subProperty: pattern}....
```
Let's see the change using the example of the method of taking the first 4 symbols of the name.
```
public record TestRec(string name, string surname);
string TakeFourSymbols(TestRec obj) => obj switch
{
// old way:
//TestRec { name: {Length: > 4} } rec => rec.name.Substring(0,4),
// new way:
TestRec { name.Length: > 4 } rec => rec.name.Substring(0,4),
TestRec rec => rec.name,
};
```
The example above shows that the new type of property access is simpler and clearer than before.
### Constant interpolated strings
Before, this feature was not supported. C# 10 allows you to use string interpolation for constant strings:
```
const string constStrFirst = "FirstStr";
const string summaryConstStr = $"SecondStr {constStrFirst}";
```
**Interesting fact.** This change relates only to string interpolation for constant strings, i.e the addition of a constant character is not allowed:
```
const char a = 'a';
const string constStrFirst = "FirstStr";
const string summaryConstStr = $"SecondStr {constStrFirst} {a}";
// Error CS0133
// The expression being assigned to
// 'summaryConstStr' must be constant
```
### Assignment and declaration in same deconstruction
In earlier versions of C#, a deconstruction could assign values to EITHER declared variables (all are declared), OR variables that we initialize during calling (all are NOT declared):
```
Car car = new("VAZ 2114", "Blue");
var (model, color) = car;
// Initialization
string model = string.Empty;
string color = string.Empty;
(model, color) = car;
// Assignment
```
The new version of the language allows simultaneous use of both previously declared and undeclared variables in deconstruction:
```
string model = string.Empty;
(model, var color) = car;
// Initialization and assignment
```
The following error occurred in the C#9 version:
*Error CS8184: A deconstruction cannot mix declarations and expressions on the left-hand-side.*
### Conclusion
As mentioned earlier, the list of changes is not as large as in the C#9 version. Some changes simplify the work, while others provide previously unavailable features. The C# is still evolving. We're looking forward for new updates of the C# language.
Haven't read about new C# 9 features yet? Check them out in our separate [article](https://pvs-studio.com/en/blog/posts/csharp/0860/).
If you want to see the original source, you can read the [Microsoft documentation](https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-10). | https://habr.com/ru/post/584486/ | null | null | 1,499 | 55.44 |
Have problem getting the correct date format while print
- kian hong Tan last edited by
Hello, this is the code I use to print the log.
def log(self, txt, dt=None):
    '''Logging function for this strategy'''
    dt = dt or self.datas[0].datetime.date(0)
    print('%s, %s' % (dt.isoformat(), txt))
The output:
Starting Portfolio Value: 100000.00
2020-05-04, Close, 8871.96
2020-05-05, Close, 8867.83
2020-05-05, Close, 8901.65
2020-05-05, Close, 8858.35
2020-05-05, Close, 8879.58
However, I wanted the date to be shown as 2020-05-04 16:30:00 (%Y-%m-%d %H:%M:%S), like how I specified for Data Feed, not just showing date only. Also, how can I change the date to UTC+8?
Many Thanks.
@kian-hong-Tan said in Have problem getting the correct date format while print:
dt = dt or self.datas[0].datetime.date(0)
When setting dates in backtrader, datetime is used. The first part:
self.datas[0].datetime
gets the datetime object. The ending determines whether you get the date, the time, or the full datetime. The endings are:
datetime(0)  # format is %Y-%m-%d %H:%M:%S
date(0)      # format is %Y-%m-%d
time(0)      # format is %H:%M:%S
As a standard procedure, at the beginning of my strategy class next() method, as well as in analyzers and indicators where needed, I put the following code:
# Current bar datetime, date, and time.
dt = self.data.datetime.datetime()
date = self.data.datetime.date()
time = self.data.datetime.time()
This allows me to grab time and date information in an easy, concise, and consistent way during programming.
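For the formatting and UTC+8 part of the question, here is a plain-Python sketch (independent of backtrader; the helper name is mine). It treats the bar's naive datetime as UTC and shifts it to the desired offset before formatting:

```python
from datetime import datetime, timedelta, timezone

def format_bar_time(dt, utc_offset_hours=8):
    """Format a naive UTC datetime as '%Y-%m-%d %H:%M:%S' in UTC+offset."""
    aware = dt.replace(tzinfo=timezone.utc)  # treat the naive value as UTC
    shifted = aware.astimezone(timezone(timedelta(hours=utc_offset_hours)))
    return shifted.strftime('%Y-%m-%d %H:%M:%S')

print(format_bar_time(datetime(2020, 5, 4, 8, 30)))  # 2020-05-04 16:30:00
```

In a strategy you would pass it the value returned by `self.data.datetime.datetime()`.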
snd_pcm_write()
Transfer PCM data to playback channel
Synopsis:
#include <sys/asoundlib.h>

ssize_t snd_pcm_write( snd_pcm_t *handle,
                       const void *buffer,
                       size_t size );
Arguments:
- handle
- The handle for the PCM device, which you must have opened by calling snd_pcm_open() or snd_pcm_open_preferred() .
- buffer
- A pointer to a buffer that holds the data to be written.
- size
- The amount of data to write, in bytes.
Library:
libasound.so
Use the -l asound option to qcc to link against this library.
Description:
The snd_pcm_write() function writes samples to the device, which must be in the proper format specified by snd_pcm_channel_prepare() or snd_pcm_playback_prepare() .
This function may suspend the calling process if blocking mode is active (see snd_pcm_nonblock_mode() ) and no space is available in the device's buffers.
When the subdevice is in SND_PCM_MODE_BLOCK mode, the size must be an even multiple of the fragment size (the frag_size member of the snd_pcm_channel_setup_t structure).
Returns:
A positive value that represents the number of bytes that were successfully written to the device if the playback was successful, or an error value if an error occurred.
Errors:
- -EAGAIN
- Try again later. The subchannel is open in nonblocking mode.
- -EINVAL
- One of the following:
- The handle is invalid.
- The buffer argument is NULL, but the size is greater than zero.
- The size is negative.
- -EIO
- One of:
- The channel isn't in the prepared or running state.
- In SND_PCM_MODE_BLOCK mode, the size isn't an even multiple of the frag_size member of the snd_pcm_channel_setup_t structure.
- -EWOULDBLOCK
- The write would have blocked (nonblocking write).
Classification:
QNX Neutrino | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_write.html | CC-MAIN-2021-17 | refinedweb | 233 | 67.86 |
XML Attribute Axis Property
Provides access to the value of an attribute for an XElement object or the first element in a collection of XElement objects.
You can use an XML attribute axis property to access the value of an attribute by name from an XElement object or the first element in a collection of XElement objects. You can retrieve an attribute value by name, or add a new attribute to an element by specifying a new name preceded by the @ identifier.
When you refer to an XML attribute using the @ identifier, the attribute value is returned as a string and you do not need to explicitly specify the Value property.
The naming rules for XML attributes differ from the naming rules for Visual Basic identifiers. To access an XML attribute that has a name that is not a valid Visual Basic identifier, enclose the name in angle brackets (< and >).
XML Namespaces
The name in an attribute axis property can use only XML namespace prefixes declared globally by using the Imports statement. It cannot use XML namespace prefixes declared locally within XML element literals. For more information, see Imports Statement (XML Namespace).
The following example shows how to get the values of the XML attributes named type from a collection of XML elements that are named phone.
' Topic: XML Attribute Axis Property
Dim phones As XElement = _
    <phones>
        <phone type="home">206-555-0144</phone>
        <phone type="work">425-555-0145</phone>
    </phones>

Dim phoneTypes As XElement = _
    <phoneTypes>
        <%= From phone In phones.<phone> _
            Select <type><%= phone.@type %></type> _
        %>
    </phoneTypes>

Console.WriteLine(phoneTypes)
This code displays the following text:
<phoneTypes>
<type>home</type>
<type>work</type>
</phoneTypes>
The following example shows how to create attributes for an XML element both declaratively, as part of the XML, and dynamically by adding an attribute to an instance of an XElement object. The type attribute is created declaratively and the owner attribute is created dynamically.
This code displays the following text:
The following example uses the angle bracket syntax to get the value of the XML attribute named number-type, which is not a valid identifier in Visual Basic.
This code displays the following text:
Phone type: work
The following example declares ns as an XML namespace prefix. It then uses the prefix of the namespace to create an XML literal and access the first child node with the qualified name "ns:name".
This code displays the following text:
Phone type: home | http://msdn.microsoft.com/en-us/library/bb384755(v=vs.90).aspx | CC-MAIN-2014-15 | refinedweb | 411 | 50.57 |
In this article we explore how to configure email for a development server. In real-world development scenarios you need to work with email-enabled lists, workflows, and code.
Scenario
Your customer reported a problem with email sending from a web part you have developed. You need to ensure the piece of code works fine on your development machine. As there is no email server configured on your machine - how do you test this code?
Solution
We can set up a development server with emailing enabled. By setting up email for the development server along with a receiver tool, you can ensure that the emailing code works correctly.
Steps
Following are the steps involved:

1. Configure outgoing SMTP server
2. Set Email property for User Profiles
3. Install smtp4dev tool
4. Test Email Code

Please note that I am using a Windows 7 64-bit machine for working with this. The same configurations should work for a Windows Server development machine.
Configure outgoing SMTP server
Open Central Administration and click on the System Settings category from the left.
In the appearing page click on the Configure outgoing e-mail settings link highlighted above. You should get the page given below.

Enter the Outbound SMTP server as your machine name. Please enter a name instead of an IP address or localhost. Enter the From and Reply-To addresses. Click the OK button to save the changes.
Set Email property for User Profiles
As the next step we need to set the user profile property E-mail for testing the feature. You can set this through:

1. My Profile of each user
2. Central Administration for all users

Let us use the Central Administration way as we can set it for multiple users. Open Central Administration > Manage service applications > Select User Profile Service Application > Click the Manage button from the toolbar.
In the appearing page click on the Manage User Profiles link as highlighted below:
In the appearing page search for user and from the result choose Edit menu item.
In the Edit profile page set the Work-email property and save changes.
Install smtp4dev tool
Now we can try installing the smtp4dev tool (a wonderful tool) that captures Port 25 of your machine. You can download the tool from the following location:

Click the Download button on the appearing page.
Run the downloaded file and you should see the following screen.
By default the tool starts listening on Port 25. You can minimize the tool and it remains available in the system tray.

Note: A second instance of the smtp4dev tool cannot listen on the same port 25. You need to bring up the previous copy of the tool from the system tray to view any email messages.
Test Email Code
Now we are ready to test the email code. Start a new SharePoint console application, change the project properties to .NET Framework 3.5 and replace the Program class content with the following code.
using System;
using System.Collections.Specialized;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;
class Program
{
static void Main(string[] args)
{
using (SPSite site = new SPSite(""))
{
StringDictionary messageHeaders = new StringDictionary();
messageHeaders.Add("to", "to@server.com");
messageHeaders.Add("cc", "cc@server.com");
messageHeaders.Add("from", "from@server.com");
messageHeaders.Add("subject", "Email Subject");
messageHeaders.Add("content-type", "text/html");
SPUtility.SendEmail(site.OpenWeb(), messageHeaders, "Email Body");
}
}
}
For the time being we are using dummy email addresses. Try executing the code. You can see the email being captured by the smtp4dev tool.
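If you want to exercise the smtp4dev listener without involving SharePoint at all, a small Python sketch using only the standard library works too (the server, port, and addresses here are placeholders matching the dummy values above):

```python
import smtplib
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    # Mirror the headers the SharePoint snippet sets: from, to, subject, content-type.
    msg = MIMEText(body, 'html')
    msg['From'] = sender
    msg['To'] = recipient
    msg['Subject'] = subject
    return msg

msg = build_message('from@server.com', 'to@server.com', 'Email Subject', 'Email Body')

# With smtp4dev listening on port 25, uncomment to send and watch the message appear:
# with smtplib.SMTP('localhost', 25) as smtp:
#     smtp.send_message(msg)
```

Any message sent to port 25 on the machine will show up in the smtp4dev window, regardless of which language or framework produced it.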
References
Summary
In this article we have explored how to configure an email server for a development machine and use the smtp4dev tool to capture the emails generated from the machine. You can use the same configuration to test other emails generated through SharePoint, such as from workflows and web parts. Please note that here we are setting up an email server for a development machine; for configuring the actual email service you need to check the link from the References section.
Dear all
I understand that this issue has been discussed many times, but I still cannot get past it... I am using Parallel Studio XE 2011 to build C++ as the main program calling Fortran. For testing, my code is simple:
#include <stdio.h>
#ifdef _cplusplus
extern "C" {
#endif
void _stdcall THERDF(int *parent);
#ifdef _cplusplus
}
#endif
void main(void)
{
int parent=10;
THERDF(&parent);
}
subroutine THERDF(parent)
implicit none
integer:: parent
print*, parent
return
end subroutine THERDF
I followed the steps to set up two projects in one solution:

"- Right click on the C++ project and select Dependencies. Check the box for the Fortran project to make it a dependent of the C++ project.
- Right click on the Fortran project and select Properties > Fortran > Libraries. Set the property "Disable default library search rules" to "No".
- Make sure that the run-time library types match between the C++ and Fortran projects (for example, if C++ is using Multithreaded DLL, Fortran must be too).
There is a one-time configuration of C++ you need to make to link in Fortran code - see the instructions at Configuring Visual Studio for Mixed-Language Applications
Now you can build the Solution and it should build and link properly.
I notice you have STDCALL routines being called. I assume those are the Fortran routines. If you're coming from Compaq Visual Fortran, you'll probably want to set the Calling Convention: CVF option in the Fortran project.
for library directories: $(ifort_compilervv)compiler\lib\ia32;$(LibraryPath)
I got the error code
1>Liquidus.obj : error LNK2019: unresolved external symbol "void __stdcall THERDF(int *)" (?THERDF@@YGXPAH@Z) referenced in function _main
1>C:\fortran\Nomads\Liquidus\liquidus\Debug\liquidus.exe : fatal error LNK1120: 1 unresolved externals
what did I miss here? Thank you.
Best,
SY Li | https://software.intel.com/en-us/forums/topic/326593 | CC-MAIN-2015-11 | refinedweb | 301 | 54.12 |
RSA_public_encrypt, RSA_private_decrypt - RSA public key
cryptography
libcrypto, -lcrypto
#include <openssl/rsa.h>
int RSA_public_encrypt(int flen, unsigned char *from,
unsigned char *to, RSA *rsa, int padding);
int RSA_private_decrypt(int flen, unsigned char *from,
unsigned char *to, RSA *rsa, int padding);
RSA_public_encrypt() returns the size of the encrypted
data (i.e., RSA_size(rsa)). RSA_private_decrypt() returns
the size of the recovered plaintext.
On error, -1 is returned; the error codes can be obtained
by ERR_get_error(3).
SSL, PKCS #1 v2.0
openssl_err(3), openssl_rand(3), openssl_rsa(3),
RSA_size(3)
The RSA_PKCS1_RSAref(3) method supports only the
RSA_PKCS1_PADDING mode.
The padding argument was added in SSLeay 0.8.
RSA_NO_PADDING is available since SSLeay 0.9.0, OAEP was
added in OpenSSL 0.9.2b.
2001-04-12 0.9.6g RSA_public_encrypt(3) | https://nixdoc.net/man-pages/NetBSD/man3/RSA_public_encrypt.3.html | CC-MAIN-2022-33 | refinedweb | 119 | 53.27 |
Sometime ago I had been wrestling through the viewstate and blogged on it. On a page with a datagrid the viewstate can be enormous. As the viewstate travels over the wire on every roundtrip this can be quite a problem in a situation where the bandwidth is limited. A lot of data in a datagrid is reread on every roundtrip, having it also in the viewstate is quite a waste. But some of the properties of a datagrid, like the selectedindex have to be preserved over the roundtrips. The viewstate can be disabled on a control basis. Which will also disable the viewstate of all controls owned by the control. Disabling it on a datagrid will also disable it for the controls in a template column. Disabling it on the page will disable it for all controls on the page.
The problem with the viewstate of the datagrid is that you need it for some integers and end up with one which stores an entire grid. In a comment on the old post Simon asks if there is an easy way out of this. Something like the Whideby solution where you can disable the viewstate but preserve the main properties like SelectedIndex. There is, it takes some lines of code.
Disable the viewstate of the datagrid (and other controls), but do not disable the viewstate of the page itself. You use the latter to save your properties over roundtrips. Something like this :
const string mySelIndex = "MySelIndex";

private int mySelectedIndex
{
    get
    {
        if (this.ViewState[mySelIndex] == null)
            return -1;
        else
            return (int)this.ViewState[mySelIndex];
    }
    set
    {
        this.ViewState[mySelIndex] = value;
    }
}
...
private void Page_Load(object sender, System.EventArgs e)
{
    // Read data here
    DataGrid1.DataBind();
    DataGrid1.SelectedIndex = this.mySelectedIndex;
}

private void WebForm1_PreRender(object sender, System.EventArgs e)
{
    this.mySelectedIndex = DataGrid1.SelectedIndex;
}
The page has a new private property mySelectedIndex, which uses the viewstate of the page itself to store the index. In the page load the index is read, in the page prerender it is saved. The page still has a viewstate but you will see that it is very very small. Get the idea?
The easiest way to add two matrices in Python is to loop over them and add the elements one by one. For example,
X = [[1,2,3],
    [4,5,6],
    [7,8,9]]

Y = [[9,8,7],
    [6,5,4],
    [3,2,1]]

result = [[0,0,0],
         [0,0,0],
         [0,0,0]]

for i in range(len(X)):
    for j in range(len(X[0])):
        result[i][j] = X[i][j] + Y[i][j]

for r in result:
    print(r)
This will give the output:
[10, 10, 10]
[10, 10, 10]
[10, 10, 10]
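The same element-by-element addition can be written more compactly as a nested list comprehension, equivalent to the loop above:

```python
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Y = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

# Pair up rows with zip, then pair up elements within each row.
result = [[a + b for a, b in zip(row_x, row_y)] for row_x, row_y in zip(X, Y)]
print(result)  # [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```

This avoids pre-allocating the result matrix by hand.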
You can also use the numpy module, which has support for this.
import numpy as np

X = np.matrix([[1, 2], [3, 4]])
Y = np.matrix([[2, 2], [2, 2]])

result = X + Y
print(result)
This will give the output:
[[3 4]
 [5 6]]
Re: Share .cpp and .h along projects
- From: "Doug Harrison [MVP]" <dsh@xxxxxxxx>
- Date: Mon, 20 Aug 2007 14:52:24 -0500
On Mon, 20 Aug 2007 09:57:14 -0500, "Ben Voigt [C++ MVP]"
<rbv@xxxxxxxxxxxxx> wrote:
"Doug Harrison [MVP]" <dsh@xxxxxxxx> wrote in message
news:nefcc3hh5er85j8hdrklli59i553ve10tu@xxxxxxxxxx
On Fri, 17 Aug 2007 15:14:50 -0500, "Ben Voigt [C++ MVP]"
<rbv@xxxxxxxxxxxxx> wrote:
volatile std::vector* g_sharedVector;
...
{
std::vector* pvector = NULL;
// this is a spin-wait, efficient on SMP, but on a single processor
system a yield should be inserted
while (pvector == NULL) {
pvector = InterlockedExchangePointerAcquire(&g_sharedVector,
NULL);
}
// use pvector in any way desired
With this sort of busy-waiting, it's more important than ever that "any
way
desired" translate to "as short a time as possible".
Agreed. But that's true for any access to synchronized resources.
It's especially true when you present a method that uses 100% of the CPU
for an indefinite period of time.
// with VC2005, can use "g_sharedVector = pvector;" instead
InterlockedExchangePointerRelease(&g_sharedVector, pvector);
}
The sensible thing is to get rid of the pointers and use a
CRITICAL_SECTION
along with the vector object instead of this clumsy, inefficient, obscure,
limited alternative to the way people have been doing things since the
beginning of Windows NT.
That's a higher level of abstraction for doing exactly the same thing.
Common sense dictates using the highest-level abstraction available unless
there's a good, specific reason to do otherwise. If, when someone says
"mutex", as I repeatedly did, you think "InterlockedXXX", well, it's just
hard to understand why.
BTW, why _exactly_ did you use volatile in your declaration of
g_sharedVector? (Based on the declaration of
InterlockedExchangePointerAcquire, it shouldn't even compile.)
No answer? I really would like to hear what you think volatile accomplishes
here.
The compiler DOES apply those optimizations. If the code doesn't make
proper use of volatile and memory barriers to ensure that the correct data
is seen in other threads, then the code has a thread safety issue, not the
compiler.
I'll say it again:
<q>
You can't require people to use volatile on top of synchronization.
Synchronization needs to be sufficient all by itself, and it is in any
compiler useful for multithreaded programming. All you need is
synchronization.
</q>
As you've become fixated on memory barriers, I'll add that using
synchronization objects takes care of whatever memory barriers may be
needed. Most multithreaded programming is done in terms of mutexes and the
like, and thinking about memory barriers is not necessary when using
mutexes and the like.
Splitting functions into separate DLLs to prevent the optimizations is not
the right thing to do. It is fragile. For example, at one time simply
using two different compilation units within the same module would prevent
optimization. Now there is Link-Time Code Generation. Also, the .NET JIT
compiler does cross-assembly inlining and even native compilation can make
deductions and optimize across external calls using aliasing analysis.
As I've said a couple of times by now, "If the compiler could look into
the
DLL, there would have to be some way to explicitly indicate that
lock/unlock are unsafe for this optimization." Do you understand I'm not
saying that the DLL approach is the be-all, end-all solution to the
problem? Do you understand what I meant when I said, "By putting
WaitForSingleObject, ReleaseMutex, and others in opaque system DLLs,
correct compiler behavior for MT programming WRT these operations
essentially comes for free." That means as long as the compiler doesn't
perform optimizations unsafe for multithreading around calls to these
functions, it does not need to define a way to mark their declarations
unsafe. It also means you don't have to use volatile, because the compiler
You keep making the same false claim. Here, let me show you a variant of
your code that is going to fail with an intelligent compiler, no peering
into the DLL necessary:
namespace {
// assume the address of x is never taken
int x; // needs to be volatile
mutex mx;
}
// None of these touch x.
void lock(mutex&);
void unlock(mutex&);
void g(int);
void f1()
{
lock(mx);
g(x);
unlock(mx);
lock(mx);
g(x);
unlock(mx);
}
void f2()
{
lock(mx);
++x;
unlock(mx);
}
As you stated "A compiler that can see into all these functions will observe
that none of lock, unlock, and g access x, so it can cache the value of x
across the mutex calls in f1." I tell you that because x has file-local
linkage and the address of x is not taken, aliasing analysis in current
compilers proves that none of lock, unlock, or g access x -- without seeing
into the functions.
And I'll tell you again, you're not thinking this through. As I've
explained several times already, for a compiler useful for multithreading
to apply this optimization, it would have to prove there is no way x is
reachable from lock/unlock. This means proving there is no way f2 can be
called as a result of calling lock/unlock. The compiler cannot prove this
without being able to see into lock/unlock. This is the basis for what I've
said about opaque DLLs.
To a large extent, this is not even a multithreading issue. It also applies
to single-threaded code.
(My example does assume that f1 and f2 are called sometime, somewhere. If
one of them isn't, it's not very interesting.)
Using DLLs to inhibit optimization is broken, Broken, BROKEN!
Then stop bellowing and (a) Demonstrate that it doesn't work, and (b) Show
why the compiler doesn't perform unsafe optimizations around
WFSO/ReleaseMutex, EnterCriticalSection/LeaveCriticalSection, etc.
Adding "volatile" to the declaration of x fixes the problem.
Except that you cannot require people to use volatile on top of
synchronization.
must assume these functions can affect observable behavior involving the
objects you want to needlessly declare volatile, which as I've already
noted, is a huge performance killer plus completely impractical to use for
class objects.
It is *not* a performance killer when used correctly. Look at my original
example above and note that pvector is not declared volatile, only the
shared pointer is. Within the critical section all optimizations are
possible.
Before you make claims about your use of "volatile", answer the question I
posed last time:
BTW, why _exactly_ did you use volatile in your declaration of
g_sharedVector? (Based on the declaration of
InterlockedExchangePointerAcquire, it shouldn't even compile.)
This is an important question for you to answer in detail.
Using "volatile" is the only way to make code robust in the face of
improving optimizing compilers, and as a bonus, it is part of the C++
standard.
That's really quite funny. The C++ Standard does not address
multithreading, and it was recognized long ago that volatile is not the
answer or even a significant part of the answer. You might begin to
understand these things I've been talking about if you'd take the advice I
gave you a couple of messages ago:
<q>
You should google this group as well as comp.programming.threads for past
discussions on "volatile".
</q>
I've read several of those threads, some of which are in the hundreds of
responses.
Gee, I wonder why they get to be so long? :) FWIW, I'm not the only MVP who
has said things like "volatile is neither necessary nor sufficient" for
multithreaded programming using synchronization objects like mutexes. Of
course, we're getting this from people like Butenhof, who played a big part
in the pthreads standard.
Part of the problem is that MS has yet to publish a formal set of memory
visibility rules like POSIX did years ago, or I would have pointed you to
that. This leaves me to argue from experience writing MT code in VC++ and
also the fact that it would be a colossal blunder not to follow the POSIX
rules, which specifically do not require volatile on top of
synchronization. Also, I cannot recall ever hearing of a bug resulting from
not using volatile on variables consistently accessed under the protection
of a locking protocol involving CRITICAL_SECTIONs, kernel mutexes, and
other Windows synchronization objects. I cannot recall any MS documentation
that says volatile must be used on top of synchronization. The MFC library
doesn't use volatile, nor does the CRT use volatile for things it protects
with CRITICAL_SECTIONs, such as FILE objects.
For all these reasons, I think I'm on pretty safe ground (to put it mildly)
when I say that volatile is not required when using synchronization. If you
still disagree, I ask you to produce a counter-example.
I think I'm on equally safe ground WRT what I've said about DLLs. If you
still disagree, I ask you to produce a counter-example.
--
Doug Harrison
Visual C++ MVP
A web service in general is a way of exposing the properties and methods through the Internet In other words, it's an URL-addressable resource that programmatically returns information to clients who want to use it. A web service is based on XML standards and has been implemented by different vendors like IBM, Sun Microsystems, CapeClear etc. Currently there is no official body, such as the W3C who have under taken the job of defining this fast evolving technology.Web services have been foreseen as the future of computing over the web and are more pronounced in the .NET framework owing to its ease in development as well as consumption with or even without VisualStudio.NET. A webservice works on Simple Object Access Protocol (SOAP). Every call to the webmethod from the client's machine is sent in the form of a SOAP envelope and result from the server is sent back in SOAP containing the result.
The facility of language interoperability that .NET offers within its Common Language Runtime (CLR) offers a new dimension to using web services. Owing to these attractive features, .NET is predicted to take command of the client side computing market in the years to come. We can for sure get a feel on the nature of computing where you could develop and use applications in languages of your choice, at least in .NET environment.
In this article I plan to elaborate more on this aspect by
Although the application might seem complicated, it is fairly simple to achieve this objective. For the sake of better understanding, I have illustrated simple and straightforward examples.
Let's march in to out first step of our objective.
Creating a simple Web service in C#The webservice application provides a Web Method called Add, which takes in two integers as parameters and returns their sum as integer. The following code is typed in a simple text editor.
<%@WebService Language="c#" class ="sumThis"%>using System;using System.Web.Services; public class sumThis:WebService{[WebMethod]public int Add(int a,int b){int sum;sum=a+b;return sum;}}
Note: MSDN documents imply that although language interoperability is supported within the .NET platform, it is advisable to use attributes like the ones described below to ensure type compliancy detection at compile time. In other words, these attributes help identify the types that are not recognized by the CTS at compile time. These attributes need to be included in the program prior to declaration of the class in the program.
//optional attributes to ensure type compliancy// Assembly marked as compliant.[assembly: CLSCompliantAttribute(true)]// Class marked as compliant.[CLSCompliantAttribute(true)]
Few pointers that one must pay attention to while developing a web service are:
View of sumThis webmethod in the browser
If we were to use the webservice in a client's machine, we need to create a proxy class or more commonly called stub file. The proxy class contains the details of the web method, the server it is contained in etc. This proxy class can be generated on the client's machine by using the utility WSDL.exe which is packaged along with .NET SDK (Beta 2).
Consuming a Web service in a Client's machine using VB.NET One of the highlights of this utility is one can generate the stub file by just specifying the ip address or the hostname that contains the web service. At the command prompt, the stub file can be generated as
WSDL http://[hostname of server] or [ip address of server] [/n:[namespace to be used in client's machine]] [/l: [language of proxy class=VB/C#]] /out:[output file name[.cs/.vb]]
My compilation string for the webmethod sumThis was
wsdl /n:myNamespace /l:vb /out:vbAddWebservice.vb
There are other switches in the command which can be explored by issuing
WSDL /? at command prompt. We can inspect the output file for more details on the WebMethod.
Once the output file is obtained, we have to compile it using appropriate compiler to produce a DLL file (in my case the VB.NET's compiler).
vbc /t:library /r:System.dll,System.web.services.dll,System.xml.dll vbAddWebService.vb
Note: The VB.NET compiler needs to be given the reference of System.dll too, which is not so in the case of C# compiler.
Now we are all set to create a client's application that can access the webmethod Add defined in the webservice. In this example, the client application I conceived of is a simple console application written in VB.NET. The code for the application is as follows:
Imports System'import the namespace assigned in WSDL compilationImports myNamespace Module Module1Sub Main()'instantiating a object of the webserviceDim ob As New sumThis'Ask the user to input two value to add upConsole.WriteLine("***ENTER TWO INTEGERS TO ADD***")Console.Write("FIRST INTERGER :")'inputing the values in variablesDim var1, var2 As Integervar1 = Integer.Parse(Console.ReadLine())Console.Write("SECOND INTERGER :")var2 = Integer.Parse(Console.ReadLine())'Obtain the answer and displayConsole.WriteLine("The Sum of {0} and {1} is {2}" var1, var2,ob.Add(var1,var2))End SubEnd ModuleCompile the above application at the command prompt as
vbc /r:vbAddWebservice.dll,system.dll,system.web.services.dll,system.xml.dll testerCalc.vb
Test the application as a normal console application. And voila, we have an application that can communicate to a method from another server written in language different than that of the client's development language, albeit on the same platform.
One can similarly develop and use web services with ease in Visual Studio .NET. This example was intended to make clear that web services could be used in simple console and other command line utilities with ease as any other application.
References:
Import data from Excel to Access using ADO.net
Writing an ActiveX Control in .NET | http://www.c-sharpcorner.com/UploadFile/vijaycinnakonda/WebServicesLangInterOp11082005010152AM/WebServicesLangInterOp.aspx | CC-MAIN-2014-10 | refinedweb | 978 | 55.54 |
mangling of sys.path in setup.py makes installation of more than one lazr package fail
Bug Description
Hi,
setup.py does
sys.path.
from lazr.restfulclient import __version__
however, if there is another lazr module in sys.path this will fail
with an import error, as the lazr package can't be split between
"src" and the other location in sys.path.
This means that if you
setup.py install
lazr.uri, a dependency of lazr.restfulclient then
setup.py install
in lazr.restfulclient will fail.
This makes it very difficult to install more than one lazr module.
This is a blocker for packaging the new launchpadlib. I could hack around
it in the package, but it would be better to fix it.
Thanks,
James
Related branches
- Barry Warsaw: Approve on 2009-09-04
Hi,
Feature freeze for Ubuntu is at the end of this month, and I would like
to get the new launchpadlib in, but this is blocking it. Given that there
are new packages to integrate it will take some time, so I'm keen to
get this solved soon. Is there any interest in fixing it in lazr itself,
or shall I just patch it?
Thanks,
James
We should put a versions.txt in the package, and have both setup.py and __init__.py read it. Then we can remove the path hack.
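A minimal sketch of that approach (file names and paths here are illustrative, not lazr's actual layout): both setup.py and the package's __init__.py read the version string from a plain data file, so neither needs to import the package and no sys.path manipulation is required.

```python
# Sketch of the "version file" approach. The file name "version.txt" and the
# package layout in the comments below are assumptions for illustration.

def read_version(path):
    """Return the stripped version string stored in a plain text file."""
    with open(path) as f:
        return f.read().strip()

# setup.py could then do (no import of the package itself, so no path hack):
#     setup(name="lazr.example", version=read_version("version.txt"), ...)
#
# and src/lazr/example/__init__.py could do:
#     import os
#     __version__ = read_version(
#         os.path.join(os.path.dirname(__file__), "version.txt"))

if __name__ == "__main__":
    with open("version.txt", "w") as f:
        f.write("1.0.3\n")
    print(read_version("version.txt"))  # -> 1.0.3
```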
This affects the majority of lazr.* packages, including the template (lazr.yourpkg).
Leonard Richardson wrote:
>.
That's good thanks, as I am not packaging lazr.restful as this stage, so
I don't need to package the other lazr.* packages it depends on.
Thanks for your help.
James
I've implemented the fix for lazr.uri, lazr.yourpkg, and lazr.enum. lazr.enum still needs to land and then I need to do a release of it; I'll do the release tomorrow. This should be enough for now.
Version 1.2 of lazr.smtptest on pypi still seems to be missing this fix, is there anybody out there feeling motivated to push a new version of it to pypi? Thanks, Anna
Any comment on this? I'd rather not implement a hack in the packaging.
Thanks,
James | https://bugs.launchpad.net/lazr.lifecycle/+bug/400170 | CC-MAIN-2015-32 | refinedweb | 369 | 79.36 |
package org.apache.turbine.util.pool;

import org.apache.turbine.services.pool.TurbinePool;

/**
 * A support class for recyclable objects implementing default methods.
 *
 * @author <a href="mailto:ilkka.priha@simsoft.fi">Ilkka Priha</a>
 * @version $Id: RecyclableSupport.java,v 1.3.2.2 2004/05/20 03:25:50 seade Exp $
 */
public class RecyclableSupport implements Recyclable
{
    /**
     * The disposed flag.
     */
    private boolean disposed;

    /**
     * Constructs a new recyclable support and calls the default recycle method.
     */
    public RecyclableSupport()
    {
        recycle();
    }

    /**
     * Recycles the object by removing its disposed flag.
     */
    public void recycle()
    {
        disposed = false;
    }

    /**
     * Disposes the object by setting its disposed flag.
     */
    public void dispose()
    {
        disposed = true;
    }

    /**
     * Checks whether the object is disposed.
     *
     * @return true, if the object is disposed.
     */
    public boolean isDisposed()
    {
        return disposed;
    }

    /**
     * A convenience method allowing a clever recyclable object
     * to put itself into a pool for recycling.
     *
     * @return true, if disposal was accepted by the pool.
     */
    protected boolean doDispose()
    {
        try
        {
            return TurbinePool.putInstance(this);
        }
        catch (RuntimeException x)
        {
            return false;
        }
    }
}
Introduction to Storybook
An open source tool for developing UI components in isolation for React, Vue, Angular, and more
Update 2018: I cowrote LearnStorybook.com, a free 9 chapter tutorial on getting started with Storybook. Read the announcement »
Storybook is a Component Explorer — a tool for working on a single component in isolation — built for React and React Native. As of this article, it’s likely the most popular and fullest featured component explorer out there.
The team here at Chroma, along with others at Airbnb, Slack, and Coursera, rely on Storybook to build cutting edge user interfaces (UIs).
Why should you be interested? Apart from all the benefits of component explorers, including developing one component at a time, Storybook has some standout features that deserve a closer look. To capture a component in a particular state, you write a story — Storybook's API for describing states in terms of a rendered React element, i.e. a component and a set of props:
storiesOf('Task')
  .add('inbox task', () => (
    <Task
      task={{
        title: "Test Task",
        subtitle: "on TestBoard",
        state: "TASK_INBOX",
      }}
    />
  ));
Once you’ve written a story, it’s simply a matter of browsing to the story inside the storybook UI to see that component rendered with props you’ve supplied it.
Decorators
Although it is desirable to write components that render independently of the environment they are used in, often a component does rely on certain global things. Typical examples are:

- The component's CSS is scoped assuming it is rendered inside a given (set of) CSS classes or HTML tags, e.g. .task-wrapper > .task { ... }
- The component assumes things about its ancestors' styles (for instance a background-color, or a certain inherited font tag).
- The component expects certain information from the React context (usually supplied by ancestor components that follow the provider pattern: think Redux Providers, React Routers, Apollo Providers).
When writing the stories for a given component, you can attach a decorator which simply wraps the component in some extra React code. This can be as simple as wrapping in a div with a certain class or style:
storiesOf('Task')
  .add('inbox task', () => (
    <div className="wrapper">
      <Task {...} />
    </div>
  ));
Or as complex as a mocked provider:
import { MemoryRouter } from 'react-router';

storiesOf('Task')
  .add('inbox task', () => (
    <MemoryRouter initialEntries={['/some/path']}>
      <Task {...} />
    </MemoryRouter>
  ));
If you want to use the same decorator for every story for a given component, use .addDecorator:
storiesOf('Task')
  .addDecorator(story => (
    <div className="wrapper">
      {story()}
    </div>
  ))
  .add('inbox task', () => (
    <Task {...} />
  ));
If you want to use the same decorator for every story in your app, import addDecorator from Storybook:
import { addDecorator } from '@storybook/react';

addDecorator(story => (
  <div id="app-root">
    {story()}
  </div>
));
Addons
Besides the basic API we've seen to write a story, Storybook features a plugin system which allows powerful add-ons both to change the behaviour of stories and to add custom panes in the development environment.
These powerful add-ons help you customize Storybook to:
- Create a living styleguide by further documenting your components with the info, README, and notes add-ons.
- Drive a test suite by adding assertions or run visual tests with the specs and storyshots add-ons.
- Expand your stories to see them in various combinations and backgrounds with knobs, host, props combinations, and more.
Actions
Many components take some callbacks that are provided by their parent and are expected to be triggered when an action happens on that component. Storybook has an interesting add-on which allows you to pass an “action” into the story and when the callback fires, a message is logged in the interface:
import { action } from '@storybook/addon-actions';

storiesOf('Task')
  .add('inbox task', () => (
    <Task
      task={{ /* ... */ }}
      onArchive={action('onArchive')}
      onSnooze={action('onSnooze')}
    />
  ));
Getting started
The easiest way to get started with Storybook is to use the getstorybook tool, a CLI that scans your app and will make the (small) changes required to get Storybook working. You can use it like so:
npm install --global @storybook/cli
cd [your-app]
getstorybook
Requiring stories
getstorybook will add a folder to your app called .storybook/ which includes a file config.js. This file is the "entrypoint" for your storybook, and from here you need to require each file that contains a story for any component. The default is simply to start at a file named stories/index.js, although you can customize this.
Providing global styles
If your app uses global styles — required from an index.js or even embedded directly in the <head> tag — that are needed to render a component correctly, you'll need to require() those styles from the .storybook/config.js file.
import { configure } from '@storybook/react';

// By importing your application's CSS here, we ensure it's included
// for each story
import '../index.css';

function loadStories() {
  require('../stories/index.js');
}

configure(loadStories, module);
Storybook webpack configuration
Storybook ships with its own webpack configuration which closely mirrors that of create-react-app. This means that when you run Storybook and it requires your story files, it will only work if those files themselves only require() things that configuration can understand.

In particular, the configuration does not include any CSS preprocessors. If you have a more complicated webpack config that your components rely on, you can provide a .storybook/webpack.config.js and tweak Storybook's config to mirror your app's:
const path = require('path');

module.exports = {
  module: {
    loaders: [
      {
        test: /\.scss$/,
        loaders: ["style", "css", "sass"],
        include: path.resolve(__dirname, '../')
      }
    ]
  }
};
You can do much more complex things to the webpack config: read more about it.
Getting involved
Storybook was originally developed by Kadira, a company spearheaded by Arunoda Susiripala. The project was recently handed off to community maintainers (I’m one of them!). We are always looking for help in making the project better. If you want to get your hands dirty here are a few simple things you can do:
- Report issues: Use the tool and report issues or ideas at the bug tracker.
- Issue triage: It’s always useful to have more eyes on issues, helping with reproductions and triaging them. Read the guidelines, and chip in to help out.
- Write code: If you are interested in writing code to help, then the community guidelines are a great place to start. Join the Storybook Slack channel to ask questions and seek guidance on ideas.
The Future
Storybook is a young project that’s picking up momentum. Version 3 was recently released, with full support for Webpack 2. The project has an ambitious roadmap that is being planned right now.
In the meantime Storybook is an excellent tool for Component-Driven Development, and an invaluable tool for developing future frontends. | https://www.chromatic.com/blog/introduction-to-storybook/ | CC-MAIN-2020-45 | refinedweb | 1,098 | 53.92 |
I want to get into physics with CocosSharp. I was about to use CCPhysicsBody in my PCL project; however, it's not available in the namespace "CocosSharp".
Checking my NuGet packages (CocosSharp.PCL.Shared), the last package publish date is two years old. Am I missing CCPhysicsBody because my packages are too old? Why is there no up-to-date package then?
Answers
Hi, although I'm not 100% sure of this, the issue might be that the dll that you use doesn't contain the CCPhysicsBody, simply because it wasn't compiled into it. Why? Because if you take a look at the CCPhysicsBody.cs file in the CocosSharp GitHub repository (I can't give you the link, because being new here, the forum rules don't let me post links yet), you can see that there's a preprocessor directive in this file. So basically, if you compile the code without specifying this USE_PHYSICS directive, it won't be in the end result.
Maybe you can compile it for yourself? I dunno
Thanks for this information. I wasn't able to find any information about this on the internet. Why not simply provide a pre-release package of CocosSharp that includes all the experimental features like physics? I love using NuGet packages for their simplicity ...
#include "Object.h" #include "SDL.h" #include <string> using namespace std; SDL_Event event; Uint8 * keystates = SDL_GetKeyState(NULL); int main(int argc, char *args[] ){ bool quit = false; Object play1; while( quit != true ){ if ( keystates[SDLK_LEFT] == SDL_PRESSED) { play1.xvelocity = 5; } else if ( keystates[SDLK_LEFT] == SDL_RELEASED) { play1.xvelocity = 0; } play1.draw_char(); } return 0; }
When I press a key down, the player moves for one frame and then stops, and I have to continually keep pressing the key.
I have tried many different options, including the standard SDL_Event. Nothing seems to work when I have a key-up event. I can change states fine when they're all just key-down. PLEASE HELP
We're Back! Introducing LadyDi!
[This is the first part of a Two-Part introduction : Feature Generation and Feature Selection]
So if you are followers of our blog you will notice that we disappeared for a little over 3 months. Our full-time jobs and contributions to open-source projects take precedence over writing. Not enough hours in a day. Sad, but true.
This re-emergence comes with the release of our latest open-source library: LadyDi. As some of you may know, I name all projects I lead after powerful women in history. This one, as the name suggests, is named after Diana, Princess of Wales. She was more than a pretty face; she was an icon and had the entire British Monarchy wrapped around her finger (one could argue that she planted the seeds for their continued relevancy today).
Like its namesake, LadyDi aims to think outside the box and reinvent parts of a system that is very hard to keep in touch with -- in this case Feature Encoding, Transformation, and Selection in Apache Spark.
Apache Spark has been going way too fast without giving itself time to mature. Each version introduces new bugs - the fixing of which creates new bugs. A potential read on this could be that all of this is intentional: the harder it is to use and maintain, the more incentive there is to hire a 3rd-party providers like Databricks (whose CEO is the founder of the Apache Spark project). LadyDi will not solve all your problems but it will help with the pain that is Feature Encoding and Selection using Apache Spark's transformers. It leaves your code cleaner and eliminates boilerplate you inevitably accrue if you want to use a variation of Spark Transformers.
The fundamental issue arises from an inconsistency in the output and input requirements of Apache Spark's different transformers. This leads to boilerplate and headaches:
val tokenizerA = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("textTokenRaw")

val removerA = new StopWordsRemover()
  .setInputCol(tokenizerA.getOutputCol)
  .setOutputCol("textToken")

val hashingTFA = new HashingTF()
  .setNumFeatures(100)
  .setInputCol(removerA.getOutputCol)
  .setOutputCol("featuresRaw")

val standardScaler = new StandardScaler()
  .setInputCol("featuresRaw")
  .setOutputCol("features")

val pipeline = new Pipeline()
  .setStages(Array(tokenizerA, removerA, hashingTFA, standardScaler))

val pipelineModel = pipeline.fit(featureData)
val hashedData = pipelineModel.transform(featureData)

hashedData.select("x", "y", "features").as[EncodedFeatures]
It's pretty gross. Because if right after that you wanted to use another encoder... say VectorAssembler, you'd have to start from scratch!
val assembler = new VectorAssembler()
  .setInputCols(Array("a", "b", "c"))
  .setOutputCol("features")

val pipeline = new Pipeline()
  .setStages(Array(assembler))

val pipelineModel = pipeline.fit(data)
data.take(10).foreach(println(_))

val hashedData = pipelineModel.transform(data)
hashedData.select("label", "features")
LadyDi gets rid of this nonsense and you can just chain transformers as you please like so:
def stringFeature() =
  new Tokenizer() ::
  new StopWordsRemover() ::
  new HashingTF().setNumFeatures(10) ::
  new Normalizer() ::
  Nil
IShellView::TranslateAccelerator method
Translates keyboard shortcut (accelerator) key strokes when a namespace extension's view has the focus.
Syntax
Parameters
- lpmsg
Type: LPMSG
The address of the message to be translated.
Return value
Type: HRESULT
Returns S_OK if successful, or a COM-defined error value otherwise.
If the view returns S_OK, it indicates that the message was translated and should not be translated or dispatched by Windows Explorer.
Remarks
This method is called by Windows Explorer to let the view translate its keyboard shortcuts.
Notes to Calling Applications
Windows Explorer calls this method before any other translation if the view has the focus. If the view does not have the focus, it is called after Windows Explorer translates its own keyboard shortcuts.
Notes to Implementers
By default, the view should return S_FALSE so that Windows Explorer can either do its own keyboard shortcut translation or normal menu dispatching. The view should return S_OK only if it has processed the message as the keyboard shortcut and does not want Windows Explorer to process it further.
Requirements
See also | https://msdn.microsoft.com/en-us/library/windows/desktop/bb774842(v=vs.85).aspx | CC-MAIN-2015-18 | refinedweb | 177 | 53.81 |
JScript 8.0.
Introduces what is new in JScript 8.0.
Provides a collection of links to topics that explain how to write, edit, and debug code with JScript 8.0.
Includes a list of links to topics that explain how to display information from a command program, from ASP.NET, and in a browser.
Comprises a guide to the elements and procedures that encompass Regular Expressions in JScript 8.0. Topics explain the concept of Regular Expressions, proper syntax, and appropriate use.
Lists elements that comprise JScript Language Reference and links to topics that explain the details behind the proper use of language elements.
Lists language reference topics that explain how to launch Visual Studio and how to build from the command prompt.
Compares keywords, data types, operators, and programmable objects (controls) for Visual Basic, C++, C#, JScript, and Visual FoxPro.
Contains links to topics that explain the namespaces in the .NET Framework class library.
Lists language reference topics that explain how to use commands to interact with the IDE from the Command Window and Find/Command box.
Provides links to topics that discuss the steps involved in the development of specific applications or how to use major application features. | http://msdn.microsoft.com/en-us/library/72bd815a.aspx | crawl-002 | refinedweb | 201 | 56.25 |
PxrAttribute allows the user to read attributes attached to (stored on) a node. An example would be to add a color attribute to a set of objects, to be read by a material later. In this way, a material can change its result based on the object being rendered, instead of requiring a different material. Below, color attributes are attached to the spheres of the shader ball. A single PxrSurface material renders with a different diffuse color as specified by each shape's defined attribute. Examples of usage are below.
There may be a performance penalty for using this node in many places in your scene. Efficiency is key; avoid too many evaluations of user attributes if not necessary.
This field takes a string that identifies the attribute. The string should include the namespace for the attribute and the attribute name, separated by a colon. For example, trace:maxdiffusedepth or user:Ball.
This specifies the type of variable to read and must match what was specified above on the other nodes.
A float result.
The color result.
DCC applications may use a different mechanism for applying a user attribute. Below are two examples for applying a color user attribute named "Ball" to a shape:
Add a Pre Shape MEL attribute to the shape using the Attributes > RenderMan menu:
RiAttribute "user" "color Ball" 1 0.2 0.65
The below is an OpScript example of the same attribute in Katana:
gb = GroupBuilder()
gb:set("value", FloatAttribute({1.0, 0.2, 0.65}, 3))
gb:set("type", StringAttribute("color"))
Interface.SetAttr("prmanStatements.attributes.user.Ball", gb:build())
See Using PxrMatteID as an example of how to add a user attribute in Houdini.
Load image
Image processing often works on grayscale images that were stored as PNG files. How do we import / export such a file in Python?
- Here is a recipe to do this with Matplotlib using the imread function (your image is called lena.png).
from pylab import imread, imshow, gray, mean

a = imread('lena.png')  # generates a RGB image, so do
aa = mean(a, 2)         # to get a 2-D array
imshow(aa)
gray()
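As an aside, mean(a, 2) weights the three channels equally; a common alternative (an illustrative aside, not part of the recipe above) is a weighted channel sum using the classic ITU-R BT.601 luma coefficients. A NumPy-only sketch:

```python
import numpy

def to_gray(rgb, weights=(0.299, 0.587, 0.114)):
    """Collapse an (h, w, 3) RGB array into an (h, w) grayscale array
    using a weighted sum over the color channels."""
    w = numpy.asarray(weights, dtype=float)
    return rgb[..., :3] @ w  # dot product over the last (channel) axis

if __name__ == "__main__":
    img = numpy.random.uniform(size=(4, 4, 3))  # stand-in for imread() output
    print(to_gray(img).shape)  # -> (4, 4)
```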
This permits some processing before further exporting, such as in Cookbook/Matplotlib/converting_a_matrix_to_a_raster_image.
import numpy
import Image

mode = 'L'
size = (256, 256)
imNew = Image.new(mode, size)
mat = numpy.random.uniform(size=size)
data = numpy.ravel(mat)
data = numpy.floor(data * 256)

imNew.putdata(data)
imNew.save("rand.png")
- This kind of function also lives under scipy.misc; see for instance scipy.misc.imsave to create a color image:
from scipy.misc import imsave
import numpy

a = numpy.zeros((4, 4, 3))
a[0, 0, :] = [128, 0, 255]
imsave('graybackground_with_a_greyish_blue_square_on_top.png', a)
- to define the range, use:
from scipy.misc import toimage
import numpy

a = numpy.random.rand(25, 50)  # between 0. and 1.
toimage(a, cmin=0., cmax=2.).save('low_contrast_snow.png')

(adapted from )
- there was another (more direct) method suggested by
Section author: LaurentPerrinet | http://scipy-cookbook.readthedocs.io/items/Matplotlib_LoadImage.html | CC-MAIN-2017-39 | refinedweb | 211 | 53.98 |
Using a PWM Device in Zephyr
Using devices in Zephyr is tricky because there are so many options and settings at first that it’s just not clear from documentation and samples how to do even easy things.
My Simple Example
I have a board based on the Nordic Nrf52840 that uses one pin (Pin 33) to play simple tones on a speaker. I’d like to use the PWM facility to drive the speaker as easily as possible.
There are a few Zephyr samples that use PWM. The simplest is samples/basic/blink_led. There aren’t any usable comments in the sample but it does work.
The Device Tree
We start by looking at the existing device tree that we’ll use/override.
Parents
The nrf52840 is an arm-based board so inside of the {zephyr}/boards/arm folder contains the folder for my board — for example the folder nrf52840dk_nrf52840 for the dev kit. Inside of that folder, the myboard.dts file has this line to include the base CPU definition:
#include <nordic/nrf52840_qiaa.dtsi>
The file dts/arm/nordic/nrf52840_qiaa.dts defines the qiaa rev of the nrf52840 chip (the one with 1MB of flash and 256KB of ram). That file includes the base nordic devicetree file at dts\arm\nordic\nrf52840.dtsi. Looking at that tree for pwm we find:
pwm0: pwm@4001c000 {
compatible = "nordic,nrf-pwm";
reg = <0x4001c000 0x1000>;
interrupts = <28 1>;
status = "disabled";
label = "PWM_0";
#pwm-cells = <1>;
};

pwm1: pwm@40021000 {
compatible = "nordic,nrf-pwm";
reg = <0x40021000 0x1000>;
interrupts = <33 1>;
status = "disabled";
label = "PWM_1";
#pwm-cells = <1>;
};
... (pwm2 and pwm3 entries as well)
The nrf52840 has 4 pwm modules with 4 channels each. These entries partially define devices pwm0, pwm1, … as parts of the nrf52840 SOC (system on chip).
In words: pwm0 is a nordic,nrf-pwm device; it resides in the 4K (0x1000) block at 0x4001c000, is named "PWM_0", and uses interrupt 28. It is disabled by default, and the required #pwm-cells value is shown as well.
Examine the yaml file at dts/bindings/pwm/nordic,nrf-pwm.yaml to see the meta-definition of the nrf-pwm device.
Our Defines
Now for the higher level board definition (in myboard.dtsi) add something like:
&pwm0 {
status = "okay";
ch0-pin = <33>;
ch0-inverted;
};
The & indicates that we’re adding/changing the base pwm0 definition to
- status = “okay” enables it (required)
- set the channel 0 pin to 33 (required)
- say that the channel 0 pin is inverted (optional)
We could add more channels and pins but here I’m just using one pin/channel.
Now add one line to the myboard.yaml file in the supported: section
- pwm
Now we have a pwm defined but we’re not yet easily using it in the board. To encapsulate it comfortably we want to give the pwm/channel pair a name. The simplest approach is to note that pwm-leds has a high-level pwm definition that isolates one pwm pin. So we can add to the myboard.dtsi file (in the device area at the top):
aliases {
pwmsound = &pwm_dev0;
};

pwmdevs {
compatible = "pwm-leds";
pwm_dev0: pwm_dev_0 {
pwms = <&pwm0 33>;
};
};
This defines the pwmdevs device group as having one pwm-leds device named pwm_dev0 that uses pin 33 (hence channel 0). It has a usable name (alias) of pwmsound. Personally, I find the use of the pin instead of the channel here odd.
This is a lot of abstraction to be able to refer to channel 0 of pwm0 by name pwmsound.
To ensure that the pwm code from the zephyr source is included, we have to add a line
CONFIG_PWM=y
to the prj.conf file in the project. The project would compile without this line but then it would fail to find any pwm devices.
In Source Code
To use it in code it helps to know what DEFINEs this produced. Take a look at the generated file build/zephyr/include/generated/devicetree_unfixed.h and look for pwmsound. Here's a piece of the file:
/*
 * Devicetree node: /pwmdevs/pwm_dev_0
 *
 * Node identifier: DT_N_S_pwmdevs_S_pwm_dev_0
 *
 * Binding (compatible = pwm-leds):
 *   $ZEPHYR_BASE\dts\bindings\led\pwm-leds.yaml
 *
 * Description:
 *   PWM LED child node
 */

/* Node's dependency ordinal: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_ORD 17

/* Ordinals for what this node depends on directly: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_REQUIRES_ORDS \
    14, /* /pwmdevs */ \
    16, /* /soc/pwm@4001c000 */

/* Ordinals for what depends directly on this node: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_SUPPORTS_ORDS /* nothing */

/* Existence and alternate IDs: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_EXISTS 1
#define DT_N_ALIAS_pwmsound DT_N_S_pwmdevs_S_pwm_dev_0
#define DT_N_NODELABEL_pwm_dev0 DT_N_S_pwmdevs_S_pwm_dev_0

/* Special property macros: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_STATUS_okay 1

/* Generic property macros: */
#define DT_N_S_pwmdevs_S_pwm_dev_0_P_pwms_IDX_0_VAL_channel 26
Finally we can use the generated defines in source code. DT_ALIAS(pwmsound) returns the very long define with pwm_dev_0 and then we can extract the label and channel
#if DT_NODE_HAS_STATUS(DT_ALIAS(pwmsound), okay)
#define PWM_DRIVER DT_PWMS_LABEL(DT_ALIAS(pwmsound))
#define PWM_CHANNEL DT_PWMS_CHANNEL(DT_ALIAS(pwmsound))
#else
#error "Choose a supported PWM driver"
#endif
which defines PWM_DRIVER ("PWM_0") and PWM_CHANNEL (at pin 33) for the rest of the code
struct device *pwm_dev;
u64_t cycles;
pwm_dev = device_get_binding(PWM_DRIVER);
if (!pwm_dev) {
printk("Cannot find %s!\n", PWM_DRIVER);
return;
}

pwm_get_cycles_per_sec(pwm_dev, PWM_CHANNEL, &cycles);
....
Do I Need to Do All This?
Not really. Just define and enable at least one pwm channel (pin) - see the below dtsi code and add the aforesaid .yaml lines.
&pwm0 {
status = "okay";
ch0-pin = <33>;
};
Then refer to pwm0 and the pin more directly. Look at the devicetree_unfixed.h file to find the right defines.
Caveats
The Zephyr PWM interface is incredibly primitive and poorly documented. There's no on or off; you just set a PWM period (cycle time) and pulse width (on time). Setting both positive turns the PWM on, and setting the pulse width to zero (0) turns it off in the nrf52840 PWM API.
Other SOCs are different (some don't let you turn off the PWM). Also, the nrf52840 PWM API always declares a cycle frequency of 16 MHz, but when you set the period / turn it on, it sets the prescaler automatically. To change the prescaler value you have to turn the PWM off, which produces opaque but working code.
if (pwm_pin_set_usec(pwm_dev, PWM_CHANNEL, timeus, timeus / 2U)) {
printk("pwm pin set fails\n");
return;
}

k_sleep(MSEC_PER_SEC * 1U); // play for 1 second

// turn it off
if (pwm_pin_set_usec(pwm_dev, PWM_CHANNEL, timeus, 0)) {
printk("pwm off fails\n");
return;
} | https://medium.com/home-wireless/using-a-pwm-device-in-zephyr-7100d089f15c?source=collection_home---4------15----------------------- | CC-MAIN-2022-21 | refinedweb | 1,040 | 54.22 |
Dan's suggestion works a treat. I had to make my property a string but that wasn't an issue. Here's the solution:
public class TopicIndex
{
    public string CreatorUserId { get; set; }
    public int ItemCategory { get; set; }
}

var docs = _searchClient.Search<TopicIndex>()
    .Filter(x => x.ItemCategory.Match(1))
    .TermsFacetFor(x => x.CreatorUserId)
    .Take(0)
    .GetResult();

var topAuthors = docs.TermsFacetFor(x => x.CreatorUserId)
    .Terms
    .OrderByDescending(t => t.Count)
    .Select(t => new { UserId = t.Term, Count = t.Count });
I'm trying to sort some results from EPiServer Find by the count of a grouping-like function. However, I can't see how I would achieve this with the Find APIs. At the moment I'm having to get documents back from Find (filtered by a certain value) and then do the grouping and sorting in memory with LINQ to Objects. Below is a summary of my indexed type and code:
Any ideas how I could achieve this just using Find?
Thanks. | https://world.episerver.com/forum/developer-forum/EPiServer-Search/Thread-Container/2016/1/sorting-results-by-quotgroup-byquot-like-function/ | CC-MAIN-2020-16 | refinedweb | 160 | 66.13 |
Bokeh plots with Flask and AJAX
During the weekend, I discovered Bokeh, a Python visualization library for the web. The samples looked nice, so I played around a bit, mostly following the accessible Quick Start guide.
Eventually, I decided to build a small dashboard with Bokeh for an existing Flask application and wanted the plots to automatically fetch data updates using AJAX requests. This meant fiddling around a bit and reading more documentation. If you plan on doing something similar, the following write-up hopefully saves you some time and pitfalls.
What are we going to build?
With Bokeh you can build stand-alone and server-based data visualizations. The stand-alone visualizations are generated once in your Python environment and then saved to a file (or just delivered to a browser). The plots are then drawn and made interactive using BokehJS, the client-side JavaScript library, that needs to be included in your page.
If you want your visualizations to be based on large datasets, use streaming data, auto-downsamling for efficiency, and other goodness, you can use the Bokeh Server component.
For this tutorial I assume that you…
- Don’t want to run another server component (i.e. Bokeh Server)
- Have an existing Flask app that should do the processing / data delivery
- Nevertheless want to refresh your visualizations in short intervals using AJAX requests
Flask demo app
To simulate our exisiting application, we quickly install Flask into a virtual environment, and create a simple
app.py.
virtualenv venv --python=`which python3`
source venv/bin/activate
pip install Flask bokeh
mkdir templates
touch app.py
export FLASK_APP=app.py
export FLASK_DEBUG=1
And our
app.py contains for now:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello World!'
Before continuing, let’s quickly check that our Flask app runs OK:
flask run
 * Serving Flask app "app"
 * Running on (Press CTRL+C to quit)
Want to run Flask with mod_wsgi? I wrote a short guide on how to do this on macOS.
A simple plot
Let’s enrich our demo app by ouputting a simple, static plot. To set things up, we first create two template files for our page skeleton.
Add a
layout.html to the templates folder and give it the following content:
<!DOCTYPE html>
<html lang="en">
<head>
    <!-- Bokeh includes -->
    <link rel="stylesheet" href="" type="text/css" />
    <script type="text/javascript" src=""></script>
</head>
<body>
    <div>
        <h1>Bokeh sample</h1>
        {% block body %}{% endblock %}
    </div>
</body>
</html>
Now add
dashboard.html to the templates folder:
{% extends "layout.html" %} {% block body %} {% for plot in plots %} {% for part in plot %} {{part | safe}} {% endfor %} {% endfor %} {% endblock %}
The template files should look familiar to you if you have worked with Flask/Jinja before. In a nutshell: we have our base html skeleton (
layout.html) and our
dashboard.html which extends the base. In
dashboard.html we output all parts (we see that next) of all plots that are contained in the
plots variable.
Time to start plotting: in
app.py, add the following imports:
from flask import render_template from bokeh.plotting import figure from bokeh.embed import components
Finally, add the dashboard route and our first plot function:
@app.route('/dashboard/') def show_dashboard(): plots = [] plots.append(make_plot()) return render_template('dashboard.html', plots=plots) def make_plot(): plot = figure(plot_height=300, sizing_mode='scale_width') x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] y = [2**v for v in x] plot.line(x, y, line_width=4) script, div = components(plot) return script, div
Going to, we should now see the following plot:
What’s going on?
We imported
figure and
components from Bokeh.
figure is used to create a plot (more precise: a Figure object) and takes arguments that apply to the whole plot. To keep things simple, we just gave the Figure object a height and a responsive width.
To create a line chart (or, in Bokeh terms: to add line glyphs), we then used the line method, passing in our prepared lists of x and y.
components conveniently prepares the HTML components to embed our plot into our site. We get both a script and a div tag, which we pass on to our previously defined template (see
render_template() and
dashboard.html).
Without much effort, we now have an interactive plot embedded in our web app. Cool!
There are various ways to style the plot further, control what widgets are shown, etc., but we leave it here, and focus on the next step: make it refresh automatically.
Refreshing the plot with new data
For refreshing our plot via an AJAX request we have two options: either we replace the whole data for the plot with the data returned by our web app, or we only deliver data updates and then append this data to the plot.
Since we don’t have any data store to pull new data from in this tutorial, we will continuously generate data for our exponential curve like above and append it to our plot. To keep track of where we are, we simply keep the state of x in a global variable.
Don’t define a global variable
x when you are working with your real web app 🙃. Depending on what you are doing, you may also want to keep track of different states and not just one global state until the app is restarted. If your dataset is not too large, you might as well be fine with the replace option.
In
app.py we first add a few more imports. We need
jsonify and
request from Flask to format our data output and get our url. And we need
AjaxDataSource from Bokeh, to tell the plot that we have a http data source.
from flask import Flask, render_template, jsonify, request from bokeh.models.sources import AjaxDataSource
Next we create our new plotting function in
app.py:
def make_ajax_plot(): source = AjaxDataSource(data_url=request.url_root + 'data/', polling_interval=2000, mode='append') source.data = dict(x=[], y=[]) plot = figure(plot_height=300, sizing_mode='scale_width') plot.line('x', 'y', source=source, line_width=4) script, div = components(plot) return script, div
Compared to the first
make_plot() we only have a few changes: instead of passing our data series directly to the line method, we now pass our previously generated
AjaxDataSource object as
source argument.
x and
y here refer to the “columns” in the AjaxDataSource.
AjaxDataSource takes as argument the address of our new data route, accessible via (see below), to fetch the new x,y values. We additionally set the update interval to 2 seconds, and choose the
append mode (the other option would be
replace, see above).
To generate and deliver our continuous data updates, we add the
/data/ route to our
app.py:
x = 0 @app.route('/data/', methods=['POST']) def data(): global x x += 1 y = 2**x return jsonify(x=x, y=y)
Note that we are not regenerating the whole components for every update. We only deliver json formatted data, one (x, y) pair at a time.
Finally, let’s add the second plot to our dashboard:
plots.append(make_ajax_plot())
Tadaa!
Opening our dashboard should now give us a plot, that “grows” every two seconds by appending fresh data retrieved from our Flask app:
If you are having trouble reproducing the result, check out the GitHub repo with the final state. | https://davidhamann.de/2018/02/11/integrate-bokeh-plots-in-flask-ajax/ | CC-MAIN-2018-09 | refinedweb | 1,221 | 64 |
Knowing how to reverse a Python string is basic knowledge you should have as Python developer. This can also apply to other data types.
Python does not provide a built-in function to reverse the characters in a string. To reverse a string in Python you can use the slice operator with the syntax [::-1]. You can also use the reversed() function together with the string join() function.
Let’s go through some examples that show you how to approach this in practice.
How Do You Reverse a String in Python?
I want to give you the solution to the problem first, then we will understand how this solution exactly works.
Open the Python shell and define a Python string:
>>> word = "Python"
To reverse a string in Python you can use the slice operator with the following syntax: [::-1].
>>> print(word) Python >>> print(word[::-1]) nohtyP
Fantastic! Nice and concise!
As you can see the second print statement returns the reversed string.
If you want to make this code reusable you can create a function that given a string returns the reversed string.
>>> def reverse_word(word): ... return word[::-1] ... >>> print(reverse_word("Python")) nohtyP
In the next section we will learn how the expression [::-1] works.
What is the Meaning of [::-1] in Python?
You might be wondering, what does the syntax [::-1] mean?
This is the Python slice operator that in its basic form allows to select a substring from a string.
For example, let’s say I want to select the first three characters of our word, I would use the following…
>>> print(word[0:3]) Pyt
The substring starts at the character with index 0 and ends at the character with index 3 – 1 so 2.
If we omit the first zero in the slice expression the output is the same:
>>> print(word[:3]) Pyt
But, the syntax we have used before for the slice operator is slightly different:
[::-1]
It follows extended syntax for the slice operator:
[begin:end:step]
By default the value of step is 1. Also if you don’t specify any value for begin the Python interpreter will start from the beginning of the string.
And, if you don’t specify a value for end Python will go until the end of the string.
Let’s see what happens if we don’t specify a value for begin and end and we set the step to 1:
>>> print(word) Python >>> print(word[::1]) Python
Python goes through every character of the string from the beginning to the end using a step equal to 1.
The only difference in the expression we have seen before to reverse a string is that we have used a step equal to -1.
When you specify a step equal to -1 in a slice operator the Python interpreter goes through the characters of the string backwards. This explain the output returned by the expression [::-1].
>>> print(word[::-1]) nohtyP
How Do You Reverse a String with a Python While Loop?
We can obtain the same output returned by the slice operator but using a while loop.
This is not something I would necessary use if I had to reverse a string in a real application considering that it requires a lot more lines of code compared to the slice operator.
At the same time knowing how to reverse a Python string with a while loop helps you develop the way you think when you look for a solution to a coding problem.
Here is what we want to do…
Start from the end of our string and go backwards one character at the time using a while loop.
Every character is stored in a new string that at the end will be our original string but reversed.
def reversed_word(word): reversed_word = '' index = len(word) - 1 while index >= 0: reversed_word += word[index] index -= 1 return reversed_word
As you can see we start at the index len(word) -1, basically the last character of our string. And we go backwards as long as the index is greater than or equal to zero.
Note: writing index -= 1 is the same as index = index – 1. So it’s just a more concise way to decrement the index by 1.
Let’s test this code by calling our function:
print(reversed_word("Python")) [output] nohtyP
It works fine!
What Does the reversed() Function Do?
Python also provides a built-in function called reversed().
What does it do? Can we use it to reverse the characters in a string?
According to the official Python documentation this is what the reversed() function does…
So, it returns a reverse iterator.
>>> print(reversed("Greece")) <reversed object at 0x7ff8e0172610>
Here is what we get if we cast the reverse iterator to a list.
>>> print(list(reversed("Greece"))) ['e', 'c', 'e', 'e', 'r', 'G']
One way to use the reversed() function to reverse a Python string is to also use the string join() function.
>>> print(''.join(reversed("Greece"))) eceerG
We are using an empty separator character to concatenate the characters returned by the reverse iterator.
We could also cast the iterator to a list before passing it to the join function.
>>> print(''.join(list(reversed("Greece")))) eceerG
These approaches to solve our problem are all one liners.
In one of the final sections of this tutorial we will analyse which approach is the most performant.
How to Reverse a Unicode String
Unicode is a standard used to represent any characters in any language (and not only, with Unicode you can even represent emojis).
You might want to use Unicode if you have to handle strings in languages other than English.
For example, the following word means ‘Good morning’ in Greek:
Καλημερα
Let’s print each letter by using their Unicode representation:
>>> print('\U0000039A') Κ >>> print('\U000003B1') α >>> print('\U000003BB') λ >>> print('\U000003B7') η >>> print('\U000003BC') μ >>> print('\U000003B5') ε >>> print('\U000003C1') ρ >>> print('\U000003B1') α
And now let’s print the full word still using the Unicode representation for each character:
>>>>> print(word) Καλημερα
We can reverse the string by using the first approach we have covered in this tutorial (the slice operator).
>>> print(word[::-1]) αρεμηλαΚ
Performance Comparison of Approaches to Reverse a Python String
I would like to confirm which one of the approaches to reverse the characters of a string is the fastest.
To measure the performance of each code implementation we will use the Python timeit module.
Reverse a string using the slice operator
import timeit testfunction = ''' def reversed_word(): return 'hello'[::-1] ''' print(timeit.timeit(testfunction)) [output] 0.054680042
Reverse a string using a while loop
import timeit testfunction = ''' def reversed_word(): word = 'hello' reversed_word = '' index = len(word) - 1 while index >= 0: reversed_word += word[index] index = index - 1 return reversed_word ''' print(timeit.timeit(testfunction)) [output] 0.063328583
Reverse a string using the join() and reversed() functions
import timeit testfunction = ''' def reversed_word(): word = 'hello' return ''.join(reversed(word)) ''' print(timeit.timeit(testfunction)) [output] 0.062542167
Reverse a string using the join(), list() and reversed() functions
import timeit testfunction = ''' def reversed_word(): word = 'hello' return ''.join(list(reversed(word))) ''' print(timeit.timeit(testfunction)) [output] 0.059792666999999994
The fastest implementation is the one using the slice operator.
Conclusion
In this tutorial we have seen that it’s possible to reverse the characters of a string using Python in many different ways.
The most performant implementation uses the slice operator but it’s also possible to use the reversed() function together with the join() and list() functions.
Implementing your own function using a looping construct doesn’t really make sense considering that it wouldn’t be as performant as the slice operator and it would take a lot more lines of code.
I’m a Tech Lead, Software Engineer and Programming Coach. I want to help you in your journey to become a Super Developer! | https://codefather.tech/blog/reverse-python-string/ | CC-MAIN-2021-31 | refinedweb | 1,303 | 61.26 |
C# Programming/The .NET Framework/Windows Forms< C Sharp Programming | The .NET Framework
Contents
System.Windows.FormsEditPageor
TabControl) that ultimately resides on the
Form. When automatically created in Visual Studio, it is usually subclassed as
Form1.
- Button - a clickable button
- TextBox - a singleline or multiline textbox that can be used for displaying or inputting text
- RichTextBox - an extended
TextBoxthat can display styled text, e.g. with parts of the text colored or with a specified font. RichTextBox can also display generalized RTF document, including embedded images.
- Label - simple control allowing display of a single line of unstyled text, often used for various captions and titles
- ListBox - control displaying multiple items (lines of text) with ability to select an item and to scroll through it
- ComboBox - similar to
ListBox, but resembling a dropdown menu
- TabControl and TabPage - used to group controls in a tabbed interface (much like tabbed interface in Visual Studio or Mozilla Firefox). A
TabControlcontains a collection of
TabPageobjects.
- DataGrid - data grid/table view
Form classEdit
The
Form class (System.Windows.Forms.Form) is a particularly important part of that namespace because the.
EventsEdit
An event is an action being taken by the program when a user or the computer makes an action (for example, a button is clicked, a mouse rolls over an image, etc.). An event handler is an object that determines what action should be taken when an event is triggered.
using System.Windows.Forms; using System.Drawing; public class ExampleForm : Form // inherits from System.Windows.Forms.Form { public ExampleForm() { this.Text = "I Love Wikibooks"; // specify title of the form this.Width = 300; // width of the window in pixels this.Height = 300; // height in pixels Button HelloButton = new Button(); HelloButton.Location = new Point(20, 20); // the location of button in pixels HelloButton.Size = new Size(100, 30); // the size of button in pixels HelloButton.Text = "Click me!"; // the text of button // When click in the button, this event fire HelloButton.Click += new System.EventHandler(WhenHelloButtonClick); this.Controls.Add(HelloButton); } void WhenHelloButtonClick(object sender, System.EventArgs e) { MessageBox.Show("You clicked! Press OK to exit of this message"); } public static void Main() { Application.Run(new ExampleForm()); // display the form } }
ControlsEdit
The Windows Forms namespace has a lot of very interesting classes. One of the simplest and important is the
Form class. A form is the key building block of any Windows application. It provides the visual frame that holds buttons, menus, icons and title bars together. Forms can be modal and modalless, owners and owned, parents and children. While forms could be created with a notepad, using a form editor like VS.NET, C# Builder or Sharp Develop makes development much faster. In this lesson, we will not be using an IDE. Instead, save the code below into a text file and compile with command line compiler.
using System.Windows.Forms; using System.Drawing; public class ExampleForm : Form // inherits from System.Windows.Forms.Form { public ExampleForm() { this.Text = "I Love Wikibooks"; // specify title of the form this.BackColor = Color.White; this.Width = 300; // width of the window in pixels this.Height = 300; // height in pixels // A Label Label TextLabel = new Label(); TextLabel.Text = "One Label here!"; TextLabel.Location = new Point(20, 20); TextLabel.Size = new Size(150, 30); TextLabel.Font = new Font("Arial", 12); // See! we can modify the font of text this.Controls.Add(TextLabel); // adding the control to the form // A input text field TextBox Box = new TextBox(); // inherits from Control Box.Location = new Point(20, 60); // then, it have Size and Location properties Box.Size = new Size(100, 30); this.Controls.Add(Box); // all class that inherit from Control can be added in a form } public static void Main() { Application.EnableVisualStyles(); Application.Run(new ExampleForm()); // display the form } } | https://en.m.wikibooks.org/wiki/C_Sharp_Programming/The_.NET_Framework/Windows_Forms | CC-MAIN-2016-18 | refinedweb | 624 | 52.15 |
Teemo
Teemo is a dart library which provides an intuitive interface to the League of Legends LCU API
Installing Teemo
Teemo is hosted on pub.dev, which means you can get it with Flutter's built in
pub add command.
$ flutter pub add teemo
This will add a line to your pubspec.yaml:
dependencies: teemo: ^0.2.4
Note: 0.2.4 was the current version at time of writing. It will show up in your pubspec as whatever is the most current version.
If you do not see this in your pubspec.yaml, run:
$ flutter pub get
Congratulations, you can now import Teemo into your Dart code with:
import 'package:teemo/teemo.dart';
Using Teemo
Method documentation to help you with your development can be found here.
Since Teemo is asynchronous, you will need to use it inside asynchronous widgets like
FutureBuilder. An example is below, and a working code sample can be found at example/lib/main.dart.
FutureBuilder<Teemo>( future: _teemo, // a previously-obtained Future<String> or null builder: (BuildContext context, AsyncSnapshot<Teemo> snapshot) { List<Widget> children; if (snapshot.hasData) { children = <Widget>[ OutlinedButton( onPressed: () { snapshot.data?.setCurrentRunePage( Rune.Conqueror, Rune.Triumph, Rune.LegendTenacity, Rune.LastStand, Rune.Transcendence, Rune.GatheringStorm, Rune.AdaptiveForcePerk, Rune.AdaptiveForcePerk, Rune.HealthPerk, name: 'Riven'); }, child: Text("Send Me To A Custom Lobby"), ) ]; } else { children = const <Widget>[ SizedBox( width: 60, height: 60, child: CircularProgressIndicator(), ), Padding( padding: EdgeInsets.only(top: 16), child: Text('Awaiting result...'), ) ]; } return Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: children, ), ); }, )
Understanding Events and Subscriptions
Q: What are events?
A:
Events are server side concepts. There are a many events you can subscribe to.
They are names for many different api updates that fall under their umbrella.
Use -
request('get', '/help') - to get a JSON blob with a list of all possible events to subscribe to.
When events are triggered, they send the new data to Teemo.
Each update is of an endpoint underneath Event umbrella.
TL;DR
Events are names for groups of endpoints.
when an endpoint in an event changes, you get sent the new data for that endpoint.
Q: How do I use events?
A:
default_handler argument
runs every time a message is received and not otherwise handled. You don't need to
supply a
default_handler. If you don't the automatic behavior is to log it as info.
Q: How do I interact with messages coming from events?
A:
Messages come from endpoints. To catch an endpoint for special processing, use subscription_filter_endpoint from Teemo, or filter_endpoint from the subscription. these methods make the 'handler' argument run instead of the event's default handler.
REMEMBER TO MAKE EVENT HANDLER METHODS ASYNC
Using the Websocket
Using the websocket is as simple as subscribing to an event and possibly filtering endpoints. Due to the callback nature of this architecture, all websocket actions are automatically called when the LCU sends a message that should be handled by a subscription or subscription filter.
Making RESTful requests
Making rest requests is as simple as:
Teemo teemo = await Teemo.create(); await teemo.request('POST', '/lol-lobby/v2/lobby', body: { "customGameLobby": { "configuration": { "gameMode": "PRACTICETOOL", "gameMutator": "", "gameServerRegion": "", "mapId": 11, "mutators": {"id": 1}, "spectatorPolicy": "AllAllowed", "teamSize": 5 }, "lobbyName":"Name", "lobbyPassword":null }, "isCustom":true });
This code snippet will send the user to a custom practice tool lobby.
Teemo isn't endorsed by Riot Games and doesn't reflect the views or opinions of Riot Games or anyone officially involved in producing or managing Riot Games properties. Riot Games, and all associated properties are trademarks or registered trademarks of Riot Games, Inc. | https://pub.dev/documentation/teemo/latest/index.html | CC-MAIN-2022-33 | refinedweb | 591 | 58.18 |
WZL x GCX x IOTA — Status Report 02
Data(-base) specification and data preparation
Co-Authors: Semjon Becker and Felix Mönckemeyer
Previous Stories
- Part 0: Overview of the Industrial IOTA Lab Aachen
- Part 1: WZL x GCX x IOTA — Status Report 01: About data acquisition and first transactions
Retrospective
This PoC is about an industrial fineblanking machine type XFT 2500 speed from Feintool AG. It ist used for the mass production of safety-critical components, such as brake carrier plates or belt straps. In the first article we achieved to extract selected data from the machine, converted it from analog to digital, stored it in a .json-file and uploaded it into the official testnet. However, we did not optimize any of those steps, we just wanted to go the complete mile. The following video shows you the machine and the process, just in case you missed it in the Status Report 01.
Focus of Status Report 02
This article shows how we prepared the machine data, how we decided to attach the data package to the tangle and which technology stack we chose. Check Status Report 01 to see the PoC’s architecture.
Data package specification
For this PoC we have only selected a set of data which we can quickly measure. This has enabled us to work through all task packages relatively efficiently. For a productive scenario, it is certainly necessary to question which data is needed and which data is to be used. With the completion of this PoC, however, this is only a development decision and no longer a technological hurdle.
Data set
Nevertheless, we have tried to choose practical data. Our data package thus contains measured process data as well as metadata. The metadata includes a unique ID (type: integer) which is determined by digital image processing from the component surface and a generated hash (work in progress, not yet stable) and the material name (type: string) according to the international standard of the material used. Other metadata such as the name of the product, the name of the machine operator, the name of the manufacturer or the customer, etc. are conceivable at this point. In addition, real machine data has been measured. This includes the maximum punch force (type: float, unit: kN) of the ram, which is the reaction force resulting from the cutting contour and the material, and the punch stroke (type: float, unit: mm). Since the material properties and sheet thickness are not constant from part to part due to material fluctuations, the punch force and punch stroke differ each time. The die roll (type: float, unit: mm) is currently estimated on the basis of existing analytical models, as we still have to work on the measurement technology. The timestamp (type: unsigned integer) is calculated using the UNIX timestamp and describes the time of production, not the time of upload.
Exemplary raw data set file
The final data package looks like the following lines. It is divided into three parts: Firstly, public key and signature hash of the data package, secondly the MAM channel to the Tangle, thirdly the data itself. The file format chosen is JSON.
# Data set
[
{
"pubKey": "-----PUBLIC KEY-----",
"sign": "3fe...930"
},
{
"mamAddr": "9QJ...TMX",
"sidekey": "BTE...FEP",
"root": "ABC...9QW"
},
{
"id": "1c06b4ab6c7d3cdff34a2960",
"material": "X210CrW12",
"punch_force": 2492.5676,
"punch_stroke": 15.2656,
"die_roll": 2.5865,
"timestamp": 1407390451216
}
]
The machine to MAM channel mapping takes place in our backend. This decouples the machine from IOTA as carrier medium. As such the backend contains a mapping of machine identification to the corresponding MAM channel address. For each machine identification only one MAM channel exists.
Data preparation
Data preparation is mainly about data signing. We do not want to store the data itself on the Tangle, neither should it be freely accessible for anyone. Thus, it needs to be signed. As hashing algorithm we chose SHA-2-256, because we want to hash huge amounts of data in an efficent way and SHA-1 has proven weaknesses. Additionally, we decided to go with RSA and PKCS 1.5 respectively as public-key encryption technology because of the huge support and easy-to-use online validators of this signature.
# Sign Data
def sign_data (self,
PrivPublKeys = "YourKeys.json",
DataToBeHashed = "YourData.json",
DataPackageHashed = "YourHashedData.json"
)
The signed data provides a signature string that can be later verified with the real data and the on-Tangle stored signature. Hence we are able to identify corrupted or tempered data.
How to store our data packages
We decided to use AWS due to the fact that there are already existing, open source implementations of PoW for AWS Lambda which in turn simplifies the realization of this PoC. Our data is stored in a DynamoDB to deal with the huge amount of data for every workpiece in upcoming modifications of our PoC. In the future, every workpiece should generate only one MAM transaction on-Tangle and the data itself will be stored off-Tangle.
Stages of PoW
There are three possible solutions for PoW. We could perform the PoW directly on the machine (with an FPGA from Thomas Pototschnig (Microengineer)), we could use an EDGE server (on premise) located in the shop floor or we could use a cloud service. For the PoC we used the simple solution and calculated the PoW in AWS. In future enhancement we will move the calculation towards the machine, as an essential part of the tool. This will help to create a trustless environment without any third-party service that have the possibility to tamper the data.
Using AWS Lambda to do the PoW
We modified the IOTA API to use
curl.lib.js and therefore generate the PoW in one AWS Lambda function. Currently we are using DynamoDB, because it is easy to use and does the job at this point of the PoC. However, in order to parallelize and speed up PoW and to not loose the manufacturing order/history, we will be also using Amazon SQS streams in combination with the DynamoDB that can handle concurrent computing of PoW in an upcoming version. Out of the Amazon SQS stream any amount of AWS Lambda can read in parallel and do the heavy workload. This should push our transactions to the desired 2-4 transactions per second (TPS). However it should be possible to increase the amount of TPS to around 20 because the main bottleneck is the synchronously running AWS Lambda. This should enable us to handle even faster running machines and therefore be future-proof.
Attaching workpiece data to the tangle
Attaching the signature to the Tangle is pretty straight forward. The data stream will be read and then the index will be extracted. The index is used to push the data to the tangle. Afterwards the rest is handled by the MAM Library.
// Get the Data out of the SQS
event.Records.forEach((record) => {
data = JSON.parse(event.Records[0].messages[0].body)
})
// Get the index in which position of the MAM stream it should be
// placed
let index = parseInt(data.indx)
// Send the transaction via the MAM library and with local PoW
const root = await client.send(JSON.stringify(data), index - 1)
Coming up
The next step is to provide a web-based frontend that communicates with the tangle/cloud and allows our clients to decrypt the data.
Acknowledgement
I would like to thank everyone involved for their support. Especially the team from grandcentrix GmbH: Sascha Wolf (Product Owner), Christoph Herbert (Scrum Master), Thomas Furman (Backend Developer), and all gcx-reviewers and gcx-advisers; some testnet-nodes-operators, who were intensively used for above transactions: iotaledger.net; the team from WZL: Julian Bauer (Service Innovator), Semjon Becker (Design Engineer and Product Developer), Dimitrios Begnis (Frontend Developer), Henric Breuer (Machine Learning Engineer, Full-Stack Developer), Niklas Dahl (Frontend Developer), Björn Fink (Supply Chain Engineer), Muzaffer Hizel (Supply Chain Engineer and Business Model Innovator), Sascha Kamps (Data Engineer, Data Acquisition in Data Engineering and Systems Engineering), Maren Kranenberg (Cognitive Scientist), Felix Mönckemeyer (Backend Developer), Philipp Niemietz (PhD student, Computer Scientist), David Outsandji (Student assistant), Tobias Springer (Frontend Developer), Joachim Stanke (PhD student, Full-Stack Developer), Timo Thun (Backend Developer), Justus Ungerechts (Backend Developer), and Trutz Wrobel (Backend Developer), and WZL’s IT.
You have questions or want to join/contribute in any way? Ask for Daniel or write an E-Mail | Follow me on Twitter | Or check WZL’s webpage.
| https://medium.com/industrial-iota-lab-aachen-wzl-of-rwth-aachen/wzl-x-gcx-x-iota-status-report-02-8f79357757a2 | CC-MAIN-2018-34 | refinedweb | 1,400 | 53.81 |
Forecasting the market or the outcome of a gamble is important. Deciding how much to invest or bet, based on how confident you are in your prediction, is just as important. But don’t let the pressure get to you; the Kelly criterion is here to help us make this decision.
Betting with the Kelly criterion
Imagine you are invited to place bets on an indefinite sequence of coin tosses at fair odds (2:1, i.e. a winning bet returns twice the stake). Also imagine you have the opportunity to test the coin in advance, and it turns out to be slightly loaded, with \(P(head)=0.53\). Given this “inside information”, what strategy would you follow? Common sense tells us a couple of things:
- If the bookmaker (or bookie) is offering a payoff of 2:1 for both heads and tails, it means that he is assuming an implicit probability of 1/2 for both options. We should never bet on tails since the implicit probability is higher than the real one: \(P(tail)=0.47\).
- Given that only betting on heads makes sense, we should do it with caution and avoid betting our full bankroll, since we have a 47% probability of going bankrupt.
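These two observations can be made concrete with a couple of lines of Python (an illustrative sketch, not from the original post; here c:1 is taken as the total payoff, so the bookmaker’s implicit probability is \(1/c\) and the expected net profit of a unit bet at true probability \(p\) is \(pc-1\)):

```python
c = 2                       # total payoff per unit staked (2:1)
implicit_p = 1 / c          # probability the bookmaker's odds imply

p_heads, p_tails = 0.53, 0.47

# Expected net profit of a one-unit bet:
# receive c with probability p, lose the stake otherwise.
ev_heads = p_heads * c - 1  # ≈ +0.06: positive edge, worth betting
ev_tails = p_tails * c - 1  # ≈ -0.06: negative edge, never bet tails

print(implicit_p, ev_heads, ev_tails)
```

Only the bet with a positive edge (heads) survives this check, which is exactly what the two bullet points say.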
So, what fraction f of our wealth should we bet on each trial? Let’s do the maths.
Let \(g_t= X_{t} / X_{t-1} \) be the gain obtained after the t-th bet. A reasonable criterion would be to maximise the compound gain at the end of the sequence.
$$ G_{\infty} = \frac{X_{\infty}}{X_0} = \prod_{t=0}^{\infty} \frac{X_{t+1}}{X_t} = \prod_{t=1}^{\infty} g_t $$
Equivalently, we can take the logarithm to transform the product into a sum.
$$ \log G_{\infty} = \sum_{t=1}^{\infty} \log g_t $$
Let us assume the bet is a binary event that pays c:1, meaning a winning bet returns \(c\) units in total per unit staked, for a net gain of \(c-1\). Let us also assume we are certain that the probability of winning the bet is p. If we bet a fraction f of our wealth, the expected log-gain is given by:
$$ E[\log g] = p \log\left(1+(c-1)f\right) + (1-p)\log(1-f) $$
where we have removed the t from \(g_t\) since this expectation is the same for all trials (probabilities and payoffs are constant along time). To maximise \(G_{\infty}\), we can maximise this expectation. The problem boils down to finding the optimal fraction \(f^*\) for all bets. To do so, we merely use pre-school maths: we search for the point where the derivative of \(\log g\) w.r.t. \(f\) is null. The result is given by the expression (this is left as an exercise):
$$f^* = \begin{cases} \frac{pc-1}{c-1} & p>1/c \\ 0 & p \leq 1/c\end{cases} \tag{1}$$
This result is easy to interpret and agrees with the previous common sense statements: if \(p < 1/c\), the true probability of winning is lower than the implicit probability, so \(f^*=0\) (the odds don’t pay off the risk). On the other hand, if \(p=1\), we are absolutely certain about winning and should therefore bet our whole stack \(f^*=1\).
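Eq (1) is easy to turn into code. The following minimal Python sketch (an illustration, not part of the original post) implements it, taking c:1 as the total payoff so that a win nets \(c-1\) per unit staked, and double-checks the analytic optimum with a brute-force scan of the expected log-gain:

```python
import math

def kelly_fraction(p, c):
    """Optimal betting fraction from Eq. (1): win probability p, total payoff c:1."""
    if p <= 1 / c:                  # implicit probability is 1/c: odds don't pay off
        return 0.0
    return (p * c - 1) / (c - 1)

def expected_log_gain(f, p, c):
    """E[log g]: a win multiplies wealth by 1+(c-1)f, a loss by 1-f."""
    return p * math.log(1 + (c - 1) * f) + (1 - p) * math.log(1 - f)

p, c = 0.53, 2
print(kelly_fraction(p, c))         # ≈ 0.06, matching the text

# Sanity check: a brute-force scan over f should land on the same maximiser.
grid = [i / 1000 for i in range(1000)]          # f in {0.000, 0.001, ..., 0.999}
best = max(grid, key=lambda f: expected_log_gain(f, p, c))
print(best)                         # 0.06
```

The piecewise edge cases behave as described: \(p \leq 1/c\) yields a zero bet, and \(p=1\) yields the whole stack.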
In the example at hand, \(c=2\) and \(p=0.53\), so by applying Eq (1) we obtain \(f^*=0.06\). In other words, we should always bet 6% of our budget on heads no matter what, as long as the coin doesn’t change. In order to check this, let us perform a set of experiments where we flip the loaded coin thousands of times and bet the Kelly fraction in each trial. We include other values of \(f\) together with other fractions for comparison.
Here we can see that the Kelly fraction is indeed the one that maximises the long-term compound return.
The story behind the Kelly criterion
In 1948, the American mathematician Claude Shannon published A Mathematical Theory of Communication: one of the most influential papers of the 20th century. Eight years later, John Kelly, who was Shannon’s colleague at Bell Labs, wrote A New Interpretation of Information Rate: the paper that introduced the Kelly criterion and gave Shannon’s information theory another meaning from the perspective of gambling. After moving to MIT, Shannon met Edward Thorp, to whom she introduced the Kelly criterion. The excellent book, “Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street” by W. Poundstone tells the story of Shannon and Thorp’s trip to Las Vegas, where they tested a winning method for Blackjack and measured the bias of roulette with a hidden, portable computer (in the sixties!). Soon after that, Thorp moved focused his interests in the stock market and became one of the most successful hedge fund managers ever (and perhaps the first quant to deserve the name). In his 1998 paper “The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market“, he wrote:
It is now May, 1998, twenty eight and a half years since the investment program began. The partnership and its continuations have compounded at approximately 20% annually with a standard deviation of about 6% and approximately zero correlation with the market. Ten thousand dollars, tax-exempt, would now be worth 18 million dollars.
One of the reasons why Ed. Thorp had such great success is that he was using (and profiting from) the Black-Scholes equation three years before Fischer Black and Myron Scholes published it. Another reason is that he systematically applied the Kelly criterion to the stock market. He outlines some clues as to how he went about this in his paper, “The Kelly Criterion and the Stock Market”, which we summarise in the following.
The stock market
We have previously studied the case of gambling with discrete outcomes. But what about a game, such as the stock market, where the outcomes are continuous? In this case, the expectation is given by an integral instead of a summation:
$$ E\{\log g\} = \int \log(1 + fr) P(r) dr \tag{2} $$
where r is the excess return of the asset in which we’d like to invest (the return minus the Treasury bill’s or another risk-free reference). This return is distributed with \(P(r)\). Again, the optimal fraction is the one that makes the derivative null:
$$ \frac{d}{d f} E\{\log g\} = \int_{-\infty}^{+\infty} \frac{r}{1+fr} p(r) dr = 0 \tag{3}$$
Ok, I admit that it takes a bit more than pre-school maths to solve the problem. This is a good excuse to introduce some nice tools for numerical computation in Python with the Scipy package!
There are two options here:
1. Create a function that computes the integral in Eq. (2) and maximise the function w.r.t. f :
$$ f^* = \arg \max_f E\{\log g\} $$
2. Create a function that computes the derivative of the integral as in Eq. (3) and use a numerical solver to find a zero.
$$ \frac{d}{d f} E\{\log g\}_{f=f^*}=0$$
Thorp focuses on annual returns and suggests modeling \(P(r)\) as a normal distribution truncated at \(\pm 3\sigma\). The reported statistics for the 1926-1984 period are \(\mu = 0.058\) and \(\sigma = 0.216\). Here is the Python snippet that enables us to solve the problem:
from scipy.optimize import minimize_scalar, newton, minimize from scipy.integrate import quad from scipy.stats import norm def norm_integral(f,m,st): val,er = quad(lambda s: np.log(1+f*s)*norm.pdf(s,m,st),m-3*st,m+3*st) return -val def norm_dev_integral(f,m,st): val,er = quad(lambda s: (s/(1+f*s))*norm.pdf(s,m,st),m-3*st,m+3*st) return val # Reference values from Eduard Thorp's article m = .058 s = .216 # Option 1: minimize the expectation integral sol = minimize_scalar(norm_integral,args=(m,s),bounds=[0.,2.],method='bounded') print('Optimal Kelly fraction: {:.4f}'.format(sol.x)) # Option 2: take the derivative of the expectation and make it null x0 = newton(norm_dev_integral,.1,args=(m,s)) print('Optimal Kelly fraction: {:.4f}'.format(x0))
The result is \(f = 1.197\), which is slightly different from the value reported in the article (\(1.17\)). This is due to the fact that we haven’t considered the normalisation needed to account for the fact that the normal distribution has been truncated. So the results of this experiment ultimately recommend leveraged investing in the S&P 500.
Here arises an interesting question: how would the strategy (i.e. this particular fraction) have performed in the last years?
In an initial experiment, we assume that borrowing money is cheap, at the official interest rate at 1 month. The result of applying the previous statistics to the period 1993-2016 is shown in the Figure:
Note that under the unrealistic assumption that money is cheap, the higher the leverage (even above the Kelly fraction), the better. Let’s see what happens when we apply a margin of 2.5% to the official interest rate.
In this case, \( f=1.19\) obtains the best result in terms of return and risk-adjusted return. Also, due to the high cost of borrowing money, profits are not so spectacular and leveraging beyond the Kelly fraction is not such a good idea. | https://quantdare.com/kelly-criterion/ | CC-MAIN-2019-18 | refinedweb | 1,538 | 62.68 |
I've done this before, I even tore code out of a previous project that I know to work, but for some reason, it does not.. I get:
NullReferenceException: Object reference not set to an instance of an object
Regardless of what I've tried.
void Update ()
{
if (Input.GetKey (KeyCode.Mouse1) && Time.time > fireRate)
{
fireRate = Time.time + 1.5f;
Rigidbody2D bladeClone;
bladeClone = Instantiate(blade, transform.position, transform.rotation) as Rigidbody2D;
bladeClone.transform.parent = transform;
}
}
}
I have no idea why it doesn't work in one of the multiple iterations i've tried.
I've stored the transform as a variable in both the object to which the script is attached AND in much more complicated fashion; through a script on the object that is being instantiated itself, among other things.
No variation works, neither GameObject.Find, Storing the transform as a local or even global variable.
If I press Mouse1 and pause after the object appears and MANUALLY parent it to the object in question, nothing bad happens, and I get the exact result I would've expected the code to produce.
I'm at a loss. Code that used to work in previous projects has also broken for seemingly no reason.
I've been beating my head against this for hours now.
What am I missing here?
Answer by Luk3ling
·
Apr 13, 2015 at 02:03 AM
I eventually tried a new approach via tagging and searching from the instantiated object by attaching the following script to the prefab:
using UnityEngine;
using System.Collections;
public class basicBldTest : MonoBehaviour {
Transform parent;
void Start () {
parent = GameObject.FindGameObjectWithTag ("MeleePivot").transform;
this.gameObject.transform.parent = parent;
}
}
I shouldn't have to have done this, however, so the whole thing still irks Instantiate Prefab as Child of Player OnTriggerEnter
0
Answers
Make a simple tree
1
Answer
Spawn Shield to all objects
2
Answers
Instantiate as children of GameObject hit by raycast.
2
Answers
Give prefab a parent
1
Answer | https://answers.unity.com/questions/945915/instantiate-prefab-as-child-wont-work-dont-know-wh.html | CC-MAIN-2019-51 | refinedweb | 327 | 55.44 |
refer the link, it explan u and difference
Hi,
Here you can get the nice article
Although there are differences between Visual Basic .NET and Visual C# .NET, both are first-class programming languages that are based on the Microsoft .NET Framework, and they are equally powerful., the differences between Visual Basic .NET and Visual C# .NET are very small compared to what they were in earlier versions.
You have lot of articles to find out the differences b/w VB.NET & C#.NET depending on various aspects. Iam providing a few, which were discussed in different aspects.
Go through them.
some key syntactical differences between VB.NET (version 2) and C#.
Choosing between C# and VB.NET:
Complete Comparison for VB.NET and C#:
The differences between C# and VB.NET:
Hope this provides a good idea over the difference between VB.NET & C#.NET.
All the Best..!!
Rakesh Virk. view this article
Check this:
Differences Between Microsoft Visual Basic .NET and Microsoft Visual C# .NET
Don’t forget to do the download ;)
couple of links from where you can get this info,
C# and VB.NET Comparison Cheat Sheet
Hello Matli
This is just a informative Microsoft first came up with the ASP which was never been popular and have many lacks in debugging and other functionalities so they decided up to get one platform which will give support to both favourite language.VB and C#
The basic difference in both of them is they are very much stick to their originalities though being in the .net platform means VB.NET still provides lots intellisense as comapre to C# where C# provides a powerful OOPS as to VB.NET
There are a lot difference in syntax in datatypes,functions,operators,namespaces,strings,arrays,loopsOut of which I will give you now Classes and Interface diff between them
You can get all the details by refering the Link Below.
VB.NET C#.NET
Accessibility keywords PublicPrivateFriend ProtectedProtected FriendShared
' InheritanceClass FootballGame Inherits Competition ...End Class
' Interface definitionInterface IAlarmClock ...End Interface
// Extending an interface Interface IAlarmClock Inherits IClock ...End Interface
// Interface implementationClass WristWatch Implements IAlarmClock, ITimer ...End Class Please refer this link
Accessibility keywords publicprivateinternalprotectedprotected internalstatic
// Inheritanceclass FootballGame : Competition { ...}
// Interface definitioninterface IAlarmClock { ...}
// Extending an interface interface IAlarmClock : IClock { ...}
// Interface implementationclass WristWatch : IAlarmClock, ITimer { ...}
Happy Coding takecare
You Can download the white papaer here:
What are the advantages of C# over VB.NET and vice versa?:
VB.NET Advantages
C# Advantages
I just got the above from a blog and seems to be good!
VB.NET
C#
Microsoft.VisualBasic
with
Catch
When
using
For differences of Kyeword, Data types, Operator etc just go thr this link;
Also just go thr this link for more info;
Best Luck!!!!!!!!!!Sujit. | http://www.eggheadcafe.com/community/csharp/2/10038670/diffrence.aspx | crawl-003 | refinedweb | 456 | 51.85 |
Results 1 to 8 of 8
Hi all, I'm reading about linux networking architecture. Now I want to test some kernel APIs in manipulating socket buffers so I write a very small program to test the ...
Test Kernel API
I'm reading about linux networking architecture. Now I want to test some kernel APIs in manipulating socket buffers so I write a very small program
to test the functions in skbuff.h file. First, I try the alloc_skb:
Code:
#include <linux/skbuff.h> #include <socket.h> int main(){ sk_buff socketbuff; socketbuff = alloc_skb(12, GFP_DMA); return 0; }
Code:
gcc -o test testskbuff.c or gcc -o test testskbuff.c -l /usr/include/linux
Code:
In file included from /usr/include/linux/sched.h:14, from /usr/include/linux/skbuff.h:19, from testskbuff.c:1: /usr/include/linux/timex.h:173: field `time' has incomplete type In file included from /usr/include/linux/bitops.h:69, from /usr/include/asm/system.h:7, from /usr/include/linux/sched.h:16, from /usr/include/linux/skbuff.h:19, from testskbuff.c:1: /usr/include/asm/bitops.h:327:2: warning: #warning This includefile is not available on all architectures. /usr/include/asm/bitops.h:328:2: warning: #warning Using kernel headers in userspace: atomicity not guaranteed In file included from /usr/include/linux/signal.h:4, from /usr/include/linux/sched.h:25, from /usr/include/linux/skbuff.h:19, from testskbuff.c:1: /usr/include/asm/signal.h:107: syntax error before "sigset_t" /usr/include/asm/signal.h:110: syntax error before '}' token In file included from /usr/include/linux/sched.h:81, from /usr/include/linux/skbuff.h:19, from testskbuff.c:1: /usr/include/linux/timer.h:32: field `vec' has incomplete type /usr/include/linux/timer.h:37: field `vec' has incomplete type /usr/include/linux/timer.h:45: syntax error before "spinlock_t" /usr/include/linux/timer.h:53: syntax error before '}' token /usr/include/linux/timer.h:63: field `list' has incomplete type /usr/include/linux/timer.h:67: syntax error before "tvec_base_t" /usr/include/linux/timer.h:101: syntax error before "tvec_bases" /usr/include/linux/timer.h: In function `init_timer': ....
.
So can you explain me what happen and tell me what to do to run this simple program?
Are you trying to write a module? I'm not completly sure, but if you are going to use the kernel API, maybe you need the kernel.h include.
Best regards
Hi, fernape,
I don't intend to write a module (uhm, it's so complex for my purpose), i just want to test some kernel APIs. I've just added the kernel.h to my program but nothing changed
. It's the first time I test kernel APIs in this way so I'm not sure what I have to do!
I think you can not use kernel API from an userland program. Your program will be linked against the libc (as other programs did) but you need some recursive definitions for the kernel headers (that amount of symbol blah blah blah not defined...)
The other problems are related to the API itself. sk_buff is NOT a type, it is a structure, so you should declare it as: struct sk_buff my_sk_buff. More, alloc_skb returns a pointer, not a structure... read the API again. This compiles:
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/module.h>
int algo()
{
struct sk_buff *socketbuff;
socketbuff = alloc_skb(12, GFP_DMA);
return 0;
}
But this does not perform anything... this is almost a module... you should use module_init and module_exit.
Maybe you want to read this:
Best regards
Hi fernape, sorry for late responding.
After a time finding solution, I think that you were right! I haven't got any idea for programming using kernel functions without using module. So I decided to start module programming.
Thank you very much for your help. It's very helpful to me.
Hihi, I've got that document. And here's my first problem in the world of kernel module:
Hope have help.
You need to install your kernel sources.
Best regards | http://www.linuxforums.org/forum/kernel/59837-test-kernel-api.html | CC-MAIN-2014-15 | refinedweb | 688 | 62.64 |
This current version of this document is available in the file doc/developers/HACKING.txt in the source tree, or at
See also: Bazaar Developer Documentation Catalog.
Before making changes, it's a good idea to explore the work already done by others. Perhaps the new feature or improvement you're looking for is available in another plug-in already? If you find a bug, perhaps someone else has already fixed it?
To answer these questions and more, take a moment to explore the overall Bazaar Platform. Here are some links to browse:
If nothing else, perhaps you'll find inspiration in how other developers have solved their challenges.
There is a very active community around Bazaar. Mostly we meet on IRC (#bzr on irc.freenode.net) and on the mailing list. To join the Bazaar community, see.
If you are planning to make a change, it's a very good idea to mention it on the IRC channel and/or on the mailing list. There are many advantages to involving the community before you spend much time on a change. These include:
In summary, maximising the input from others typically minimises the total effort required to get your changes merged. The community is friendly, helpful and always keen to welcome newcomers.
Looking for a 10 minute introduction to submitting a change? See.
TODO: Merge that Wiki page into this document.
The development team follows many practices including:
The key tools we use to enable these practices are:
For further information, see.
Bazaar supports many ways of organising your work. See for a summary of the popular alternatives.
Of course, the best choice for you will depend on numerous factors: the number of changes you may be making, the complexity of the changes, etc. As a starting suggestion though:
create a local copy of the main development branch (bzr.dev) by using this command:
bzr branch bzr.dev
keep your copy of bzr.dev pristine (by not developing in it) and keep it up to date (by using bzr pull)
create a new branch off your local bzr.dev copy for each issue (bug or feature) you are working on.
This approach makes it easy to go back and make any required changes after a code review. Resubmitting the change is then simple, with no risk of unrelated edits creeping in.

Bazaar Architectural Overview
If you'd like to propose a change, please post to the bazaar@lists.canonical.com list with a bundle, patch, or link to a branch. Put [PATCH] or [MERGE] in the subject so Bundle Buggy can pick it out, and explain the change in the email message text. Remember to update the NEWS file as part of your change if it makes any changes visible to users or plugin developers. Please include a diff against mainline if you're giving a link to a branch.
You can generate a bundle like this::

  bzr bundle > mybundle.patch
Please put a "cover letter" on your merge request explaining what the change does and why. The core developers take care to keep the code quality high and understandable while recognising that perfect is sometimes the enemy of good.
It is easy for reviews to make people notice other things which should be fixed but those things should not hold up the original fix being accepted. New things can easily be recorded in the Bug Tracker instead.
It's normally much easier to review several smaller patches than one large one. You might want to use bzr-loom to maintain threads of related work, or submit a preparatory patch that will make your "real" change easier.
Anyone can "vote" on the mailing list by expressing an opinion. Core developers can also vote using Bundle Buggy. Here are the voting codes and their explanations.
If a change gets two approvals from core reviewers, and no rejections, then it's OK to come in. Any of the core developers can bring it into the bzr.dev trunk and backport it to maintenance branches if required. The Release Manager will merge the change into the branch for a pending release, if any. As a guideline, core developers usually merge their own changes and volunteer to merge other contributions if they were the second reviewer to agree to a change.
To track the progress of proposed changes, use Bundle Buggy. See for a link to all the outstanding merge requests together with an explanation of the columns. Bundle Buggy will also mail you a link to track just your change.
hasattr should not be used because it swallows exceptions including KeyboardInterrupt. Instead, say something like
if getattr(thing, 'name', None) is None
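As a minimal illustration (the class here is hypothetical), the getattr form degrades gracefully when the attribute is missing, instead of relying on hasattr's exception swallowing:

```python
class Config(object):
    """Hypothetical object with an optional 'name' attribute."""

    def __init__(self, name=None):
        if name is not None:
            self.name = name


def display_name(config):
    # getattr returns the default instead of raising AttributeError;
    # unlike hasattr (which on older Pythons swallowed any exception,
    # including KeyboardInterrupt), nothing is silently discarded.
    name = getattr(config, 'name', None)
    if name is None:
        return '(unnamed)'
    return name


print(display_name(Config()))         # no 'name' attribute -> (unnamed)
print(display_name(Config('trunk')))  # -> trunk
```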
Please write PEP-8 compliant code.
One often-missed requirement is that the first line of docstrings should be a self-contained one-sentence summary.
We use 4 space indents for blocks, and never use tab characters. (In vim, set expandtab.)
Lines should be no more than 79 characters if at all possible. Lines that continue a long statement may be indented in either of two ways:
within the parenthesis or other character that opens the block, e.g.::

    my_long_method(arg1,
                   arg2,
                   arg3)

or indented by four spaces::

    my_long_method(arg1,
        arg2,
        arg3)
The first is considered clearer by some people; however it can be a bit harder to maintain (e.g. when the method name changes), and it does not work well if the relevant parenthesis is already far to the right. Avoid this::

    self.legbone.kneebone.shinbone.toebone.shake_it(one,
                                                    two,
                                                    three)

but rather ::

    self.legbone.kneebone.shinbone.toebone.shake_it(one, two, three)

or ::

    self.legbone.kneebone.shinbone.toebone.shake_it(
        one, two, three)
For long lists, we like to add a trailing comma and put the closing character on the following line. This makes it easier to add new items in future::

    from bzrlib.goo import (
        jam,
        jelly,
        marmalade,
        )
There should be spaces between function paramaters, but not between the keyword name and the value:
call(1, 3, cheese=quark)
In emacs::

    ;(defface my-invalid-face
    ;  '((t (:background "Red" :underline t)))
    ;  "Face used to highlight invalid constructs or other uglyties"
    ;  )

    (defun my-python-mode-hook ()
      ;; setup preferred indentation style.
      (setq fill-column 79)
      (setq indent-tabs-mode nil) ; no tabs, never, I will not repeat
    ; (font-lock-add-keywords 'python-mode
    ;                         '(("^\\s *\t" . 'my-invalid-face) ; Leading tabs
    ;                           ("[ \t]+$" . 'my-invalid-face)  ; Trailing spaces
    ;                           ("^[ \t]+$" . 'my-invalid-face)); Spaces only
    ;                         )
      )

    (add-hook 'python-mode-hook 'my-python-mode-hook)
The lines beginning with ';' are commented out.

Naming

Prefer class names to be concatenated capital words (TestCase) and variables, methods and functions to be lowercase words joined by underscores (revision_id, get_revision).
For the purposes of naming some names are treated as single compound words: "filename", "revno".
Consider naming classes as nouns and functions/methods as verbs.
Try to avoid using abbreviations in names, because there can be inconsistency if other people use the full name.
revision_id not rev_id or revid
Functions that transform one thing to another should be named x_to_y (not x2y as occurs in some old code.)
Python destructors (__del__) work differently to those of other languages. In particular, bear in mind that destructors may be called immediately when the object apparently becomes unreferenced, or at some later time, or possibly never at all. Therefore we have restrictions on what can be done inside them.
- If you think you need to use a __del__ method ask another developer for alternatives. If you do need to use one, explain why in a comment.
- Never rely on a __del__ method running. If there is code that must run, do it from a finally block instead.
- Never import from inside a __del__ method, or you may crash the interpreter!!
- In some places we raise a warning from the destructor if the object has not been cleaned up or closed. This is considered OK: the warning may not catch every case but it's still useful sometimes.
In some places we have variables which point to callables that construct new instances. That is to say, they can be used a lot like class objects, but they shouldn't be named like classes:
> I think that things named FooBar should create instances of FooBar when
> called. Its plain confusing for them to do otherwise. When we have
> something that is going to be used as a class - that is, checked for via
> isinstance or other such idioms, them I would call it foo_class, so that
> it is clear that a callable is not sufficient. If it is only used as a
> factory, then yes, foo_factory is what I would use.
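A small sketch of this naming convention (all names hypothetical):

```python
class Branch(object):
    """Hypothetical class standing in for a real bzrlib type."""


def _make_branch():
    return Branch()


# Used only to construct instances, so it is named like a factory
# rather than like a class; if callers needed isinstance() checks,
# a real class object named branch_class would be required instead.
branch_factory = _make_branch

branch = branch_factory()
print(isinstance(branch, Branch))  # -> True
```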
Several places in Bazaar use (or will use) a registry, which is a mapping from names to objects or classes. The registry allows for loading in registered code only when it's needed, and keeping associated information such as a help string or description.
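The sketch below is a simplified, hypothetical registry — not the real bzrlib.registry API — showing a name-to-object mapping with help strings and lazy loading of registered code:

```python
class SimpleRegistry(object):
    """Toy name -> object mapping with help strings.

    Illustrative sketch only; the real implementation lives in
    bzrlib.registry and handles more cases.
    """

    def __init__(self):
        self._objects = {}
        self._lazy = {}
        self._help = {}

    def register(self, name, obj, help=''):
        self._objects[name] = obj
        self._help[name] = help

    def register_lazy(self, name, module_name, member_name, help=''):
        # The module is not imported until the name is first requested.
        self._lazy[name] = (module_name, member_name)
        self._help[name] = help

    def get(self, name):
        if name in self._lazy:
            module_name, member_name = self._lazy.pop(name)
            module = __import__(module_name, fromlist=[member_name])
            self._objects[name] = getattr(module, member_name)
        return self._objects[name]

    def get_help(self, name):
        return self._help[name]


formats = SimpleRegistry()
formats.register_lazy('join', 'os.path', 'join', help='Join path parts.')
print(formats.get_help('join'))   # help is available without importing
print(formats.get('join')('a', 'b'))
```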
To make startup time faster, we use the bzrlib.lazy_import module to delay importing modules until they are actually used. lazy_import uses the same syntax as regular python imports. So to import a few modules in a lazy fashion do::

    from bzrlib.lazy_import import lazy_import
    lazy_import(globals(), """
    import os
    import subprocess
    import sys
    import time

    from bzrlib import (
        errors,
        transport,
        revision as _mod_revision,
        )
    import bzrlib.transport
    import bzrlib.xml5
    """)
At this point, all of these exist as a ImportReplacer object, ready to be imported once a member is accessed. Also, when importing a module into the local namespace, which is likely to clash with variable names, it is recommended to prefix it as _mod_<module>. This makes it clearer that the variable is a module, and these object should be hidden anyway, since they shouldn't be imported into other namespaces.
While it is possible for lazy_import() to import members of a module when using the from module import member syntax, it is recommended to only use that syntax to load sub modules from module import submodule. This is because variables and classes can frequently be used without needing a sub-member, for example::

    lazy_import(globals(), """
    from module import MyClass
    """)

    def test(x):
        return isinstance(x, MyClass)
This will incorrectly fail, because MyClass is a ImportReplacer object, rather than the real class.
It also is incorrect to assign ImportReplacer objects to other variables. Because the replacer only knows about the original name, it is unable to replace other variables. The ImportReplacer class will raise an IllegalUseOfScopeReplacer exception if it can figure out that this happened. But it requires accessing a member more than once from the new variable, so some bugs are not detected right away.
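The safe pattern can be sketched without bzrlib at all: lazy-import whole modules and dereference members at call time, never capturing the replacer in another variable. Here a plain import stands in for lazy_import:

```python
# Sketch of the recommended pattern.  Under bzrlib.lazy_import the
# line below would be:  lazy_import(globals(), "import json")
import json


def dump_compact(obj):
    # Accessing json.dumps at call time is what triggers the real
    # import under lazy_import.  Assigning json.dumps to a
    # module-level variable at import time would capture the
    # replacer object and defeat (or break) the laziness.
    return json.dumps(obj, separators=(',', ':'))


print(dump_compact({'a': 1}))  # -> {"a":1}
```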
The null revision is the ancestor of all revisions. Its revno is 0, its revision-id is null:, and its tree is the empty tree. When referring to the null revision, please use bzrlib.revision.NULL_REVISION. Old code sometimes uses None for the null revision, but this practice is being phased out.
All code should be exercised by the test suite. See Guide to Testing Bazaar for detailed information about writing tests.

When removing or changing a public API, leave a 'deprecated forwarder' behind. This even applies to modules and classes.
If you wish to change the behaviour of a supported API in an incompatible way, you need to change its name as well. For instance, if I add an optional keyword parameter to branch.commit - that's fine. On the other hand, if I add a keyword parameter to branch.commit which is a required transaction object, I should rename the API - i.e. to 'branch.commit_transaction'.
When renaming such supported API's, be sure to leave a deprecated_method (or _function or ...) behind which forwards to the new API. See the bzrlib.symbol_versioning module for decorators that take care of the details for you - such as updating the docstring, and issuing a warning when the old api is used.
For unsupported API's, it does not hurt to follow this discipline, but it's not required. Minimally though, please try to rename things so that callers will at least get an AttributeError rather than weird results. For example::

    @deprecated_function(...)
    def create_repository(base, shared=False, format=None):
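The general shape of a deprecated forwarder can be sketched with the plain warnings module (function names are hypothetical; the real decorators live in bzrlib.symbol_versioning):

```python
import warnings


def commit_transaction(message, transaction):
    """New API: commit with an explicit transaction object."""
    return 'committed %r in %r' % (message, transaction)


def commit(message):
    """Deprecated forwarder kept so existing callers still work."""
    warnings.warn('commit is deprecated; use commit_transaction',
                  DeprecationWarning, stacklevel=2)
    return commit_transaction(message, transaction='default')


# Calling the old name still works, but emits a deprecation warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = commit('fix typo')

print(result)
print(caught[0].category.__name__)  # -> DeprecationWarning
```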
When you deprecate an API, you should not just delete its tests, because then we might introduce bugs in them. If the API is still present at all, it should still work. The basic approach is to use TestCase.applyDeprecated which in one step checks that the API gives the expected deprecation message, and also returns the real result from the method, so that tests can keep running.
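A generic sketch of such a test, using plain unittest and the warnings module rather than the real applyDeprecated helper (all names hypothetical):

```python
import unittest
import warnings


def old_api():
    warnings.warn('old_api is deprecated', DeprecationWarning,
                  stacklevel=2)
    return 42


class TestDeprecatedAPI(unittest.TestCase):

    def test_old_api_still_works(self):
        # Check both the deprecation message and the real return
        # value, so the API keeps working for as long as it exists.
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter('always')
            value = old_api()
        self.assertEqual(42, value)
        self.assertEqual(1, len(caught))
        self.assertIn('deprecated', str(caught[0].message))


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDeprecatedAPI)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```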
bzrlib has a standard framework for parsing command lines and calling processing routines associated with various commands. See builtins.py for numerous examples.
There are some common requirements in the library: some parameters need to be unicode safe, some need byte strings, and so on. At the moment we have only codified one specific pattern: Parameters that need to be unicode should be checked via bzrlib.osutils.safe_unicode. This will coerce the input into unicode in a consistent fashion, allowing trivial strings to be used for programmer convenience, but not performing unpredictably in the presence of different locales.
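A minimal sketch of such a coercion helper (illustrative only — the real one is bzrlib.osutils.safe_unicode and handles more cases):

```python
def safe_text(value, encoding='utf-8'):
    """Coerce value to text, decoding byte strings predictably.

    Hypothetical helper showing the idea: trivial strings are
    convenient for programmers, but coercion must not depend
    unpredictably on the locale, so the encoding is explicit.
    """
    if isinstance(value, bytes):
        return value.decode(encoding)
    if isinstance(value, str):
        return value
    raise TypeError('expected text or bytes, got %r' % type(value))


print(safe_text(b'README'))     # -> README
print(safe_text(u'caf\xe9'))    # already text, passed through
```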
(The strategy described here is what we want to get to, but it's not consistently followed in the code at the moment.)
bzrlib is intended to be a generically reusable library. It shouldn't write messages to stdout or stderr, because some programs that use it might want to display that information through a GUI or some other mechanism.
We can distinguish two types of output from the library:
1. Structured data representing the progress or result of an operation. For example, for a commit command this will be a list of the modified files and the finally committed revision number and id.

   These should be exposed either through the return code or by calls to a callback parameter.

   A special case of this is progress indicators for long-lived operations, where the caller should pass a ProgressBar object.

2. Unstructured log/debug messages, mostly for the benefit of the developers or users trying to debug problems. This should always be sent through bzrlib.trace and Python logging, so that it can be redirected by the client.
The distinction between the two is a bit subjective, but in general if there is any chance that a library would want to see something as structured data, we should make it so.
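A sketch of the first style: a hypothetical library operation that reports structured progress through a callback and returns structured data, instead of printing to stdout:

```python
def commit_changes(files, report=None):
    """Hypothetical library operation (not a real bzrlib API).

    Structured progress goes through the optional report callback
    and the result is returned as data, so a command-line tool, a
    GUI, or any other frontend can present it however it likes.
    """
    committed = []
    for name in files:
        committed.append(name)
        if report is not None:
            report(name)
    return {'revno': 1, 'files': committed}


# A frontend decides how to display the structured output.
messages = []
result = commit_changes(['a.txt', 'b.txt'], report=messages.append)
print(result['revno'], result['files'], messages)
```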
The policy about how output is presented in the text-mode client should be only in the command-line tool.
Bazaar has online help for various topics through bzr help COMMAND or equivalently bzr command -h. We also have help on command options, and on other help topics. (See help_topics.py.)
As for python docstrings, the first paragraph should be a single-sentence synopsis of the command.
The help for options should be one or more proper sentences, starting with a capital letter and finishing with a full stop (period).
All help messages and documentation should have two spaces between sentences.
Commands should return non-zero when they encounter circumstances that the user should really pay attention to - which includes trivial shell pipelines.
Recommended values are:
0. OK.
1. Conflicts in merge-like operations, or changes are present in diff-like operations.
2. Unrepresentable diff changes (i.e. binary files that we cannot show a diff of).
3. An error or exception has occurred.
4. An internal error occurred (one that shows a traceback.)
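Assuming the values are assigned in the order listed (0 through 4), a command dispatcher might map outcomes to exit codes like this hypothetical sketch:

```python
# Hypothetical sketch; bzrlib's real dispatch lives elsewhere.
EXIT_OK = 0
EXIT_CHANGES = 1
EXIT_ERROR = 3
EXIT_INTERNAL_ERROR = 4


class CommandError(Exception):
    """A user-level error the command knows how to explain."""


def fail_with_user_error():
    raise CommandError('no such file')


def run_command(func):
    try:
        changed = func()
    except CommandError:
        # Expected user-facing failure: brief message, code 3.
        return EXIT_ERROR
    except Exception:
        # Unexpected: would show a traceback and invite a bug report.
        return EXIT_INTERNAL_ERROR
    return EXIT_CHANGES if changed else EXIT_OK


print(run_command(lambda: False))        # -> 0
print(run_command(lambda: True))         # -> 1
print(run_command(fail_with_user_error)) # -> 3
```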
Errors are handled through Python exceptions. Exceptions should be defined inside bzrlib.errors, so that we can see the whole tree at a glance.
We broadly classify errors as either being either internal or not, depending on whether internal_error is set or not. If we think it's our fault, we show a backtrace, an invitation to report the bug, and possibly other details. This is the default for errors that aren't specifically recognized as being caused by a user error. Otherwise we show a briefer message, unless -Derror was given.
Many errors originate as "environmental errors" which are raised by Python or builtin libraries -- for example IOError. These are treated as being our fault, unless they're caught in a particular tight scope where we know that they indicate a user errors. For example if the repository format is not found, the user probably gave the wrong path or URL. But if one of the files inside the repository is not found, then it's our fault -- either there's a bug in bzr, or something complicated has gone wrong in the environment that means one internal file was deleted.
Many errors are defined in bzrlib/errors.py but it's OK for new errors to be added near the place where they are used.
Exceptions are formatted for the user by conversion to a string (eventually calling their __str__ method.) As a convenience the ._fmt member can be used as a template which will be mapped to the error's instance dict.
New exception classes should be defined when callers might want to catch that exception specifically, or when it needs a substantially different format string.
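A simplified sketch of the _fmt convention (the real base class is in bzrlib.errors; this version only shows how _fmt is mapped to the error's instance dict):

```python
class SketchError(Exception):
    """Simplified stand-in for the bzrlib.errors base class."""

    _fmt = 'An unexpected error occurred'

    # Internal errors get a traceback and a bug-report invitation;
    # user errors get a brief message.
    internal_error = True

    def __init__(self, **kwargs):
        # Keyword arguments land in the instance dict, which is
        # then used as the mapping for the _fmt template.
        self.__dict__.update(kwargs)

    def __str__(self):
        return self._fmt % self.__dict__


class NoSuchFile(SketchError):

    _fmt = 'No such file: %(path)s'

    internal_error = False


print(str(NoSuchFile(path='foo/bar.txt')))  # -> No such file: foo/bar.txt
```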
Exception strings should start with a capital letter and should not have a final full stop. If long, they may contain newlines to break the text.
When you change bzrlib, please update the relevant documentation for the change you made: changes to commands should update their help, and possibly end user tutorials; changes to the core library should be reflected in API documentation. Within each release, NEWS entries should put the most user-visible changes first, roughly in this order:
- changes to existing behaviour - the highest priority because the user's existing knowledge is incorrect
- new features - should be brought to their attention
- bug fixes - may be of interest if the bug was affecting them, and should include the bug number if any
- major documentation changes
- changes to internal interfaces
People who made significant contributions to each change are listed in parentheses. This can include reporting bugs (particularly with good details or reproduction recipes), submitting patches, etc.
The docstring of a command is used by bzr help to generate help output for the command. The 'takes_options' list attribute on a command is used by bzr help to document the options for the command - the command docstring does not need to document them.
The copyright policy for bzr was recently made clear in this email (edited for grammatical correctness):
The attached patch cleans up the copyright and license statements in the bzr source. It also adds tests to help us remember to add them with the correct text. We had the problem that lots of our files were "Copyright Canonical Development Ltd" which is not a real company, and some other variations on this theme. Also, some files were missing the GPL statements. I want to be clear about the intent of this patch, since copyright can be a little controversial.
1) The big motivation for this is not to shut out the community, but just to clean up all of the invalid copyright statements.
2) It has been the general policy for bzr that we want a single copyright holder for all of the core code. This is following the model set by the FSF, which makes it easier to update the code to a new license in case problems are encountered. (For example, if we want to upgrade the project universally to GPL v3 it is much simpler if there is a single copyright holder.) It also makes it clearer if copyright is ever debated, there is a single holder, which makes it easier to defend in court, etc. (I think the FSF position is that if you assign them copyright, they can defend it in court rather than you needing to, and I'm sure Canonical would do the same.) As such, Canonical has requested copyright assignments from all of the major contributors.
3) If someone wants to add code and not attribute it to Canonical, there is a specific list of files that are excluded from this check. And the test failure indicates where that is, and how to update it.
4) If anyone feels that I changed a copyright statement incorrectly, just let me know, and I'll be happy to correct it. Whenever you have large mechanical changes like this, it is possible to make some mistakes.
Just to reiterate, this is a community project, and it is meant to stay that way. Core bzr code is copyright Canonical for legal reasons, and the tests are just there to help us maintain that.
Bazaar has a few facilities to help debug problems by going into pdb, the Python debugger.
If the BZR_PDB environment variable is set then bzr will go into pdb post-mortem mode when an unhandled exception occurs.
If you send a SIGQUIT signal to bzr (for example by pressing Ctrl-\ on Unix), it will drop into the debugger immediately.
This section discusses various techniques that Bazaar uses to handle characters that are outside the ASCII set.
When a Command object is created, it is given a member variable accessible by self.outf. This is a file-like object, which is bound to sys.stdout, and should be used to write information to the screen, rather than directly writing to sys.stdout or calling print. This file has the ability to translate Unicode objects into the correct representation, based on the console encoding. Also, the class attribute encoding_type will affect how unprintable characters will be handled. This parameter can take one of 3 values:
- replace
- Unprintable characters will be represented with a suitable replacement marker (typically '?'), and no exception will be raised. This is for any command which generates text for the user to review, rather than for automated processing. For example: bzr log should not fail if one of the entries has text that cannot be displayed.
- strict
- Attempting to print an unprintable character will cause a UnicodeError. This is for commands that are intended more as scripting support, rather than plain user review. For example: bzr ls is designed to be used with shell scripting. One use would be bzr ls --null --unknown | xargs -0 rm. If bzr printed a filename with a '?', the wrong file could be deleted. (At the very least, the correct file would not be deleted). An error is used to indicate that the requested action could not be performed.
- exact
- Do not attempt to automatically convert Unicode strings. This is used for commands that must handle conversion themselves. For example: bzr diff needs to translate Unicode paths, but should not change the exact text of the contents of the files.
Because Transports work in URLs (as defined earlier), printing the raw URL to the user is usually less than optimal. Characters outside the standard set are printed as escapes, rather than the real character, and local paths would be printed as file:// urls. The function unescape_for_display attempts to unescape a URL, such that anything that cannot be printed in the current encoding stays an escaped URL, but valid characters are generated where possible.
The bzrlib.osutils module has many useful helper functions, including some more portable variants of functions in the standard library.
In particular, don't use shutil.rmtree unless it's acceptable for it to fail on Windows if some files are readonly or still open elsewhere. Use bzrlib.osutils.rmtree instead.
We write some extensions in C using pyrex. We design these to work in three scenarios:
- User with no C compiler
- User with C compiler
- Developers
The recommended way to install bzr is to have a C compiler so that the extensions can be built, but if no C compiler is present, the pure python versions we supply will work, though more slowly.
For developers we recommend that pyrex be installed, so that the C extensions can be changed if needed.
For the C extensions, the extension module should always match the original python one in all respects (modulo speed). This should be maintained over time.
To create an extension, add rules to setup.py for building it with pyrex, and with distutils. Now start with an empty .pyx file. At the top add "include 'foo.py'". This will import the contents of foo.py into this file at build time - remember that only one module will be loaded at runtime. Now you can subclass classes, or replace functions, and only your changes need to be present in the .pyx file.
Note that pyrex does not support all 2.4 programming idioms, so some syntax changes may be required. I.e.
- 'from foo import (bar, gam)' needs to change to not use the brackets.
- 'import foo.bar as bar' needs to be 'import foo.bar; bar = foo.bar'
If the changes are too dramatic, consider maintaining the python code twice - once in the .pyx, and once in the .py, and no longer including the .py file.
To build a win32 installer, see the instructions on the wiki page:
While everyone in the Bazaar community is welcome and encouraged to propose and submit changes, a smaller team is responsible for pulling those changes together into a cohesive whole. In addition to the general developer stuff covered above, "core" developers have responsibility for:
Note
Removing barriers to community participation is a key reason for adopting distributed VCS technology. While DVCS removes many technical barriers, a small number of social barriers are often necessary instead. By documenting how the above things are done, we hope to encourage more people to participate in these activities, keeping the differences between core and non-core contributors to a minimum.
While it has many advantages, one of the challenges of distributed development is keeping everyone else aware of what you're working on. There are numerous ways to do this:
As well as the email notifications that occur when merge requests are sent and reviewed, you can keep others informed of where you're spending your energy by emailing the bazaar-commits list implicitly. To do this, install and configure the Email plugin. One way to do this is add these configuration settings to your central configuration file (e.g. ~/.bazaar/bazaar.conf on Linux):
[DEFAULT]
email = Joe Smith <[email protected]>
Of the many workflows supported by Bazaar, the one adopted for Bazaar development itself is known as "Decentralized with automatic gatekeeper". To repeat the explanation of this given on:
In this workflow, each developer has their own branch or branches, plus read-only access to the mainline. A software gatekeeper (e.g. PQM) has commit rights to the main branch. When a developer wants their work merged, they request the gatekeeper to merge it. The gatekeeper does a merge, a compile, and runs the test suite. If the code passes, it is merged into the mainline.
In a nutshell, here's the overall submission process:
Note
At present, PQM always takes the changes to merge from a branch at a URL that can be read by it. For Bazaar, that means a public, typically http, URL.
As a result, the following things are needed to use PQM for submissions:
If you don't have your own web server running, branches can always be pushed to Launchpad. Here's the process for doing that:
Depending on your location throughout the world and the size of your repository though, it is often quicker to use an alternative public location to Launchpad, particularly if you can set up your own repo and push into that. By using an existing repo, push only needs to send the changes, instead of the complete repository every time. Note that it is easy to register branches in other locations with Launchpad so no benefits are lost by going this way.
Note
For Canonical staff, <user>/ is one suggestion for public http branches. Contact your manager for information on accessing this system if required.
It should also be noted that best practice in this area is subject to change as things evolve. For example, once the Bazaar smart server on Launchpad supports server-side branching, the performance situation will be very different to what it is now (Jun 2007).
While not strictly required, the PQM plugin automates a few things and reduces the chance of error. Before looking at the plugin, it helps to understand a little more how PQM operates. Basically, PQM requires an email indicating what you want it to do. The email typically looks like this:
star-merge source-branch target-branch
For example:
star-merge
Note that the command needs to be on one line. The subject of the email will be used for the commit message. The email also needs to be gpg signed with a key that PQM accepts.
The advantages of using the PQM plugin are:
New features typically require a fair amount of discussion, design and debate. For Bazaar, that information is often captured in a so-called "blueprint" on our Wiki. Overall tracking of blueprints and their status is done using Launchpad's relevant tracker. Once a blueprint is ready for review, please announce it on the mailing list.
Alternatively, send an email to the mailing list.
Keeping on top of bugs reported is an important part of ongoing release planning. Everyone in the community is welcome and encouraged to raise bugs, confirm bugs raised by others, and nominate a priority. Practically though, a good percentage of bug triage is often done by the core developers, partially because of their depth of product knowledge.
With respect to bug triage, core developers are encouraged to play an active role with particular attention to the following tasks:
Note
As well as prioritizing bugs and nominating them against a target milestone, Launchpad lets core developers offer to mentor others in fixing them.
For developers whose skills and experience centre around Windows or Web-based development, building an iPhone app can seem like a daunting proposition involving learning new libraries, languages and tools, and buying new hardware. However, for a large class of apps, it is possible to take advantage of your existing knowledge and skills to easily create powerful, full-featured utilities that have a native look and feel. In this article, I'll explore the process of building an app using the standard web technologies of HTML, CSS, jQuery and jQuery Mobile, developed on a PC running Windows.
For this article, I created a Task Timer utility to illustrate the type of app that can be built.
This Task Timer app can be used to track how your time is spent throughout the day or week. For example, you could create a number of different tasks - Research, Development, Documentation, Marketing and Support - and choose the most suitable for each task you work on. I use the app for recording how long I spend on each of my client's projects for billing purposes.
Here's what it looks like:
The active task is identified with a pulsating orange/brown colour and its duration, on the right-hand side - in hours:minutes:seconds format - increases with every passing second that the task remains active. Tapping a row will activate a different task, and a new task can be added by pressing the button in the top right corner.
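A sketch of the kind of formatting involved in producing that ticking display — the function name and layout here are my own, not necessarily the app's actual code:

```javascript
// Format a duration in milliseconds as hours:minutes:seconds,
// e.g. 3661000 -> "1:01:01". Illustrative sketch only.
function formatDuration(ms) {
    var totalSeconds = Math.floor(ms / 1000);
    var hours = Math.floor(totalSeconds / 3600);
    var minutes = Math.floor((totalSeconds % 3600) / 60);
    var seconds = totalSeconds % 60;
    function pad(n) { return (n < 10 ? '0' : '') + n; }
    return hours + ':' + pad(minutes) + ':' + pad(seconds);
}
```

Calling something like this once a second with the active task's accumulated time is enough to produce the updating duration shown on the right-hand side of each row.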
You can access options for deleting, renaming, or resetting a task's times by clicking the button in the top-left corner, which will animate the screen in the familiar iOS way as it switches into editing mode.
While in editing mode, clicking the "No Entry Sign" exposes the task's delete button , and pressing the right-facing arrow reveals the Edit Task screen.
If you have an iPhone, you can see a live demo of the Task Timer app using Mobile Safari. Alternatively, open the link in Chrome on a PC for a similar experience.
Please note that since the app was designed to run only on iPhone, no effort has been invested in designing for cross-browser compatibility.
When you visit the link using Mobile Safari, the page will load as you'd expect for any website - within Safari's frame - and the address and toolbar will be present. To make the web page look and behave like a real iOS app, it must first be added to the Home Screen.
Click the options button and choose Add to Home Screen. This adds the app's icon to the iPhone home screen making it indistinguishable from other apps installed from the App Store:
Now that the app is installed, it can be launched by clicking its icon. A splash screen will be briefly shown then the app will fill the screen completely, showing no sign of Safari's address bar or buttons.
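At runtime, a page can even detect whether it was launched this way: Mobile Safari exposes a non-standard standalone flag on window.navigator. A small sketch (the helper name is my own, not part of the app):

```javascript
// True when the page was launched from a home screen icon
// (full-screen, no Safari chrome). iOS Safari sets navigator.standalone;
// other browsers simply leave it undefined.
function isRunningStandalone(nav) {
    return !!(nav && nav.standalone);
}

// In the browser you would call: isRunningStandalone(window.navigator)
```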
The Task Timer app is built entirely using common web technologies. HTML is used for structure, jQuery Mobile and CSS for layout and styling, and JavaScript (and JQuery) to add behaviour.
The app was built using Visual Studio 2012 and the fantastic Resharper plugin. All images were created using the free image editor Paint.NET with the BoltBait plugin pack, which adds additional menu options to Paint.NET, such as Color Balance, Bevel Selection and Feather Selection.
The combination of Visual Studio and Resharper greatly assist the development process by providing useful tools such as syntax highlighting, autocomplete, easy project navigation and early error detection, but these tools aren't essential and a simple text editor like notepad or notepad++ would suffice.
This article won't assume any tools other than a basic text editor for writing the code, Chrome for testing and diagnostics, and access to an image editor, like Paint.NET.
To build the Task Timer, our first goal is to display a page containing a list of tasks, which should look and feel familiar to an iPhone user.
jQuery Mobile makes it trivially simple to produce the type of screen that is common on touch-capable mobile devices. The framework will effortlessly style native HTML elements, such as <button>, <a href=""> and <ul>, so they look and act like common mobile controls - buttons and links large enough to be targeted with a finger rather than a more precise mouse pointer and lists that scroll with a flick of a finger.
The framework will also convert standard <a> links to use AJAX to fetch the target content, which removes the visible flicker associated with a normal HTTP GET page request. The target page can be configured to appear with a built-in transition, such as slideup, turn, fade or pop.
A jQuery Mobile page is built declaratively, using HTML data- annotations which are used by the framework to influence how the elements are transformed into mobile-friendly controls.
Try building your own app by copying the HTML below into a file called index.html:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Task Timer</title>
<link href="" rel="stylesheet"/>
<link href="" rel="stylesheet"/>
<script src=""></script>
<script src=""></script>
</head>
<body>
<div id="tasksPage" data-role="page">
  <div data-role="header">
    <a href="#" data-role="button">Edit</a>
    <h1>Task Timer</h1>
    <a href="#" data-role="button">Add</a>
  </div>
  <div data-role="content">
    <ul id="taskList" data-role="listview">
      <li>Task 1</li>
      <li>Task 2</li>
    </ul>
  </div>
  <div data-role="footer">
    <a href="#" data-role="button">About</a>
  </div>
</div>
</body>
</html>
Save and open this file in a browser and you'll see a screen like this:
We now have the basics of an app, all from a few lines of markup with no custom CSS or JavaScript.
Notice how the <div data-role="header"> element has been automatically styled like an iOS app header using a subtle gradient, and the <ul data-role="listview"> markup has become a full-width, styled list that can be scrolled up or down with a flick of a finger.
jQuery Mobile performs its magic by manipulating the document object model (DOM) - produced from the raw HTML - before the page is displayed in the browser. Additional elements and CSS classes are automatically added to the DOM to give it an app-like look and feel.
You can see the transformation that takes place by comparing the bare document object model in the top image to the one resulting from the JQuery Mobile transformation, below:
jQuery Mobile has powerful support for re-skinning its controls with a well-designed suite of CSS classes. The best starting point for styling your app and applying your own branding, is the jQuery Mobile ThemeRoller. If you've ever used the Themeroller for JQuery UI, the point and click interface should be familiar.
You can either use the default theme as your starting point or import an existing one, as we will now do.
Copy the contents of the Task Timer Theme to the clipboard, then navigate to and click the Import or Upgrade button at the top of the screen. This will load the existing styles into the editor allowing you to make changes.
jQuery Mobile supports multiple swatches, each of which can have a distinct style for a particular purpose.
The Task Timer app uses four swatches - A, B, C and D. I've chosen to use A as the primary style with the others being used to highlight particular elements. You can add as many swatches as needed and there are no constraints on which to use for a particular purpose.
After configuring the themes, click the Download button on the ThemeRoller to receive a ZIP file containing the updated CSS and an example HTML file.
Swatches are applied to individual screen elements by adding a data-theme attribute.
Below, you can see the index.html file from earlier with the data-theme="a" attribute applied to the page's <div> element and data-theme="c" to each button:
Before we go much further, let's discuss the tools used to inspect the app as it is being built. Regular feedback is essential during development, and it is vital that, when unexpected behaviour is encountered, we have the means of digging deeper and diagnosing the underlying issue.
I find Chrome to be the best choice of browser for the large part of development and, by installing the useful Window Resizer extension, we can set the browser window's dimensions for a good approximation of how the page will look on an iPhone.
Chrome's built-in diagnostic tools are excellent for inspecting the document object model (DOM), examining element's CSS styles, stepping through JavaScript, and for diagnosing performance issues using its built in profiler.
Although using Chrome is a quick way to get an accurate understanding of how the app will look and operate on iOS, there is no substitute for the real thing, and it is useful to configure a website in IIS so the app can be accessed via Wi-Fi on the local network from an iPhone running Safari. The rendering engine inside Chrome currently shares its core (WebKit) with Mobile Safari (although this will change in the future), so it is unusual to see differences in the way pages are rendered; it is still important to regularly test the app in its native environment to ensure performance is acceptable and behaviour is as designed.
Now we've created the main page, the Task Timer is beginning to look like a real app, but this is in appearance only. jQuery Mobile handles the general page's appearance but to bring the app to life, we must add behaviour using JavaScript with a little help from JQuery.
If you take a look at the finished app you'll see that, in addition to jQuery and jQuery mobile, it refers to three JavaScript files: taskTimer.utils.js, taskTimer.model.js and taskTimer.ui.js (see below).
<html>
<head>
<!-- some tags removed -->
<script src=""></script>
<script src=""></script>
<script type="text/javascript" src="scripts/taskTimer.utils.js"></script>
<script type="text/javascript" src="scripts/taskTimer.model.js"></script>
<script type="text/javascript" src="scripts/taskTimer.ui.js"></script>
These three scripts provide the interaction behaviour, input validation, model persistence and formatting logic that constitute the app. Their responsibilities and relationships are shown in the diagram below:
The UI module is the only user-facing layer. It is responsible for responding to events generated by the user, such as screen taps and swipes, and it must present information held in the Model to the user by manipulating the browser's DOM. The UI layer is also responsible for preventing inappropriate actions being performed, such as editing an empty task list, which, in this particular case, is achieved by hiding the Edit button when the last list item is removed.
The Model is an in-memory, conceptual representation of the tasks, including the periods in which each task has been active. In addition to containing a set of structured data, the Model is also responsible for controlling access to that data - fully encapsulating it - to prevent unnatural states from occurring, such as having a period of time with an end but no start. In other words, the model is a classic domain model.
The Utils module is a collection of stateless methods (pure functions), generally containing logic for manipulating time and dates, which is used by both the UI and the Model layers.
Let's take a look at an example Model containing two tasks, the first of which is active:
Each task is uniquely identified by a GUID, which is generated using a simple algorithm. Using a GUID permits the task's name to be changed without affecting any references to it.
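One common client-side approach — shown here as a sketch; the app's actual generator may differ — is to fill in a version-4 UUID template with random hex digits:

```javascript
// Generate a random version-4 style GUID,
// e.g. "3b241101-e2bb-42dd-b91d-0f88d6f7a2a8". Illustrative sketch.
function createGuid() {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
        var r = Math.random() * 16 | 0;            // random digit 0-15
        var v = (c === 'x') ? r : (r & 0x3 | 0x8); // 'y' must be 8, 9, a or b
        return v.toString(16);
    });
}
```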
Each task has a list of active periods with each having a start and end date, represented as the number of milliseconds that have elapsed since 1st January 1970. If a task is currently active, its end date will be null. The ID of the active task is held in activeTaskId.
There are many alternative ways that could have been chosen to represent a task inside the Task Timer. A simpler model might have stored the total number of elapsed seconds as a single value, rather than having separate values for the start and end times. In one way this would be beneficial because we avoid having to calculate the difference between the two to produce the elapsed time. However, this simpler model would not work when we consider how iOS manages memory.
In iOS, when an app is on-screen and being interacted with, it is given the ability to execute instructions, update the screen and respond to events but, when another app is activated, or the screen becomes locked after a set period of time, our app will be frozen and will be evicted from memory if it runs low. iOS generally expects apps to be able to persist their state so they may be resumed at a later time. This gives the OS the freedom to reuse resources as deemed appropriate to conserve battery life and offer the best user experience.
The Model shown above was chosen because it is ideal for persisting state that can be reloaded later, presenting the illusion that the app was constantly running throughout.
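Computing a task's total duration from this representation is then a simple pass over its periods, treating a null end as "now". A sketch (the names are assumptions, not the app's actual code):

```javascript
// Total elapsed milliseconds across a task's periods.
// Each period is { start: ms, end: ms }, with end === null while active.
function elapsedMilliseconds(periods, now) {
    var total = 0;
    for (var i = 0; i < periods.length; i++) {
        var end = (periods[i].end === null) ? now : periods[i].end;
        total += end - periods[i].start;
    }
    return total;
}
```

Because only start/end instants are stored, the same calculation gives the right answer whether the app ran continuously or was frozen and later resumed by iOS.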
Take a look at the Resources tab on Chrome's Developer Window (press F12 to activate), and you'll see a string of JSON stored against the blackjetsoftware.com domain, under the tasks key.
Local Storage is a newish web technology that is now supported by all modern browsers, including Chrome and Mobile Safari. It is a key-value, client-side-only store that is isolated by domain. This is a remarkably useful technology, considering its simplicity and, as you can see from the image above, it is used by the Task Timer app to keep track of tasks and their times.
When the Task Timer is launched, an attempt is made to read the local storage value identified by the tasks key and deserialise it to the in-memory model. If the key doesn't exist, a model will be created containing no tasks.
if (localStorage.tasks) {
var persistedModel = JSON.parse(localStorage.tasks);
// ...
}
Whenever a task is created, edited, or deleted, or when the active task is changed, the in-memory model is serialised to a string and written back to localStorage:
localStorage.tasks = JSON.stringify(window.taskTimer.model);
This simple persistent store permits the Task Timer to present the illusion that it is continually running and tracking elapsed time against the active task. Because this data is stored in the browser on the client machine, it is completely private and cannot be accessed by other apps or devices. It is never sent to the server and the browser will enforce domain-level isolation so other websites cannot access it either.
Take a look at the JavaScript files, for example taskTimer.model.js, and you'll see they follow the same general structure:
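The shape each file follows looks roughly like this — a sketch with illustrative member names (and attached to a plain taskTimer object here, where the app hangs it off window.taskTimer):

```javascript
// JavaScript Module Pattern: an immediately-invoked function whose closure
// holds private state; only the returned object is visible to callers.
var taskTimer = taskTimer || {};
taskTimer.model = (function () {
    var tasks = [];                  // private - invisible outside the module

    function addTask(name) {         // exposed via the return object below
        var task = { name: name };
        tasks.push(task);
        return task;
    }

    function getTaskCount() {
        return tasks.length;
    }

    return {
        addTask: addTask,
        getTaskCount: getTaskCount
    };
}());
```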
This is the JavaScript Module Pattern. It provides all-important encapsulation for code modules to protect their internal data structures from being accessed accidentally, or deliberately, from outside code. This separation of interface and implementation is essential for building solid code whose behaviour can be reasoned through and is adaptable for future change. By using this pattern, only the module itself is visible from the global namespace (window.taskTimer.model) and its 'insides' are selectively exposed, via the return structure, to be used by other code.
If your background is like mine - using languages that are strongly class based, like Java or C# - it can take a while to internalise this style of encapsulation to the point where it is second nature to use. However, I'd fully recommend persevering so you can create modules with crisply-defined responsibilities and low coupling that are easy to comprehend and reason through. This is key to maintainability and agility.
A module like the taskTimer.model, where the code doesn't read from or update the user interface, is an ideal candidate for unit testing. Its public interface is likely to be fairly stable over time, since if it did change significantly the Task Timer has become something else. Also, models often contain complex logic that must be verified directly to cover all scenarios and edge cases and this can be tricky and time consuming to set up through the user interface alone. These two factors together suggest that the benefit of writing and maintaining an automated test suite for this module will be greater than its cost over the lifetime of the app.
All major languages and frameworks now have open source unit testing libraries available and JavaScript is no exception having many frameworks to choose from. For this project I chose to use QUnit, developed and maintained by the jQuery team.
Tests are defined in QUnit slightly differently to the usual xUnit pattern, where each test is a parameterless, non-returning method, and instead use a more idiomatic JavaScript approach of using anonymous functions.
Here is a sample of some of the tests in the Task Timer project to give you an idea how they're constructed:
var model = window.taskTimer.model;
test("adding task generates a non blank Guid", function () {
var task = model.addTask('Task 1');
console.log('new id = ' + task.getId());
notEqual(task.getId(), '');
notEqual(task.getId(), null);
});
test("getTaskById returns task if it exists", function () {
var task1 = model.addTask('Task 1');
var task1Id = task1.getId();
deepEqual(model.getTaskById(task1Id), task1);
});
test("getTaskById returns null if task doesn't exist", function () {
deepEqual(model.getTaskById('0'), null);
});
All standard assertion methods are available in QUnit, such as equal, deepEqual and notEqual, and being able to use a human-readable test description is a big advantage over tools like nUnit that require test names to be compiler friendly.
You can see the live test results for the Task Timer unit test suite by visiting the unit tests page.
There are several ways to specify which image should be used as the iOS app icon. The two main ones are:
- Placing a file named apple-touch-icon.png in the site's root directory.
- Adding a <link rel="apple-touch-icon" href="/my-icon.png" /> element to the page's <head>.
To support Retina iPhone models (iPhone 4, 4S and 5), this icon should be 114x114 pixels with 24-bit colour.
Once an image has been specified, it will be used instead of the usual mini picture of the page when the app is added to the home screen.
You may notice your image has a glassy bevel effect applied when it appears in iOS. This effect can improve your image's appearance and make it feel more professional and three-dimensional, but some images will look worse, particularly those with their own lighting effects. For these, you can force iOS to use an unadulterated version by adding a -precomposed suffix to your image name or the rel attribute value.
Below is a pure black 114x114 icon with the standard iOS glossy bevel effect applied, produced by naming image apple-touch-icon.png or using the link tag: <link rel="apple-touch-icon" href="/my-icon.png" />
And here is the same pure black 114x114 icon without the effect, produced by naming image apple-touch-icon-precomposed.png or using link tag: <link rel="apple-touch-icon-precomposed" href="/my-icon.png" />
If you study them closely, you'll notice the built-in Apple iOS apps use a mix of pre-composed and non-pre-composed icons. The Contacts, Calculator, Calendar and Notes apps are all pre-composed (i.e. no glossy bevel) but the Messages, App Store, Phone and Music apps use the standard beveled icon.
Although the Task Timer icon has a bevel, it is a manually-applied effect included in the image itself. This gives a more subtle effect than the one applied automatically by iOS, whilst still appearing like a classic iOS app. The icon was built in Paint.NET by taking the initial app image, of the checklist and stopwatch, and overlaying it with a 50% transparent layer containing the black glossy beveled image shown above. This was then blended using Screen mode to produce the finished icon:
Take a look at the pre-flattened Paint.NET icon file to see all layers used to construct the image.
The current version of the Task Timer app runs well within the confines of the browser but it's conceivable that a future version might need to access the device's hardware or services to provide new functionality. Perhaps, it would be useful to record the location at which a task was created using GPS, or support recording audio and video clips and attaching these to a Task for reference. Fortunately, for apps that need access to the iPhone's native hardware and services, there are frameworks available that can bridge the gap from browser to OS. One popular framework is PhoneGap. This works by running your HTML, CSS and JavaScript within a special webview control that augments the DOM's API with additional functions to expose the underlying device's features.
This example shows how easy it is to retrieve the device's current location using the PhoneGap API from JavaScript:
var onSuccess = function(position) {
alert('Latitude=' + position.coords.latitude + ', Longitude=' + position.coords.longitude);
};
var onError = function(error) {
alert(error.message);
}
navigator.geolocation.getCurrentPosition(onSuccess, onError);
As a bonus, an app built with PhoneGap can be submitted to the iOS App Store and installed on the user's device, so it can be easily be discovered by users, monetised and run without needing a network connection.
Using jQuery Mobile together with common web technologies is a fantastic way to utilise existing skills to produce mobile applications that look and feel very similar to native apps. The availability of frameworks such as PhoneGap and Titanium offer a path for evolving your app so it may use native iPhone services and hardware without having to undergo an extensive rewrite.
It should be noted that using a web-based technology stack as described in this article is not suitable for all types of app. The additional layers of abstraction involved do reduce the potential for optimising the app's performance and I wouldn't recommend using this approach for any app that requires intensive dynamically rendered graphics, such as games or 3D modeling tools, but for many types of app, particularly server-based business apps and small utilities that don't need extensive processing capability on the device, using jQuery Mobile, HTML, CSS and JavaScript is an excellent choice for efficiently producing native-looking apps using commonly available tools and skills.
I hope this article has offered a glimpse of what's possible with browser-based iPhone apps and demonstrates the simplicity of building a basic iOS app.
Good luck building your first iPhone app and I look forward to hearing about your experiences!. | http://www.codeproject.com/Articles/579532/Building-an-iPhone-App-using-jQuery-Mobile | CC-MAIN-2014-35 | refinedweb | 3,871 | 57.1 |
Is there a function in any of the standard libraries that does this, or do I have ot create my own?
Printable View
Is there a function in any of the standard libraries that does this, or do I have ot create my own?
You mean take a null terminated array of characters that contains the textual representation of a floating point number and return said floating point number?
The function is atof() //(ascii to float)
It takes a pointer to the array of char and returns a double precision floating pooint value.
It is found in <cstdlib>.
Yep, that's what I needed, thanks.
strtod() would be better choice.
So... what's the second paramiter of strtod() supposed to be?
Did you look it up?Did you look it up?Quote:
Originally posted by Nippashish
So... what's the second paramiter of strtod() supposed to be?
What am I saying! This is a C++ board, use a stringstream.
Although if you still want info on strtod() go here:
An example of a stringstream converting a char * to a float:
What part of Nova Scotia do ye hail from?What part of Nova Scotia do ye hail from?Code:
#include <iostream>
#include <sstream>
using namespace std;
int main ()
{
float val;
stringstream ss (stringstream::in | stringstream::out);
ss << "1.2";
ss >> val;
cout << val*2 << endl;
return 0;
}
Was thinking the same thing, Elbro.
Why are you using a nul terminated array of char anyway, nippa? Strings are almost always better.
I never really used strings before so it just never occured to me, I'm using them now though, and it's making my life a hell of a lot easier :D.
And I'm from the Bridgewater area. | http://cboard.cprogramming.com/cplusplus-programming/27867-char-array-float-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 289 | 84.17 |
Holy cow, I wrote a book!
The answer depends on which "hard drive almost full"
warning you're talking about.
Note that these boundaries are merely the current implementation
(up until Windows 7).
Future versions of Windows reserve the right to change the thresholds.
The information provided is for entertainment purposes only.
The thermometer under the drive icon in My Computer uses
a very simple algorithm:
A drive is drawn in the warning state when it is 90% full.
The low disk space warning balloon is more complicated.
The simplified version is that it warns of low disk space
on drives bigger than about 3GB
when free disk space drops below
200MB.
The warnings become more urgent when free disk space drops below 80MB,
50MB, and finally 1MB.
(For drives smaller than 3GB, the rules are different,
but nobody—to within experimental error—has
hard drives that small anyway, so it's pretty much dead code now.)
These thresholds cannot be customized,
but at least you can
turn off the low disk space balloons..
James Risto asks,
"Is there a way to change the behavior of the CMD.EXE window?
I would like to add a status line."
The use of the phrase "the CMD.EXE window" is ambiguous.
James could be referring to the console itself, or he could be
referring to the CMD.EXE progarm.
The program running in a console decides what appears in the console.
If you want to devote a line of text to a status bar, then feel free
to code one up.
But if you didn't write the program that's running,
then you're at the mercy of whatever that program decided to display.
Just to show that it can be done, here's a totally useless console
program that contains a status bar.
#define UNICODE
#define _UNICODE
#include <windows.h>
#include <strsafe.h> // for StringCchPrintf
void DrawStatusBar(HANDLE hScreen)
{
CONSOLE_SCREEN_BUFFER_INFO sbi;
if (!GetConsoleScreenBufferInfo(hScreen, &sbi)) return;
TCHAR szBuf[80];
StringCchPrintf(szBuf, 80, TEXT("Pos = %3d, %3d"),
sbi.dwCursorPosition.X,
sbi.dwCursorPosition.Y);
DWORD dwWritten;
COORD coDest = { 0, sbi.srWindow.Bottom };
WriteConsoleOutputCharacter(hScreen, szBuf, lstrlen(szBuf),
coDest, &dwWritten);
}
Our lame-o status bar consists of the current cursor position.
Notice that the console subsystem does not follow the GDI convention
of
endpoint-exclusive rectangles.
int __cdecl wmain(int argc, WCHAR **argv)
{
HANDLE hConin = CreateFile(TEXT("CONIN$"),
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL, OPEN_EXISTING, 0, NULL);
if (hConin == INVALID_HANDLE_VALUE) return 1;
HANDLE hConout = CreateFile(TEXT("CONOUT$"),
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL, OPEN_EXISTING, 0, NULL);
if (hConout == INVALID_HANDLE_VALUE) return 1;
We start by getting the handles to the current console.
Since we are a fullscreen program, we don't rely on stdin and stdout.
(How do you position the cursor on a redirected output stream?)
HANDLE hScreen = CreateConsoleScreenBuffer(
GENERIC_READ | GENERIC_WRITE,
0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL);
if (!hScreen) return 1;
SetConsoleActiveScreenBuffer(hScreen);
We create a new screen buffer and switch to it, so that our
work doesn't disturb what was previously on the screen.
DWORD dwInMode;
GetConsoleMode(hConin, &dwInMode);
We start by retrieving the original console input mode
before we start fiddling with it,
so we can restore the mode when our program is finished.
SetConsoleCtrlHandler(NULL, TRUE);
SetConsoleMode(hConin, ENABLE_MOUSE_INPUT |
ENABLE_EXTENDED_FLAGS);
We set our console control handler to NULL
(which means "don't terminate on Ctrl+C")
and enable mouse input on the console because we're going to
be tracking the mouse position in our status bar.
NULL
CONSOLE_SCREEN_BUFFER_INFO sbi;
if (!GetConsoleScreenBufferInfo(hConout, &sbi)) return 1;
COORD coDest = { 0, sbi.srWindow.Bottom - sbi.srWindow.Top };
DWORD dw;
FillConsoleOutputAttribute(hScreen,
BACKGROUND_BLUE |
FOREGROUND_BLUE | FOREGROUND_RED |
FOREGROUND_GREEN | FOREGROUND_INTENSITY,
sbi.srWindow.Right - sbi.srWindow.Left + 1,
coDest, &dw);
We retrieve the screen buffer dimensions and draw a blue status
bar at the bottom of the screen.
Notice that the endpoint-inclusive rectangles employed by the
console subsystem result in what look like off-by-one errors.
The bottom line of the screen is Bottom - Top,
which in an endpoint-exclusive world would be the height of the
screen, but since the rectangle is endpoint-inclusive,
this is actually the height of the screen minus 1,
which puts us at the bottom line of the screen.
Similarly Right - Left is the width of the screen
minus 1, so we have to add one back to get the width.
Bottom - Top
Right - Left
DrawStatusBar(hScreen);
Draw our initial status bar.
INPUT_RECORD ir;
BOOL fContinue = TRUE;
while (fContinue && ReadConsoleInput(hConin, &ir, 1, &dw)) {
switch (ir.EventType) {
case MOUSE_EVENT:
if (ir.Event.MouseEvent.dwEventFlags & MOUSE_MOVED) {
SetConsoleCursorPosition(hScreen,
ir.Event.MouseEvent.dwMousePosition);
DrawStatusBar(hScreen);
}
break;
case KEY_EVENT:
if (ir.Event.KeyEvent.wVirtualKeyCode == VK_ESCAPE) {
fContinue = FALSE;
}
break;
}
}
This is the console version of a "message loop":
We read input events from the console and respond to them.
If the mouse moves, we move the cursor to the mouse position and
update the status bar.
If the user hits the Escape key, we exit the program.
SetConsoleMode(hConin, dwInMode);
SetConsoleActiveScreenBuffer(hConout);
return 0;
}
And when the program ends, we clean up: Restore the original
input mode and restore the original screen buffer.
If you run this program, you'll see a happy little status bar
at the bottom whose contents continuously reflect the cursor
position, which you can move by just waving the mouse around.
If you want a status bar in your console program,
go ahead and draw it yourself.
Of course, since it's a console program, your status bar
is going to look console-y since all you have to work with
are rectangular character cells.
Maybe you can make use of those fancy line-drawing characters.
Party like it's 1989!
When you change folders in a common file dialog,
the common file dialog calls SetCurrentDirectory
to match the directory you are viewing.
(Don't make me bring back the Nitpicker's Corner.)
SetCurrentDirectory
Okay, the first reaction to this is,
"What? I didn't know it did that!"
This is the other shoe dropping in the
story of the curse of the current directory.
Now the question is, "Why does it do this?"
Actually, you know the answer to this already.
Many programs require that the current directory match the
directory containing the document being opened.
Now, it turns out, there's a way for you to say,
"No, I'm not one of those lame-o programs.
I can handle current directory being different from the
document directory.
Don't change the current directory when using a common file dialog."
You do this by passing the OFN_NOCHANGEDIR flag.
(If your program uses the
IFileDialog interface, then
NOCHANGEDIR is always enabled.
Hooray for progress.)
OFN_NOCHANGEDIR
IFileDialog
NOCHANGEDIR
But now that you know about this second curse, you can actually use it
as a counter-curse against the first one.
If you determine that a program is holding a directory open,
and you suspect that it is the victim of the curse of the current directory,
you can go to that program and open a common file dialog.
(For example, Save As.)
From that dialog, navigate to some other directory you don't plan on
removing, say, the root of the drive, or your desktop.
Then cancel the dialog.
Since the common file dialog changes the current directory,
you have effectively injected a SetCurrentDirectory
call into the target process,
thereby changing it from the directory you want to remove.
Note, however, that this trick works only if the application
in question omits the OFN_NOCHANGEDIR flag
when it calls GetSaveFileName.
GetSaveFileName
In Explorer, you can easily call up a common file dialog by
typing Win+R then clicking Browse, and in versions of Windows
up through Windows XP, Explorer didn't pass the
OFN_NOCHANGEDIR flag.
We saw last time
that your debugging code can be a security vulnerability
when you don't control the current directory.
A corollary to this is that your delayload code can also be
a security vulnerability, for the same reason.
When you use
the linker's delayload functionality
to defer loading a DLL until the first time it is called,
the linker injects code which calls LoadLibrary
on a DLL the first time you call a function in it,
and then calls GetProcAddress on the functions
you requested.
When you call a delay-loaded function and the delayload code
did not get a function pointer from GetProcAddress
(either because the DLL got loaded but the function does not exist,
or because the DLL never got loaded in the first place),
it raises a special exception indicating that a delayed load failed.
LoadLibrary
GetProcAddress
Let's look again at the order in which the LoadLibrary
function searches for a library:
The code which implements the delayload functionality uses
a relative path when it passes the library name to LoadLibrary.
(It has no choice since all it has to work with is the library name
stored in the IMPLIB.)
Consequently, if the DLL you are delay-loading does not exist in
any of the first four search locations, the LoadLibrary
will look in location 5: the current directory.
At this point, the current directory attack becomes active,
and a bad guy can inject an attack DLL into your process.
For example,
this sample code
uses delayload to detect whether the functions in
dwmapi.dll exist, calling them if so.
If the function IsThemeEnabled is not available,
then it treats themes as not enabled.
If the program runs on a system without dwmapi.dll,
then the delayload will throw an exception,
and the exception is caught and turned into a failure.
Disaster avoided.
dwmapi.dll
IsThemeEnabled
But in fact, the disaster was not avoided; it was introduced.
If you run the program on a system without dwmapi.dll,
then a bad guy can put a rogue copy of dwmapi.dll into
the current directory,
and boom your process just loaded an untrusted DLL.
Game over.
Using the delayload feature to probe for a DLL
is morally equivalent to using a plain LoadLibrary to probe
for the presence of a debugging DLL.
In both cases, you are looking for a DLL with the expectation that
there's a good chance it won't be there.
But it is exactly in those sometimes it won't be there
cases where you become vulnerable to attack.
If you want to probe for the existence of a DLL,
then you need to know what directory the DLL should
be in,
and then load that DLL via that full path
in order to avoid the current directory attack.
On the other hand, if the DLL you want to delayload is
known to be installed in a directory ahead of the current
directory in the search path
(for example, you require versions of the
the operating system in which the DLL is part of the mandatory install,
and the directory in which it is installed is the System32 directory)
then you can use delayload.
In other words, you can use delayload for delaying the load of a DLL.
But if you're using delayload to probe for a DLL's existence,
then you become vulnerable to a current directory attack.
This is one of those subtle unintended consequences of changing
the list of files included with an operating system.
If you take what used to be a mandatory component that can't
be uninstalled,
and you change it to an optional component that can be
uninstalled,
then not only do programs which
linked to the DLL in the traditional manner stop loading,
but you've also introduced a security vulnerability:
Programs which had used delayload under the calculation
(correct at the time it was made) that doing so was safe
are now vulnerable to the current directory attack..
DebugHooks.dll
OnDocLoading
IStream
But this debugging code is also a security vulnerability.
Recall that the library search path searches directories
in the following order:.
LitWare Writer
*.LIT
ABC.LIT
ABC.LTC
ABC.LDC
ABC.LLT
C:\PROPOSAL\ABC.LIT
C:\PROPOSAL
CreateProcess."
chdir
LWW ABC.LIT
LWW C:\PROPOSAL\ABC.LIT:
I wrote this series of entries nearly two years ago,
and even then,
I didn't consider this to be anything particularly groundbreaking,
but apparently some people rediscovered it a few months ago
and are falling all over themselves to claim credit
for having found it first.
It's like a new generations of teenagers who think they invented sex..
The holding my directory open,
and how do I get it to stop?
The process cannot access the file
because it is being used by another process.
someapp.exe?
explorer.exe.
SetCurrentDirectory.
C:\Previous
C:\Victim
.)
TerminateThread..) | http://blogs.msdn.com/b/oldnewthing/archive/2010/11.aspx?PageIndex=2 | CC-MAIN-2015-27 | refinedweb | 2,101 | 53.61 |
detach
Percentile
Detach Objects from the Search Path
Detach a database, i.e., remove it from the
search()
path of available R objects. Usually this is either a
data.frame which has been
attached or a
package which was attached by
library.
Usage
detach(name, pos = 2L, unload = FALSE, character.only = FALSE, force = FALSE)
Arguments
- name
The object to detach. Defaults to
search()[pos]. This can be an unquoted name or a character string but not a character vector. If a number is supplied this is taken as
pos.
- pos
Index position in
search()of the database to detach. When
nameis a number,
pos = nameis used.
- unload
A logical value indicating whether or not to attempt to unload the namespace when a package is being detached. If the package has a namespace and
unloadis
TRUE, then
detachwill attempt to unload the namespace via
unloadNamespace: if the namespace is imported by another namespace or
unloadis
FALSE, no unloading will occur.
- character.only
a logical indicating whether
namecan be assumed to be a character string.
- force
logical: should a package be detached even though other attached packages depend on it?
Details.
Value
The return value is invisible. It is
NULL when a
package is detached, otherwise the environment which was returned by
attach when the object was attached (incorporating any
changes since it was attached).
Note.
Good practice.
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
See Also
attach,
library,
search,
objects,
unloadNamespace,
library.dynam.unload .
Aliases
- detach
Examples
library(base)
# NOT RUN { require(splines) # package detach(package:splines) ## or also library(splines) pkg <- "package:splines" # } # NOT(substitute(db)) attach(db, pos = pos, name = name) print(search()[pos]) detach(name, character.only = TRUE) } attach_and_detach(women, pos = 3) # } | https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/detach | CC-MAIN-2018-51 | refinedweb | 296 | 51.04 |
You can subscribe to this list here.
Showing
1
results of 1
Hi Scott,
Please excuse the delay in replying... I'm not sure *how* easy it is =
relative to EmbPerl, but it's definitely possible in Spyce as well, and =
it's not very difficult. Do something like this. (Warning: I'm just =
typing this in, and it's not tested)
[[\
def default(value, def=3D''):
if value=3D=3DNone: return def
return value
]]
Then, you could write something like:
<input name=3Dfoo value=3D"[[=3Ddefault(request.get1('foo', =
'mydefault'))]]">
If you are using post data, then simply pass in request.post1('foo') =
above.
You could do something similar for selection form elements, perhaps =
using a function like:
[[
def defaultSelect(value, selection):
if value=3D=3DNone: return ''
if value=3D=3Dselection: return 'selected'
return false
]]
Similarly, you could deal with checkboxes, and other form elements. =
Admittedly, it would be nice for this to be encoded as a module. I'll =
put it on the todo list. If you come up with a nice set of functions, =
which you think are convenient and represent the entire spectrum of HTML =
form capabilities, please think about making it into a module, and =
making it available to the community.
In general, I have been thinking about binding forms to data. I hope to =
(when I find the time) to also create a tag library that implements a =
similar functionality, and to perform this binding in a more general =
manner than I just described, which will also encapsulate form =
validation. Stay tuned...
All the best,
Rimon.
----- Original Message -----=20
From: "Scott Chapman" <scott_list@...>
To: <spyce-users@...>
Sent: Saturday, February 08, 2003 10:39 PM
Subject: [Spyce-users] Pre-populating forms
Is there an easy way to pre-populate a form?
With EmbPerl, you can populate the %fdat{} hash (python =3D dictionary) =
and form=20
items are pre-populated with the relevant entry in %fdat. So if you have =
a=20
form input name=3D"email" and you put an email address in $fdat{email} =
it will=20
show up as the value in the form in the user's web browser. The user's =
data=20
coming from the form are found in %fdat{} also. This is a VERY =
convenient=20
mechanism.
I've read the Spyce docs and I don't see that this can be done the same =
way=20
here. Has anyone made a module to do this or did I miss something?
Failing all that, can this be added to Spyce?
Cordially,
Scott | http://sourceforge.net/p/spyce/mailman/spyce-users/?viewmonth=200302&viewday=13 | CC-MAIN-2015-40 | refinedweb | 423 | 62.38 |
Created on 2012-02-28 04:01 by Rich.Rauenzahn, last changed 2015-09-28 03:30 by terry.reedy. This issue is now closed.
Using 64bit python for windows downloaded from python.org on 64bit windows 7.
Python Version 3.2.2
Tk version 8.5
IDLE version 3.2.2
When stepping through code the corresponding line in the editor does not highlight with the code steps. The windows does update the contents, so it appears to be tracking, but just fails to highlight the line.
Double clicking on the line in the debugger will go ahead and highlight it. My settings are all default, and I've double checked the color schemes in the "highlighting" dialog.
Is the "Source" check box in the Debug Control window checked?
Yes, the source box was checkmarked.
Not the first one to encounter this as well:
I am not seeing this problem under Ubuntu, but I do see this problem on Vista. It looks like the "sel" tags get hidden when a window loses focus under Windows.
I closed #17382 as a duplicate of this. The OP, Dirk, only had problem on Windows xp, not on Ubuntu 3.2 and 3.3. I see the same problem on 3.3 win 7.
Roger, do you think this is a windows, tkinter, or idle problem?
I think it's a problem on Tk on Windows. The painting of the selection
highlight goes away when a window loses focus.
If this is a Tk problem on windows, are there any chances to get it fixed? The Tk version installed with python 3.3 seems to be 8.5.11, while there are newer Tk versions available on (8.5.13 and 8.6.0). But I don't know how to connect a stand-alone version of Tcl/Tk with Python, and I don't want to mess up my Python installation.
Martin, is there any way to test if tcl/tk 8.5.13 and/or 8.6.0 fix this windows-only tk issue?
I was going to try Python 3.4 and TK 8.6 on Windows and see what happens.
I tried both TCL/TK 8.5.13 and TCL/TK 8.6 with the latest Python 3.4 on Windows 7 the editor window never showed a line as I stepped through the debugger. I am going to try in Mac/Linux to make sure I am not crazy that a line in the editor window does indicate where the debugger is. Then I intend to look in the bug tracker on TCL/TK and possibly post a question for help. On Windows the initial line does look funny on a line with a quoted string such as print("hello world") the yellow does not cover the quoted string.
Before I forget here are the general steps I followed to get TCL/TK 8.5.13 and 8.6 to work. For TCL/TK 8.6 I had to change the actual Visual Studio 2010 project.
Generally you have to follow the steps in readme.txt located in PCBuild of the source tree. readme.txt references a file Tools\buildbot\external.bat which has the step by step instructions on how to build TCL/TK. Skip the part about going to svn.python.org/external because neither 8.6 or 8.5.13 have been added yet. So I downloaded the code from the sourceforge.net TCL/TK site. This post explains a step that must be performed which is not documented in any of the references above:
Finally for TCL/TK 8.6 I had to change the visual studio project to use the right .lib file for TCL/TK. Start Visual Studio 2010 right click _tkinter project then select properties. Under the Linker/Input option set Additional Dependencies to c:\prog\cpython\tcltk\lib\tcl86tg.lib and c:\prog\cpython\tcltk\lib\tk86tg.lib
erase the old values. Now rebuild and everything should just work (c) don't forget to manually copy the DLL files to the PCBuild directory.
I have confirmed that Linux and Mac work great but Windows fails to highlight the current line in the editor window. Next I will try and find/file a bug with the TCL/TK folks.....
Here is a backtrace from PDB:
-> self.sync_source_line()
/Volumes/SecurePython3/cpython/py34/Lib/idlelib/Debugger.py(211)sync_source_line()
-> self.flist.gotofileline(filename, lineno)
/Volumes/SecurePython3/cpython/py34/Lib/idlelib/FileList.py(46)gotofileline()
-> edit.gotoline(lineno)
> /Volumes/SecurePython3/cpython/py34/Lib/idlelib/EditorWindow.py(694)gotoline()
-> self.center()
The offending code seems to be in EditorWindow.py:gotoline()
def gotoline(self, lineno):
if lineno is not None and lineno > 0:
self.text.mark_set("insert", "%d.0" % lineno)
self.text.tag_remove("sel", "1.0", "end")
self.text.tag_add("sel", "insert", "insert +1l")
self.center()
Next I am going to write a small program to try and reproduce the bug so I can file with the TCL/TK folks.
I created a small test program trying to reproduce the problem on Windows 7, Python 3.4, and TK 8.6. Unfortunately it works fine and each line is highlighted as a user presses ctrl-B. I got the select code straight from IDLE. Any other ideas Roger? This might be an IDLE bug after all.
from tkinter import *
class SelectTest:
def __init__(self):
self.mainwin = Tk()
# Create a text widget
# idle creates the text widget like this
#self.text = text = MultiCallCreator(Text)(text_frame, **text_options)
self.textbox = Text(self.mainwin, width=80, height=10)
self.textbox.pack()
# Add some text
self.textbox.insert(INSERT, "line 1: Select some text\n")
self.textbox.insert(INSERT, "line 2: Select some text\n")
self.textbox.insert(INSERT, "line 3: Select some text\n")
self.textbox.insert(INSERT, "line 4: Select some text\n")
# Add the binding
self.textbox.bind("<Control-Key-b>", self.select_just_like_idle)
# just in case caps lock is on
self.textbox.bind("<Control-Key-B>", self.select_just_like_idle)
self.lineno = 1
def select_just_like_idle(self, event):
print("Select just like idle was called")
if self.lineno is not None and self.lineno > 0:
self.textbox.mark_set("insert", "%d.0" % self.lineno)
self.textbox.tag_remove("sel", "1.0", "end")
self.textbox.tag_add("sel", "insert", "insert +1l")
self.lineno = self.lineno + 1
if __name__ == "__main__":
# Start the program
selectIDLETest = SelectTest()
selectIDLETest.mainwin.mainloop()
It's definitely a "bug" with Tk. Whenever the Text widget loses focus, the selection highlighting goes away. The following example code brings up a focused text box with selected text. Clicking the button switches focus to the button itself and then back to the text widget. During that focus change, the selection highlight goes away.
from tkinter import *
main = Tk()
text = Text(main, width=40, height=10, wrap="char")
text.pack()
text.insert(INSERT, "".join(map(str, range(100))))
text.tag_add(SEL, "1.0", "end")
text.focus_set()
def jump():
text.after(500, btn.focus_set)
text.after(1000, text.focus_set)
btn = Button(main, text="Click me", command=jump)
btn.pack()
main.mainloop()
One possible solution would be to use the "<FocusOut>" and "<FocusIn>" events to substitute the "sel" tag with a custom tag so that the highlight remains when losing focus, and then replace it with the "sel" tag when gaining focus.
Roger,
You are a genius!!!!! The example program duplicates the bug exactly. It works on Mac (I assume Linux but I will test on Linux) and it does not work on correctly on Windows. On Windows as soon as the text widget looses focus then the hi-light disappears. I will use this code and file a bug report for TCL/TK and post on the tkinter mailing list. Thank you I was stuck.
Bug report has been filed with Tk here:
I posted this message on tinter discuss email list:
Todd, thank you for being proactive with the Tcl/Tk community. Hopefully they will offer a fix in their next version.
In the meanwhile, here's a patch that works around the problem on Windows. The purpose of getting the highlight configuration at each FocusOut event is in case the current theme changes.
I forgot to mention that the idea for replacing the "sel" tags is based on an idea from Sarah's patch from #17511.
I'm pinging this issue to see if anyone has had any problems with the Windows-specific workaround for highlighting the selection tags. Issue17511 depends on this fix.
64 bit Win 7 with 32 bit debug build.
Patch imported cleanly with all 3 branches. I cannot currently test 2.7. On 3.3 and 3.4, debugger worked fine relative to this issue: editor window highlight tracked line displayed in debugger as far as I checked.
Normal selection and Shell Go to file/line highlight seem to work normally. Normal color marking seems not affected. I cannot think of any more manual tests. So if someone else checks 2.7 and it is also ok, I think you should apply this, so if there is any problem, it can show up in other usage. We can deal with a patched tk if and when it happens.
I'm waiting until after the next wave of maitenance releases before I apply this patch. If anyone feels strongly that it should be applied now, let me know.
I applied the patch to the latest 2.7.4 64-bit release version of Python and it worked.
Roger,
If you and Terry tested I would apply now so it makes it into 2.7.5. Why not? Right now the debugger in Windows doesn't highlight and I am sure that has to drive people crazy. But if you feel it needs more testing maybe you should let it bake some more?
It won't make it in 2.7.5. Benjamin tagged the 2.7.5 release a couple of days ago.
I'll apply this later tonight.
On second thought, I'll wait until after the releases so that Misc/NEWS gets populated properly.
New changeset 5ae830ff6d64 by Roger Serwy in branch '2.7':
#14146: Highlight source line while debugging on Windows.
New changeset 3735b4e0fc7c by Roger Serwy in branch '3.3':
#14146: Highlight source line while debugging on Windows.
New changeset b56ae3f878cb by Roger Serwy in branch 'default':
#14146: merge with 3.3.
I committed the Tk workaround for the Windows platform. I'm leaving this issue as pending with a resolution of later in case Tk developers address the bug report mentioned in msg185632.
If anyone else wishes to close it, feel free.
This appears fixed for not, even if not the way we would like. Thanks.
If a future tk changes, a new patch will be version-dependent and that will be a new issue.
Regardless of what I said in the previous message, highlighting of found text is NOT working in any current release -- 2.7.8, 3.3.5 (final release) and 3.4.1. Mark Lawrence opened #22179. I propose there to use the 'found' highlight, as used in the replace dialog. for the find dialog also.
Sorry, wrong issue (should have been #17511). Debugger source line highlighting works fine.
The workaround function added in this issue is replaced in #24972 with the Text widget inactiveselectbackground option. In the course of testing, I discovered that while the function still works, debugger source highlighting no longer does, at least not on my machine. I opened new issue #25254. On the other hand, find seems to be working better. | https://bugs.python.org/issue14146 | CC-MAIN-2018-13 | refinedweb | 1,924 | 78.25 |
Via a blog entry by Jimmy Nilsson I ended up on a plea for dropping the 'I' from interface names by Ingo Lundberg. Interesting read and reasoning, I'd say.
In a way, you can say that prefixing type names is a form of hungarian coding. But is that really bad? I'm leaning towards 'no, that's not bad at all'. I agree with Jimmy here that not having the 'I' prefix on an interface name makes it harder to interpret the code when reading it.
Humans are lousy code interpreters. Give 10 programmers a fairly long piece of code to read and you'll get 10 different answers to the question "What exactly does the code do?". The root cause of that of course is that a human being has to keep track of a stack, local variable values, object instances and their contents, while reading, and interpreting, the code. Some will say: "So [censored] what!?!!!11". Well, programmers are humans, and to understand what a programmer wrote, the programmer has to be able to read back his/her code and interpret it the way it will be run at execution time. This is essential to be able to spot bugs early on and to be able to write more solid code. This leads to the conclusion that it is essential that the human being reading a piece of sourcecode gets all the help possibly available.
Let's do a little test here.
Example A:
public class Foo: Bar
{
// great code goes here
}
Example B:
public class Foo: IBar
{
// great code goes here
}
In example A, is Bar a class or an interface? Pretty much everyone will say: "Class". Though in example B, pretty much everyone will say IBar is an interface. This is because of the little hint available in example B which is absent in example A: the I prefix.
The "I" prefix helps the reader of the code to solve ambiguity in the code: is 'Bar' a class or an interface? And solving ambiguity is the key aspect of making code more readable: as long as there's no question what a piece of code represents, the reader of the code won't make mistakes interpreting it and thus less bugs will be the result (unless the developer is not that erm... skilled, but that's a different topic ).
"But... why not use hungarian coding as well then?". That's a fair question, and I think that hungarian coding as a coding style isn't that bad at all, as long as it is consistently used and everything is defined up front before programming begins. The main drawback of hungarian coding shows up when a variable changes type. For example: you first return an ArrayList from a property getter, but now you have this killer BindableArrayList and you return that from the property now, after refactoring. Your variable receiving the property value now has the wrong prefix. This is cumbersome and can lead to sloppiness, where the prefixes aren't updated. To prevent this, prefixes of variable names aren't recommended.
With interface definitions and class definitions this is different. An interface isn't something you would change into a class all of a sudden or vice versa: an interface represents a type definition, a class represents a type implementation and a type definition. That's also why for example abstract classes don't have a prefix 'A', as they are in the same league as a normal class and can easily be converted to a non-abstract class if you want to, which means that you then have to remove/add prefixes everywhere you use that class. Nevertheless, you could opt for that prefix of course, if it helps you understand the code better. Though I disagree with Ingo that an abstract class is close to an interface. An abstract class contains a type implementation, something an interface always lacks. It's that implementation which makes an abstract class worth using, as a common base class for example, which isn't fully implemented but contains the necessary plumbing code for a given interface (be it the class type or an implemented interface or set of interfaces).
Now, of course, using an "I" prefix for interfaces and an "Attribute" suffix for attributes isn't very consistent. Why not use an "Interface" suffix on interfaces instead? Or an "A" prefix for attributes? As Ingo explains: the "I" prefix comes from the COM world, so it is, in a way, a form of tradition, and something a lot of people are already familiar with. So in a way it makes sense to inherit that aspect: it's a helper hint a lot of people understand and thus makes reading code more reliable for these people. After all, we lousy interpreters can use every help we can get.
Posted Thursday, June 23, 2005 10:45 AM by FransBouma
I am trying to set up a Linux router for the first time and I am struggling with the setup.
Here is how I want to set it up: ISP line -> Linux router -> Linksys router -> LAN.
Linux router has eth0 and eth1
How do I set this up, and where do I put my external IP?
Many thanks,
Having just such a setup at home, I think I know how to do this.
Your linux router will have two physical interfaces. I'll call them eth0 (connected to your inside network and with a static IP address) and eth1 (connected to your ISP, and presumably an address provided via DHCP).
In file /etc/sysctl.conf, there may be two lines matching the following:
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
If not, you will need to add at least the last line. Here's an important piece: change the 0 to a 1. That tells the kernel, down deep, to send packets from one interface to another if the routing tables on the linux router tell it that it's the next step. You will then need to either reboot, or run the following command: echo 1 > /proc/sys/net/ipv4/ip_forward
Right now, everything going out either eth0 or eth1 is going out with the same IP address that it comes in with. So Google will get pings from 192.168.1.x (or whatever your IP scheme is). Trouble with that is, those IP addresses can't be routed across the public internet. So you will have to tell your Linux router to modify outgoing packets so that they can be routed back to you. I have done so with the following rule:
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
This tells the system that, after it's done all the routing (because it's in the POSTROUTING iptables chain), and if the outgoing interface is eth1 ("-o eth1"), then apply target MASQUERADE. This means "change the source IP address to be the IP address for the outgoing interface."
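What the MASQUERADE target does to each outgoing packet can be pictured as a small rewrite step. The sketch below is illustrative Python only (nothing like the real netfilter code), and the public address 203.0.113.7 is a made-up placeholder:

```python
ETH1_IP = "203.0.113.7"  # hypothetical public address assigned to eth1

def masquerade(packet, out_iface):
    """Rewrite the source address of packets leaving via eth1,
    leaving traffic on other interfaces untouched."""
    if out_iface == "eth1":
        packet = dict(packet, src=ETH1_IP)
    return packet

pkt = {"src": "192.168.1.55", "dst": "8.8.8.8"}
print(masquerade(pkt, "eth1")["src"])  # 203.0.113.7 - NATed
print(masquerade(pkt, "eth0")["src"])  # 192.168.1.55 - unchanged
```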
At this point, your system is doing the basics. You will, however, have to set up each connected system to have a static IP & point to external DNS servers. This can be changed with a package called dhcp. Install it, and set it to boot at start time. On my Red Hat-based system, this can be done with two commands: yum install dhcp and chkconfig dhcpd on. However, it won't do anything because you haven't configured DHCP as to what your IP scheme is and what interfaces it should listen on (although I could be wrong). Below is what your /etc/dhcpd.conf could look like:
#
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
ddns-update-style interim;
#include "/var/named/chroot/etc/rndc.key";
subnet 192.168.1.0 netmask 255.255.255.0
{
authoritative;
range 192.168.1.10 192.168.1.100;
option routers 192.168.1.1;
option domain-name-servers 192.168.1.1;
}
max-lease-time 14400; #4 hours
default-lease-time 14400; #4 hours
A few key points here:
- To use OpenDNS resolvers: option domain-name-servers 208.67.222.222, 208.67.220.220;
- To use Google's public resolvers: option domain-name-servers 8.8.8.8, 8.8.4.4;
- To point clients at the router's own DNS server: option domain-name-servers 192.168.1.1;
You can now start it by doing (on a Red Hat-based system) service dhcpd start as root. If you're not using Red Hat or a derivative, then you will need to run the startup script for that system.
The lease time is defined in seconds. At least according to the documentation I've been able to find, sometimes clients will ask for a specific lease duration, in which case the max-lease-time and min-lease-time statements are checked and adjusted to fit within those boundaries. Other times, clients won't ask for a lease duration, in which case the default-lease-time is used.
This is safe in terms of not serving other clients of your ISP with your internal network DHCP because DHCPD will not serve an address if it does not know about the IP scheme of the interface it came in on. So if a dhcp request comes in on eth1, which has an IP of 123.45.67.89, the DHCP setup doesn't have a subnet block for that IP. So it won't send out any DHCP offers for that request. But if it comes in on eth0, which has an IP of 192.168.1.1, it does have a subnet block that matches that address, and it does offer DHCP.
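The interface-matching behaviour described above can be sketched in a few lines of Python (purely illustrative; this is not dhcpd's actual logic):

```python
import ipaddress

# Subnets declared in dhcpd.conf (just the one block from the example above)
declared = [ipaddress.ip_network("192.168.1.0/24")]

def should_offer(interface_ip):
    """Offer a lease only if the address of the interface the request
    arrived on falls inside a declared subnet."""
    addr = ipaddress.ip_address(interface_ip)
    return any(addr in net for net in declared)

print(should_offer("192.168.1.1"))   # True  - request arriving on eth0
print(should_offer("123.45.67.89"))  # False - request arriving on eth1
```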
DNS might be the simplest piece of all. On my RHEL 5.1 system, it was install, start, and point clients at it. Out of the box, it's configured to point at the root name servers and serve clients on any interface that is active at DNS startup.
To install, keep in mind that it's not dnsd, it's named. It's not the past tense of naming. Instead, read it as "name-D".
yum install named #installation
service named start #start it for right now
chkconfig named on #set it to start at system boot.
You might want to consider setting up a Linux-based router distribution such as Smoothwall (). It's designed specifically for router use and is far easier to set up than rolling your own.
import "github.com/klauspost/password"
Dictionary Password Validation for Go
For usage and examples see: (or open the README.md)
This library will help you import a password dictionary and will allow you to validate new/changed passwords against the dictionary.
You are able to use your own database and password dictionary. Currently the package supports importing dictionaries similar to CrackStation's Password Cracking Dictionary:
It has "drivers" for various backends; see the "drivers" directory, where there are implementations and a test framework that will help you test your own drivers.
BulkMax is the maximum number of passwords sent at once to the writer. You can change this before starting an import.
ErrInvalidString is returned by the default sanitizer if the string contains an invalid utf8 character sequence.
ErrPasswordInDB is returned by Check if the password is in the database.
ErrSanitizeTooShort is returned by the default sanitizer, if the input password is less than 8 runes.
Logger used for output during Import. This can be exchanged with your own.
Check a password against the database. It will return an error if:
- Sanitization fails.
- DB lookup returns an error.
- Password is in database (ErrPasswordInDB).
If nil is passed as Sanitizer, DefaultSanitizer will be used.
Import will populate a database with common passwords.
You must supply a Tokenizer (see tokenizer package for default tokenizers) that will deliver the passwords, a DbWriter, where the passwords will be sent, and finally a Sanitizer to clean up the passwords; if you send nil, DefaultSanitizer will be used.
Code:
r, err := os.Open("./testdata/testdata.txt.gz")
if err != nil {
	panic("cannot open file")
}
// Create a database to write to
mem := testdb.NewMemDBBulk()
// The input is gzipped text file with
// one input per line, so we choose a tokenizer
// that matches.
in, err := tokenizer.NewGzLine(r)
if err != nil {
	panic(err)
}
// Import using the default sanitizer
err = Import(in, mem, nil)
if err != nil {
	panic(err)
}
// Data is now imported, let's do a check
// Check a password that is in the sample data
err = Check("tl1992rell", mem, nil)
fmt.Println(err)
Output:
password found in database
Open a xz compressed archive and import it. Uses the "xi2.org/x/xz" package to read xz files.
Code:
r, err := os.Open("rockyou.txt.xz")
if err != nil {
	// Fake it
	fmt.Println("Imported", 9341543, "items")
	return
}
xzr, err := xz.NewReader(r, 0)
if err != nil {
	panic(err)
}
mem := testdb.NewMemDBBulk()
in := tokenizer.NewLine(xzr)
err = Import(in, mem, nil)
if err != nil {
	panic(err)
}
fmt.Println("Imported", len(*mem), "items")
Output:
Imported 9341543 items
Sanitize will sanitize a password, useful before hashing and storing it.
If the sanitizer is nil, DefaultSanitizer will be used.
SanitizeOK can be used to check if a password passes the sanitizer.
If the sanitizer is nil, DefaultSanitizer will be used.
If your DbWriter implements this, input will be sent in batches instead of using Add.
A DB should check the database for the supplied password. The password sent to the interface has always been sanitized.
A DbWriter is used for adding passwords to a database. Items sent to Add has always been sanitized, however the same passwords can be sent multiple times.
A Sanitizer should prepare a password, and check the basic properties that should be satisfied. For an example, see DefaultSanitizer
DefaultSanitizer should be used for adding passwords to the database. Assumes input is UTF8.
DefaultSanitizer performs the following sanitazion:
- Trim space, tab and newlines from start+end of input.
- Check that there are at least 8 runes; return ErrSanitizeTooShort if not.
- Check that the input is valid utf8; return ErrInvalidString if not.
- Normalize input using Unicode Normalization Form KD.
If input is less than 8 runes ErrSanitizeTooShort is returned.
This example shows how to create a custom sanitizer that checks if the password matches the username or email.
CustomSanitizer is defined as:
type CustomSanitizer struct {
	email    string
	username string
}

func (c CustomSanitizer) Sanitize(s string) (string, error) {
	s, err := DefaultSanitizer.Sanitize(s)
	if err != nil {
		return "", err
	}
	if strings.EqualFold(s, c.email) {
		return "", errors.New("password cannot be the same as email")
	}
	if strings.EqualFold(s, c.username) {
		return "", errors.New("password cannot be the same as user name")
	}
	return s, nil
}
Code:
// Create a custom sanitizer.
san := CustomSanitizer{email: "john@doe.com", username: "johndoe73"}
// Check some passwords
err := SanitizeOK("john@doe.com", san)
fmt.Println(err)
err = SanitizeOK("JohnDoe73", san)
fmt.Println(err)
err = SanitizeOK("MyP/|$$W0rd", san)
fmt.Println(err)
Output:
password cannot be the same as email
password cannot be the same as user name
<nil>
Tokenizer delivers input tokens (passwords). Calling Next() should return the next password, and when finished io.EOF should be returned.
It is ok for the Tokenizer to send empty strings and duplicate values.
Package password imports 8 packages (graph) and is imported by 1 package. Updated 2016-07-21.
People who want to see 4-space indentation on Unix may have no choice
but to mix spaces and tabs -- most editors' auto-indent mode optimizes
8 spaces into a tab.
My recommendation is to always use tabs on the Mac -- then it will
look good on the Mac and at least parse correctly everywhere.
For the same reason I recommend always using tabs on Unix as well
(thus indenting by 8 positions there), but the majority of Python
users seem to be against me.
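For comparison: modern Python 3, long after this discussion, refuses to guess. Indentation that is inconsistent between tabs and spaces raises TabError at compile time:

```python
# A tab-indented line followed by an 8-space-indented line at the same
# block level: equivalent under tab size 8, different under other sizes.
ambiguous = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(ambiguous, "<example>", "exec")
    print("compiled")
except TabError as e:
    print("TabError:", e)  # inconsistent use of tabs and spaces

# Indenting consistently (all tabs, or all spaces) compiles fine:
compile("if True:\n\tx = 1\n\ty = 2\n", "<example>", "exec")
```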
Note that the Mac code contains a hack which attempts to get the tab
size from an editor "ETAB" resource but I don't know how succesful
this is since there are many different editors around... (Code
attached, comments welcome.)
--Guido van Rossum, CWI, Amsterdam <Guido.van.Rossum@cwi.nl>
URL: <>
======================= macguesstabsize.c =======================
#include <MacHeaders>
#include <string.h>
/* Interface used by tokenizer.c */
guesstabsize(path)
	char *path;
{
	char s[256];
	int refnum;
	Handle h;
	int tabsize = 0;

	s[0] = strlen(path);
	strncpy(s+1, path, s[0]);
	refnum = OpenResFile(s);
	/* printf("%s --> refnum=%d\n", path, refnum); */
	if (refnum == -1)
		return 0;
	UseResFile(refnum);
	h = GetIndResource('ETAB', 1);
	if (h != 0) {
		tabsize = (*(short**)h)[1];
		/* printf("tabsize=%d\n", tabsize); */
	}
	CloseResFile(refnum);
	return tabsize;
}
=================================================================
> Do you really mean to calculate the 'sin . sqrt' of just the head of the
> list, or do you mean:
> calculateSeq = map (sin . sqrt) ?
Argh.. of course not! That's what you get when you code in the middle of a
night. But in my code I will not be able to use map because elements will be
processed in pairs, so let's say that my sequential function looks like this:

calculateSeq :: [Double] -> [Double]
calculateSeq [] = []
calculateSeq [x] = [sin . sqrt $ x]
calculateSeq (x:y:xs) = (sin . sqrt $ x) : (cos . sqrt $ y) : calculateSeq xs

> I don't think there's a memory leak. It looks more like you're just
> allocating much more than is sane for such a simple function.
> On a recent processor, sin . sqrt is two instructions. Meanwhile, you have
> a list of (boxed?) integers being split up, then recombined. That's bound
> to hurt the GC.
I am not entirely convinced that my idea of using eval+strategies is bound to
be slow, because there are functions like parListChunk that do exactly this:
split the list into chunks, process them in parallel and then concatenate the
result. Functions in Control.Parallel.Strategies were designed to deal with
lists so I assume it is possible to process lists in parallel without GC
problems. However I do not see a way to apply these functions in my setting
where elements of lists are processed in pairs, not one at a time (parList
and parMap will not do). Also, working on a list of tuples will not do.

> Also, you might want to configure criterion to GC between
> runs. That might help.
The -g flag passed to criterion executable does that.

> What I'd suggest doing instead, is breaking the input into chucks of, say,
> 1024, and representing it with a [Vector]. Then, run your sin.sqrt's on
> each vector in parallel. Finally, use Data.Vector.concat to combine your
> result.
As stated in my post scriptum I am aware of that solution :) Here I'm trying
to figure out what I am doing wrong with Eval.

Thanks!
Janek

> Hope that helps,
> - Clark
>
> On Wed, Nov 14, 2012 at 4:43 PM, Janek S. <fremenzone at poczta.onet.pl> wrote:
>
> > lst <- go bs
> > return (lsh ++ lst)
> > where
> > !(as, bs) = splitAt 8192 xs
> >
> > Compiling and running with:
> >
> > ghc -O2 -Wall -threaded -rtsopts -fforce-recomp -eventlog evalleak.hs
> > ./evalleak -oreport.html -g +RTS -N2 -ls -s
> >
> > I get:
> >
> > benchmarking Seq
> > mean: 100.5990 us, lb 100.1937 us, ub 101.1521 us, ci 0.950
> > std dev: 2.395003 us, lb 1.860923 us, ub 3.169562 us, ci 0.950
> >
> > benchmarking Par
> > mean: 2.233127 ms, lb 2.169669 ms, ub 2.296155 ms, ci 0.950
> > std dev: 323.5201 us, lb 310.2844 us, ub 344.8252 us, ci 0.950
> >
> > That's a hopeless result. Looking at the spark allocation everything
> > looks fine:
> >
> > SPARKS: 202 (202 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
> >
> > But analyzing eventlog with ThreadScope I see that parallel function
> > spends most of the time doing garbage collection, which suggests that I
> > have a memory leak somewhere. I suspected that problem might be caused
> > by appending two lists together in the parallel implementation, but
> > replacing this with difference lists doesn't help. Changing granularity
> > (e.g. splitAt 512) also brings no improvement. Can anyone point me to
> > what am I doing wrong?
> >
> > Janek
> >
> > PS. This is of course not a real world code - I know that I'd be better
> > of using unboxed data structures for doing computations on Doubles.
> >
> > _______________________________________________
> > Haskell-Cafe mailing list
> > Haskell-Cafe at haskell.org
Interfacing C code with Cython
As Cython code compiles down to C code, it is relatively straightforward to use Cython also for interfacing with C.
In order to use C functions and variables from the library, one must provide external declarations for them in the Cython .pyx file. While normal cdef declarations refer to functions and variables that are defined in the same file, by adding the extern keyword one can specify that they are defined elsewhere. As an example, by having in a .pyx file the statement

cdef extern void add(double *a, double *b, double *c, int n)
one could call the add function within that file. In addition, the actual library or source implementing the function needs to be included in setup.py when building the extension module with Cython.
With the above construct, Cython will add the declaration to the generated .c file. However, when using libraries it is preferable to have the actual library header included in the generated file. This can be achieved with the following construct:
cdef extern from "mylib.h":
    void add(double *a, double *b, double *c, int n)
    void subtract(double *a, double *b, double *c, int n)
Now, the mylib.h header is included in the generated .c file, while the statements in the block specify that the functions add and subtract can be used within the .pyx file. It is important to understand that Cython does not itself read the C header file, so you still need to provide declarations for everything you use from it.
Including the library in setup.py
Compared to building simple pure Cython modules, one has to provide some extra information in the setup.py. If the code to be interfaced is in a library (i.e. .so) one can use the following type of setup.py:
from distutils.core import setup, Extension
from Cython.Build import cythonize

ext = Extension("module_name",
                sources=["cython_source.pyx",],
                libraries=["name",],     # Cython module is linked against
                library_dirs=[".",])     # libname.so, looked in "."

setup(ext_modules=cythonize(ext))
One can also use direct C source files if more appropriate:
from distutils.core import setup, Extension
from Cython.Build import cythonize

# Specify all sources in Extension object
ext = Extension("module_name",
                sources=["cython_source.pyx", "c_source.c"])

setup(ext_modules=cythonize(ext))
Passing NumPy arrays from Cython to C
Similarly as when using CFFI to pass NumPy arrays into C, also in the case of Cython one needs to be able to pass a pointer to the "data area" of an array. For arrays that are declared as type ndarray, Cython supports a similar & syntax as in C:
import numpy as np
cimport numpy as cnp

cdef extern from "myclib.h":
    void add(double *a, double *b, double *c, int n)
    void subtract(double *a, double *b, double *c, int n)

def add_py(cnp.ndarray[cnp.double_t, ndim=1] a,
           cnp.ndarray[cnp.double_t, ndim=1] b):
    # Allocate the result array and pass pointers to the data areas
    cdef cnp.ndarray[cnp.double_t, ndim=1] c = np.empty_like(a)
    add(&a[0], &b[0], &c[0], len(a))
    return c
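What &a[0] hands to the C function is simply the address of the first element of the array's contiguous data area. The idea can be illustrated with the standard library alone; array and ctypes stand in here for NumPy and Cython:

```python
from array import array
import ctypes

a = array('d', [1.0, 2.0, 3.0])
addr, n = a.buffer_info()   # address of the data area, number of elements

# View the same memory as a C double[3]
c_view = (ctypes.c_double * n).from_address(addr)
print(list(c_view))         # [1.0, 2.0, 3.0]

c_view[0] = 42.0            # a C callee writing through the pointer...
print(a[0])                 # ...is seen by the Python-side object: 42.0
```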
© CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd.
Opened 6 years ago
Closed 6 years ago
#17540 closed Uncategorized (invalid)
Errors in 1.3 Writing your first Django app part 1 and 2 Ubuntu 11.10
Description
I recently tried to work through your tutorial for Django 1.3 on Ubuntu 11.10. The first error I found was in part 1 where the new app was added to installed apps and there is a missing comma ',':
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'polls'
)
and should read:
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'polls',
)
The second error was in part 2 in the section "Make the poll app modifiable in the admin". It has a misplaced 'models' in the first import and reads:
from polls.models import Poll
from django.contrib import admin

admin.site.register(Poll)
and should read:
from models import Poll
from django.contrib import admin

admin.site.register(Poll)
I believe the second issue could be also fixed by describing the expected project structure in the tutorial.
These errors were discovered when setting up a Django, mod_wsgi, Apache configuration on Ubuntu 11.10.
In the first case, the comma at the end is not required.
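The point about the comma is easy to verify in a Python shell: a trailing comma in a multi-element tuple literal changes nothing, while for a one-element tuple it is the comma, not the parentheses, that creates the tuple:

```python
without = ('django.contrib.auth', 'polls')
with_comma = ('django.contrib.auth', 'polls',)
print(without == with_comma)        # True: trailing comma is optional here

print(('polls') == 'polls')         # True: parentheses alone give a string
print(type(('polls',)) is tuple)    # True: the comma makes the tuple
```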
In the second case, doing "from polls.models import Poll" will work fine, and is definitely preferred over "from models import Poll" (which is using implicit relative imports, which are going away). It is likely your Python path has been incorrectly set up. The setup for 1.3 and earlier is a bit confusing due to Python path hacking that was done in those older versions of Django, but has now been corrected.
Tutorial 1 does indeed document what the project structure will look like.
eea.app.visualization 7.8
- Dependencies
- Live demo
- Source code
- Links
- Funding
- Changelog
- 7.8 - (2013-10-24)
- 7.7 - (2013-09-19)
- 7.6 - (2013-09-10)
- 7.5 - (2013-06-17)
- 7.4 - (2013-05-20)
- 7.3 - (2013-04-17)
- 7.2 - (2013-03-18)
- 7.1 - (2013-03-15)
- 7.0 - (2013-02-25)
- 6.5 - (2013-02-04)
- 6.4 - (2013-01-18)
- 6.3 - (2013-01-15)
- 6.2 - (2013-01-10)
- 6.1 - (2012-11-09)
- 6.0 - (2012-10-08)
- 4.7 - (2012-08-27)
- 4.6 - (2012-08-10)
- 4.5 - (2012-07-23)
- 4.4 - (2012-07-13)
- 4.3 - (2012-06-12)
- 4.2 - (2012-04-26)
- 4.1 - (2012-04-11)
- 4.0 - (2012-03-30)
- collective.js.jqueryui < 1.9 (Plone 4.0, 4.1, 4.2)
- collective.js.jqueryui > 1.9 (Plone 4.3+)
7.8 - (2013-10-24)
- Bug fix: Fixed modal's close button ui issues [tiberich #16928]
7.7 - (2013-09-19)
- Change: Removed eea.jquery.js from jsregistry as it's no more required in Plone 4.3
7.6 - (2013-09-10)
- Bugfix: Increased max_length for column names; Added migration step [szabozo0 refs #16684]
7.5 - (2013-06-17)
- Feature: Package localization enhanced [lepri]
- Feature: Changed data provenance to allow multiple data provenances [szabozo0 #9561]
7.4 - (2013-05-20)
- Bug fix: Added CSV UnicodeWriter as by default python csv module doesn't know how to write unicode (see) [voineali refs #14360]
- Feature: Also support content-type='text/html' as some external JSON/TSV external URLs doesn't correctly set response headers [voineali refs #14360]
- Feature: Removed lovely.memcached dependency [voineali refs #14343]
7.3 - (2013-04-17)
- Bug fix: Redirect to daviz-edit.html only when users add new visualizations [avoinea]
- Bug fix: Wrap visualization info and download section within daviz-view.html with a div container in order to easily customize theme these sections [avoinea]
7.2 - (2013-03-18)
- Bug fix: Remove collective.js.jqueryui < 1.9 pin as it make this package unusable with Plone 4.3+ [avoinea]
7.1 - (2013-03-15)
- Change: Moved eea.exhibit specific code to eea.exhibit package and added API to easily insert HTML code within daviz-view.html head element. See IVisualizationViewHeader [voineali refs #14003]
- Feature: Support all Simile Exhibit facets [voineali refs #10007]
7.0 - (2013-02-25)
- Feature: added information for contributors [ciobabog refs #13892]
- Upgrade step: Within "Plone > Site setup > Add-ons" click on upgrade button available for eea.app.visualization
- Feature: Possibility to disable daviz views per Content-Type. See Site Setup > Daviz Visualization > Enable / Disable [voineali]
- Change: Refactoring ZCML slugs for daviz:view and daviz:edit. See eea.app.visualisation.views.data.configure.zcml for examples. [voineali]
- Change: Refactoring "Data settings" as a daviz:view in order to easily disable it if necessary [voineali]
- Change: Use jQuery tabs for "Daviz Visualization Settings" within Plone ControlPanel [voineali]
- Bug fix: Fix "embed" and "export to png" buttons CSS [voineali]
- Bug fix: Improved CSV dialect detection for files with a lot of missing values [voineali fixes #13851]
- Bug fix: Fixed daviz.json for uploaded files (.tsv, .csv) [voineali]
- Feature: Upgraded to Simile Exhibit 3.0 [voineali refs #13807]
6.5 - (2013-02-04)
- Feature: Handling specific annotations for data values [voineali refs #9558]
6.4 - (2013-01-18)
- Bug fix: Fixed fix_column_labels upgrade step from version 6.2 [szabozo0]
- Bug fix: Fixed table layout [szabozo0]
6.3 - (2013-01-15)
- Feature: Added italian translations [simahawk]
6.2 - (2013-01-10)
- Upgrade step: Within "Plone > Site setup > Add-ons" click on upgrade button available for eea.app.visualization
- Change: Moved data annotations to Daviz settings Control Panel [voineali refs #9558]
- Change: Cleanup old 'sections' code [avoinea]
- Feature: Added confirm dialog in order to prevent accidentally disable of visualizations [voineali refs #9572]
- Feature: Support non-ASCII datasets (column headers and body) [voineali refs #9610, #10168]
- Bug fix: Fixed download.(csv, tsv, html) methods for non-ASCII data [voineali refs #9610, #10168]
- Change: Move column label settings from facet annotations directly to JSON [voineali refs $9610]
- Feature: On saving a chart, copy the generic chart image in the visualization [szabozo0 refs #10019]
- Change: Added a common.js and common.css in order to reuse common components [voineali refs #9610]
- Bug fix: add namespace declaration for exhibit (makes Chameleon happy) [simahawk]
6.1 - (2012-11-09)
- Feature: Added i18n translations [avoinea]
- Feature: Display image when javascript is disabled [szabozo0 refs #9562]
- Upgrade step: Within "Plone > Site setup > Add-ons" click on upgrade button available for eea.app.visualization
- Feature: Added utilities to get and convert external URL to data ready for visualization [voineali refs #9576]
- Feature: Added "year" column type in order to format dates columns as years [voineali refs #9583]
- Change: Use SlickGrid jQuery plug-in to manipulate data table within Edit Visualization > Data settings Tab [avoinea refs #5599, #5625]
- Bugfix: Fixed KSS issues in daviz controlpanel [szabozo0 refs #5616]
- Feature: Made plone collection as daviz data source [avoinea refs #5604]
- Bug fix: Fixed 'Enable View' button CSS [avoinea]
- Bug fix: Fixed 'Data settings' table when there are many columns by adding a bottom scrollbar [voineali refs #5363]
- Change: Moved 'Data settings' tab to the end as it seems it confuses users about the next steps they have to take in order to create new visualizations [voineali refs #5363]
- Bug fix: Made table's columns headers editable within 'Data settings' panel in order to be able to edit them without having to add an Exhibit View [voineali refs #5363]
4.4 - (2012-07-13)
- Change: Improved the labelling and display of downloadable data. [demarant]
- Bug fix: Added list type in 'Data table (preview)' in order to be used with Exhibit framework and also fix detection of columns that explicitly define column type in header using ':'
- Bug fix: Fixed error 'unicode' object has no attribute 'get'
- Author: Alin Voinea (Eau de Web)
- Download URL:
- Keywords: eea app visualization daviz
IOASID was introduced in v5.5 as a generic kernel allocator service for both PCIe Process Address Space ID (PASID) and ARM SMMU's Sub Stream ID. In addition to basic ID allocation, ioasid_set was introduced as a token that is shared by a group of IOASIDs. This set token can be used for permission checking, but it lacks some features needed by guest Shared Virtual Address (SVA). In addition, IOASID support for life cycle management is needed among multiple users.
This patchset introduces two extensions to the IOASID code:
1. IOASID set operations
2. Notifications for IOASID state synchronization

Part #1:
IOASIDs used by each VM fit naturally into an ioasid_set. The usage for
per-set management requires the following features:

- Quota enforcement - This is to prevent one VM from abusing the
  allocator to take all the system IOASIDs. Though the VFIO layer can
  also enforce the quota, it cannot cover the usage with both guest and
  host SVA on the same system.

- Stores guest IOASID-Host IOASID mapping within the set. To support
  live migration, the IOASID namespace should be owned by the guest.
  This requires per-set lookup between guest and host IOASIDs. This
  patchset does not introduce non-identity guest-host IOASID lookup, we
  merely introduce the infrastructure in per-set data.

- Set level operations, e.g. when a guest terminates, it is likely to
  free the entire set. Having a single place to manage the set where the
  IOASIDs are stored makes iteration much easier.

New APIs are:

- void ioasid_install_capacity(ioasid_t total);
  Set the system capacity prior to any allocations. On x86, the VT-d
  driver calls this function to set the max number of PASIDs, typically
  1 million (20 bits).

- int ioasid_alloc_system_set(int quota);
  The host system has a default ioasid_set; during boot it is expected
  that this default set is allocated with a reasonable quota, e.g.
  PID_MAX. This default/system set is used for baremetal SVA.

- int ioasid_alloc_set(struct ioasid_set *token, ioasid_t quota, int *sid);
  Allocate a new set with a token; the returned sid (set ID) will be
  used to allocate IOASIDs within the set. Allocation of IOASIDs cannot
  exceed the quota.

- void ioasid_free_set(int sid, bool destroy_set);
  Free the entire set and notify all users, with an option to destroy
  the set. The set ID can be used for allocation again if not destroyed.

- int ioasid_find_sid(ioasid_t ioasid);
  Look up the set ID from an ioasid.
  There is no reference held; this assumes the set has a single owner.
- int ioasid_adjust_set(int sid, int quota);
  Change the quota of the set. The new quota cannot be less than the number of IOASIDs already allocated within the set. This is useful when IOASID resources need to be balanced among VMs.

Part #2: Notification service.
Since IOASIDs are used by many consumers that follow a publisher-subscriber pattern, notification is a natural choice to keep states synchronized. For example, on an x86 system, guest PASID allocation and bind calls result in VFIO IOCTLs that can add and change guest-host PASID states. At the same time, the IOMMU driver and KVM need to maintain their own PASID contexts. In this case, VFIO is the publisher within the kernel; the IOMMU driver and KVM are the subscribers.

This patchset introduces a global blocking notifier chain and APIs to operate on it. Not all events nor all IOASIDs are of interest to all subscribers; e.g. KVM is only interested in the IOASIDs within its set, and the IOMMU driver is not ioasid_set aware. A further optimization could be having both global and per-set notifiers. But considering the infrequent nature of bind/unbind and the relatively long process life cycle, this optimization may not be needed at this time.
To register/unregister notification blocks, use these two APIs:
- int ioasid_add_notifier(struct notifier_block *nb);
- void ioasid_remove_notifier(struct notifier_block *nb);

To send a notification on an IOASID with one of the commands (FREE, BIND/UNBIND, etc.), use:
- int ioasid_notify(ioasid_t id, enum ioasid_notify_val cmd);

This work is a result of collaboration with many people:
Liu, Yi L <yi.l....@intel.com>
Wu Hao <hao...@intel.com>
Ashok Raj <ashok....@intel.com>
Kevin Tian <kevin.t...@intel.com>

Thanks,
Jacob

Jacob Pan (10):
  iommu/ioasid: Introduce system-wide capacity
  iommu/vt-d: Set IOASID capacity when SVM is enabled
  iommu/ioasid: Introduce per set allocation APIs
  iommu/ioasid: Rename ioasid_set_data to avoid confusion with ioasid_set
  iommu/ioasid: Create an IOASID set for host SVA use
  iommu/ioasid: Convert to set aware allocations
  iommu/ioasid: Use mutex instead of spinlock
  iommu/ioasid: Introduce notifier APIs
  iommu/ioasid: Support ioasid_set quota adjustment
  iommu/vt-d: Register PASID notifier for status change

 drivers/iommu/intel-iommu.c |  20 ++-
 drivers/iommu/intel-svm.c   |  89 ++++++++--
 drivers/iommu/ioasid.c      | 387 +++++++++++++++++++++++++++++++++++++++-----
 include/linux/intel-iommu.h |   1 +
 include/linux/ioasid.h      |  86 +++++++++-
 5 files changed, 522 insertions(+), 61 deletions(-)

--
2.7.4
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
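The quota-enforcement semantics described in Part #1 can be illustrated with a small self-contained model. This is not the kernel code: the names merely mirror the proposed API, the types are simplified (plain int instead of ioasid_t), and there is no locking.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the proposed kernel API (illustration only). */
#define MAX_SETS 4

struct ioasid_set_model {
    int quota;      /* max IOASIDs this set may hold */
    int used;       /* IOASIDs currently allocated   */
    int in_use;     /* set slot allocated?           */
};

static struct ioasid_set_model sets[MAX_SETS];
static int capacity;          /* system-wide capacity */
static int allocated_total;   /* IOASIDs handed out across all sets */

/* Set the system capacity prior to any allocations. */
void ioasid_install_capacity(int total) { capacity = total; }

/* Allocate a set with a quota; returns 0 and fills *sid, or -1. */
int ioasid_alloc_set(int quota, int *sid)
{
    for (int i = 0; i < MAX_SETS; i++) {
        if (!sets[i].in_use) {
            sets[i] = (struct ioasid_set_model){ .quota = quota, .in_use = 1 };
            *sid = i;
            return 0;
        }
    }
    return -1;
}

/* Allocate one IOASID within a set; enforces both the per-set quota
 * and the system-wide capacity. Returns 0 on success, -1 otherwise. */
int ioasid_alloc(int sid)
{
    struct ioasid_set_model *s = &sets[sid];
    if (!s->in_use || s->used >= s->quota || allocated_total >= capacity)
        return -1;
    s->used++;
    allocated_total++;
    return 0;
}

/* Shrinking a quota below the number already allocated must fail. */
int ioasid_adjust_set(int sid, int quota)
{
    if (quota < sets[sid].used)
        return -1;
    sets[sid].quota = quota;
    return 0;
}
```

With this model, a VM that exhausts its quota gets -1 from ioasid_alloc even though system capacity remains, which is exactly the isolation property the cover letter describes.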
Episerver Commerce includes an extensive, system-wide search engine. You can make any information available to the search engine, even if that information is not in the Episerver Commerce database. Some systems, such as Catalog and Orders, have additional search features.
How it works
Episerver Commerce has its own pluggable API for search providers. The search engine is based on the ASP.NET provider model, meaning you can write your own providers for search servers. Episerver Commerce comes with providers for Lucene and SOLR.
The search provider system is used to support various UI components, including the different search features in Commerce Manager and the search in the Catalog UI.

It can also be used to build user-facing features in the site implementation. An even more powerful approach is to integrate Episerver Commerce with Episerver Find to create more advanced search-based navigation and filtering for websites.
Classes in this topic are available in the following namespaces:
- EPiServer.DataAnnotations
- Mediachase.Search
- Mediachase.Search.Extensions
- Mediachase.Search.Extensions.Indexers
Architecture
Episerver Commerce has a layer built on top of Lucene for easy integration. A MediachaseSearch layer provides a base layer to simplify the interface. This base creates an index for any data source, while you still have access to the Lucene classes. Searchable data is written to a file index.
SearchExtensions provide catalog indexing and a searching implementation using the MediachaseSearch layer, which is built into Commerce Manager and includes several controls making use of this implementation. You can create your own search implementation on top of MediachaseSearch.
Indexing
To make data available, you must first index it by calling the Mediachase.Search.SearchManager.BuildIndex(bool rebuild) method. Clicking the Build or Rebuild buttons in Commerce Manager calls this method.
The method is handled by the provider that is currently configured. The provider is passed an ISearchDocument containing the properties that need to be indexed. You can replace the indexer completely, or extend the existing indexer by inheriting from the CatalogIndexBuilder class and overriding the OnCatalogEntryIndex method. You can also add new indexers.
By default, the indexer only populates fields that are marked searchable in the metafield configuration, or decorated with the [Searchable] attribute on catalog content models, and some system fields like price, name, and code. Depending on the provider, additional configuration changes need to be made to index those fields.
Calling BuildIndex with rebuild = false only adds indexes that changed since the last index was created. The system tracks when the last indexing was performed using the .build file. The location of the .build file is configured in the Mediachase.Search.config file for each indexer defined.
Example: build index command
SearchManager searchManager = new SearchManager(applicationName);
searchManager.BuildIndex(false);
Catalog metadata fields have options that let you specify:
- Whether a field is added to the index. This alone does not make the field searchable.
- Whether a field value is stored or not. Stored = stored in an uncompressed format. Not stored = putting the value into the index in a compressed format. You only store a value if you will use it as part of the text displayed in search results.
- Whether a field is tokenized. A field must be tokenized to be searchable. When a field is tokenized, the text is broken into individual, searchable words, and common words are omitted.
Searching
Once the data is indexed, searches are run against this index. Searching is done by calling the ISearchResults Mediachase.Search.SearchManager.Search(ISearchCriteria criteria) method. The method call is handled by the configured search provider and returns the ISearchResults interface.
Example: simple catalog search
CatalogEntrySearchCriteria criteria = new CatalogEntrySearchCriteria();
criteria.SearchPhrase = "canon";
SearchManager manager = new SearchManager(AppContext.Current.ApplicationName);
SearchResults results = manager.Search(criteria);
The search phrase can contain complex search syntax that is specific to a provider used.
Facets and filters
Another capability extending the Lucene.NET functionality is the ability to create Facets, which are a type of filtering that can further narrow a search. In the configuration file, facets such as SimpleValue, PriceRangeValue, and RangeValue types can be found.
Facets are organized into facet groups. A facet group is referred to as a Filter in the configuration file. For instance, a Color facet group can have a Red facet. In the configuration, Color would be the filter and Red would be a SimpleValue. A facet group is linked to a particular field or meta-field. You can specify facets as part of the ISearchCriteria interface.
The front end includes controls that read a special configuration file to automatically populate the facet property of the ISearchCriteria interface. These filters are stored in Mediachase.Search.Filters.config. To add a new filter, add a new field to the index and add a record to the configuration file. The filter appears as soon as the data is available. | http://world.episerver.com/documentation/developer-guides/commerce/search/ | CC-MAIN-2017-17 | refinedweb | 793 | 50.63 |
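To see conceptually what facet groups buy you, here is a small self-contained sketch. This is not the Mediachase API; it is a toy model of how a filter such as Color with a SimpleValue facet (Red) narrows a result set, with all class and member names invented for the illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy model: a facet group ("filter") maps a field name to selected values.
class FacetGroup
{
    public string Field;              // e.g. "Color" (the filter)
    public List<string> Selected;     // e.g. ["Red"]  (SimpleValue facets)
}

class Entry
{
    public string Name;
    public Dictionary<string, string> Fields;
}

static class FacetDemo
{
    // Keep only entries whose field value matches every selected facet group.
    public static IEnumerable<Entry> Apply(
        IEnumerable<Entry> entries, IEnumerable<FacetGroup> facets)
    {
        return entries.Where(e => facets.All(f =>
            e.Fields.TryGetValue(f.Field, out var v) && f.Selected.Contains(v)));
    }

    static void Main()
    {
        var entries = new List<Entry>
        {
            new Entry { Name = "Shirt A", Fields = new Dictionary<string, string> { ["Color"] = "Red" } },
            new Entry { Name = "Shirt B", Fields = new Dictionary<string, string> { ["Color"] = "Blue" } },
        };
        var color = new FacetGroup { Field = "Color", Selected = new List<string> { "Red" } };
        foreach (var e in Apply(entries, new[] { color }))
            Console.WriteLine(e.Name); // only "Shirt A" survives the filter
    }
}
```

The real system layers this idea on top of the Lucene index instead of an in-memory list, but the narrowing semantics are the same.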
Learning to Use the GraalVM
The GraalVM is a handy virtual machine that supports multiple languages and enables native images for better performance.
In this post, we will learn how to create executables (native images) or shared libraries for Java programs and other supported languages — although we won't be going through every language, only a select few of them.
Just to let you know, all of the commands and actions have been performed in an Ubuntu 16.04 operating system environment. They should work on MacOSX with minor adaptations, though on Windows, more changes would be required.
Hands-On
We can get our hands on the GraalVM in more than one way, either building it on our own or downloading a pre-built version from a vendor website:
- Build on our own: some cloning and other magic (we can see that later on)
- Download a ready-made JVM: OTN download site
- Hook up a custom JIT to an existing JDK with JVMCI support (we can see that later on)
As we are using a Linux environment, it would be best to download and install the Linux (preview) version of GraalVM based on JDK8 (a >500MB file; you need to accept the license and be signed in on OTN, or you will be taken to the sign-in page).
Follow the installation information on the download page. After unpacking the archive, you will find a folder by the name of
graalvm-0.30 (at the time of writing of this post) when you execute the command below:
$ tar -xvzf graalvm-0.30-linux-amd64-jdk8.tar.gz
Eagle Eyeing: Compare SDKs
We will quickly check the contents of the SDK to gain some familiarity with it. Comparing the folder contents between the GraalVM SDK and, say, a JDK 1.8.0_44 SDK, we can see that we have a handful of additional files in there:
(Use tools like meld or just diff to compare directories.)
Similarly, we can see that the folder has interesting differences. If you are wondering what the features are and how to use them — well, this SDK has them baked into it, in the examples folder.
If the examples folder is NOT distributed in the future versions, please use the respective code snippets provided for each of the sections referred to (for each language).
For this post, you won't need the examples folder to be present.
$ tree -L 1 examples
(Use the tree command — sudo apt-get install tree — to see the above; tree is also available on MacOSX and Windows.)
Each of the sub-folders contains examples for the respective languages supported by the GraalVM, including
embed and
native-image, which we will also be looking at.
Exciting Part: Hands-On Using the Examples
Let's cut to the chase. But before we can execute any code and see what the examples do, we should move graalvm-0.30 to where the other Java SDKs reside and point a GRAAL_HOME environment variable at it (for example, by appending an export GRAAL_HOME=... line to ~/.bashrc). Then change into the examples folder:

$ cd examples
R Language
Let's pick
R and run some
R script files:
$ cd R $ $GRAAL_HOME/bin/Rscript --help #to see the usage text
Beware that we are running Rscript and not R; both can run R scripts — the latter is an R REPL.
Running
hello_world.R using
Rscript:
$ $GRAAL_HOME/bin/Rscript hello_world.R [1] "Hello world!"
JavaScript
Next we try out some
JavaScript:
$ cd ../js/ $ $GRAAL_HOME/bin/js --help # to get to see the usage text
Running
hello_world.js with
js:
$ $GRAAL_HOME/bin/js hello_world.js Hello world!
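The hello_world.js script itself is presumably just the classic one-liner, reconstructed here from the output above (the shipped example may differ slightly; the helper function is my own addition):

```javascript
// Returns the greeting; with the GraalVM js launcher (or node),
// running this file prints the "Hello world!" line shown above.
function greeting() {
  return "Hello world!";
}

console.log(greeting());
```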
Embed
Now let's try something different — what if you wish to run code written in multiple languages, all residing in the same source file, on the JVM?
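The article does not reproduce HelloPolyglotWorld.java, but the idea can be sketched with the standard javax.script API available on the JDK 8 base. Treat this as an assumption: the real example may use Graal-specific polyglot APIs instead, and the "js" engine lookup depends on what the JVM provides (Nashorn on JDK 8; possibly nothing on newer JDKs, hence the fallback).

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class HelloPolyglotWorld {

    // Plain Java part of the greeting.
    public static String javaGreeting() {
        return "Hello from Java!";
    }

    // JavaScript part, evaluated by whatever "js" engine the JVM provides.
    public static String jsGreeting() {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("js");
        if (js == null) {
            return "(no js engine available)";
        }
        try {
            Object result = js.eval("'Hello from ' + 'JavaScript!'");
            return String.valueOf(result);
        } catch (ScriptException e) {
            return "(js evaluation failed)";
        }
    }

    public static void main(String[] args) {
        System.out.println(javaGreeting());
        System.out.println(jsGreeting());
    }
}
```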
$ cd ../embed

Compile HelloPolyglotWorld.java with the following command to get a .class file created:
$ $GRAAL_HOME/bin/javac HelloPolyglotWorld.java
And run it with the following command:

$ $GRAAL_HOME/bin/java HelloPolyglotWorld

Native Image

Native images improve the startup time of Java applications and give them a smaller footprint. Effectively, it's converting bytecode that runs on the JVM (on any platform) to native code for a specific OS/platform — which is where the performance comes from. It's using aggressive ahead-of-time (AOT) optimizations to achieve good performance.
Let's see how that works.
$ cd ../native-image
Let's take a snippet of Java code from
HelloWorld.java in this folder:
public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World!"); } }
Compile it into bytecode:
$ $GRAAL_HOME/bin/javac HelloWorld.java
Compile the bytecode into a native executable:

$ $GRAAL_HOME/bin/native-image HelloWorld

The resulting binary starts fast, but we are losing out on other features that we get when running in bytecode form on the JVM — the choice of which route to take is all a matter of the use case and what is important for us.
That calls for a wrap up — it's quite a lot to read and try out on the command-line, but it's well worth the time to explore the interesting GraalVM.
To sum up, we went about downloading the GraalVM from Oracle Lab's website, unpacked it, had a look at the various folders, and compared it with our traditional-looking Java SDKs. Then we ran code expressed in multiple supported languages in the same source file or the same project. This also gives us the ability to seamlessly interop between the different aspects of an application written in different languages. We also have the ability to re-compile our existing applications for native environments (native-image) for performance and a smaller footprint.
For more details on examples, please refer to.
Published at DZone with permission of Mani Sarkar . See the original article here.
Opinions expressed by DZone contributors are their own.
1.5 Introducing Classes
The only remaining feature we need to understand before solving our bookstore problem is how to write a data structure to represent our transaction data. In C++ we define our own data structure by defining a class. The class mechanism is one of the most important features in C++. In fact, a primary focus of the design of C++ is to make it possible to define class types that behave as naturally as the built-in types themselves. The library types that we've seen already, such as istream and ostream, are all defined as classes—that is, they are not strictly speaking part of the language.
Complete understanding of the class mechanism requires mastering a lot of information. Fortunately, it is possible to use a class that someone else has written without knowing how to define a class ourselves. In this section, we'll describe a simple class that we can use in solving our bookstore problem. We'll implement this class in the subsequent chapters as we learn more about types, expressions, statements, and functions—all of which are used in defining classes.
To use a class we need to know three things:
- What is its name?
- Where is it defined?
- What operations does it support?
For our bookstore problem, we'll assume that the class is named Sales_item and that it is defined in a header named Sales_item.h.
1.5.1 The Sales_item Class
The purpose of the Sales_item class is to store an ISBN and keep track of the number of copies sold, the revenue, and average sales price for that book. How these data are stored or computed is not our concern. To use a class, we need not know anything about how it is implemented. Instead, what we need to know is what operations the class provides.
As we've seen, when we use library facilities such as IO, we must include the associated headers. Similarly, for our own classes, we must make the definitions associated with the class available to the compiler. We do so in much the same way. Typically, we put the class definition into a file. Any program that wants to use our class must include that file.
Conventionally, class types are stored in a file with a name that, like the name of a program source file, has two parts: a file name and a file suffix. Usually the file name is the same as the class defined in the header. The suffix usually is .h, but some programmers use .H, .hpp, or .hxx. Compilers usually aren't picky about header file names, but IDEs sometimes are. We'll assume that our class is defined in a file named Sales_item.h.
Operations on Sales_item Objects
Every class defines a type. The type name is the same as the name of the class. Hence, our Sales_item class defines a type named Sales_item. As with the built-in types, we can define a variable of a class type. When we write
Sales_item item;
we are saying that item is an object of type Sales_item. We often contract the phrase "an object of type Sales_item" to "a Sales_item object" or even more simply to "a Sales_item."
In addition to being able to define variables of type Sales_item, we can perform the following operations on Sales_item objects:
- Use the addition operator, +, to add two Sales_items
- Use the input operator, >>, to read a Sales_item object
- Use the output operator, <<, to write a Sales_item object
- Use the assignment operator, =, to assign one Sales_item object to another
- Call the same_isbn function to determine if two Sales_items refer to the same book
Reading and Writing Sales_items
Now that we know the operations that the class provides, we can write some simple programs to use this class. For example, the following program reads data from the standard input, uses that data to build a Sales_item object, and writes that Sales_item object back onto the standard output:
#include <iostream>
#include "Sales_item.h"

int main()
{
    Sales_item book;
    // read ISBN, number of copies sold, and sales price
    std::cin >> book;
    // write ISBN, number of copies sold, total revenue, and average price
    std::cout << book << std::endl;
    return 0;
}
If the input to this program is
0-201-70353-X 4 24.99
then the output will be
0-201-70353-X 4 99.96 24.99
Our input said that we sold four copies of the book at $24.99 each, and the output indicates that the total sold was four, the total revenue was $99.96, and the average price per book was $24.99.
This program starts with two #include directives, one of which uses a new form. The iostream header is defined by the standard library; the Sales_item header is not. Sales_item is a type that we ourselves have defined. When we use our own headers, we use quotation marks (" ") to surround the header name.
Inside main we start by defining an object, named book, which we'll use to hold the data that we read from the standard input. The next statement reads into that object, and the third statement prints it to the standard output followed as usual by printing endl to flush the buffer.
Adding Sales_items
A slightly more interesting example adds two Sales_item objects:
#include <iostream>
#include "Sales_item.h"

int main()
{
    Sales_item item1, item2;
    std::cin >> item1 >> item2;              // read a pair of transactions
    std::cout << item1 + item2 << std::endl; // print their sum
    return 0;
}
If we give this program the following input
0-201-78345-X 3 20.00 0-201-78345-X 2 25.00
our output is
0-201-78345-X 5 110 22
This program starts by including the Sales_item and iostream headers. Next we define two Sales_item objects to hold the two transactions that we wish to sum. The output expression does the addition and prints the result. We know from the list of operations on page 21 that adding two Sales_items together creates a new object whose ISBN is that of its operands and whose number sold and revenue reflect the sum of the corresponding values in its operands. We also know that the items we add must represent the same ISBN.
It's worth noting how similar this program looks to the one on page 6: We read two inputs and write their sum. What makes it interesting is that instead of reading and printing the sum of two integers, we're reading and printing the sum of two Sales_item objects. Moreover, the whole idea of "sum" is different. In the case of ints we are generating a conventional sum—the result of adding two numeric values. In the case of Sales_item objects we use a conceptually new meaning for sum—the result of adding the components of two Sales_item objects.
1.5.2 A First Look at Member Functions
Unfortunately, there is a problem with the program that adds Sales_items. What should happen if the input referred to two different ISBNs? It doesn't make sense to add the data for two different ISBNs together. To solve this problem, we'll first check whether the Sales_item operands refer to the same ISBNs:
#include <iostream>
#include "Sales_item.h"

int main()
{
    Sales_item item1, item2;
    std::cin >> item1 >> item2;
    // first check that item1 and item2 represent the same book
    if (item1.same_isbn(item2)) {
        std::cout << item1 + item2 << std::endl;
        return 0;   // indicate success
    } else {
        std::cerr << "Data must refer to same ISBN"
                  << std::endl;
        return -1;  // indicate failure
    }
}
The difference between this program and the previous one is the if test and its associated else branch. Before explaining the if condition, we know that what this program does depends on the condition in the if. If the test succeeds, then we write the same output as the previous program and return 0 indicating success. If the test fails, we execute the block following the else, which prints a message and returns an error indicator.
What Is a Member Function?
The if condition
// first check that item1 and item2 represent the same book
if (item1.same_isbn(item2)) {
calls a member function of the Sales_item object named item1. A member function is a function that is defined by a class. Member functions are sometimes referred to as the methods of the class.
Member functions are defined once for the class but are treated as members of each object. We refer to these operations as member functions because they (usually) operate on a specific object. In this sense, they are members of the object, even though a single definition is shared by all objects of the same type.
When we call a member function, we (usually) specify the object on which the function will operate. This syntax uses the dot operator (the "." operator):
item1.same_isbn
means "the same_isbn member of the object named item1." The dot operator fetches its right-hand operand from its left. The dot operator applies only to objects of class type: The left-hand operand must be an object of class type; the right-hand operand must name a member of that type.
When we use a member function as the right-hand operand of the dot operator, we usually do so to call that function. We execute a member function in much the same way as we do any function: To call a function, we follow the function name by the call operator (the "()" operator). The call operator is a pair of parentheses that encloses a (possibly empty) list of arguments that we pass to the function.
The same_isbn function takes a single argument, and that argument is another Sales_item object. The call
item1.same_isbn(item2)
passes item2 as an argument to the function named same_isbn that is a member of the object named item1. This function compares the ISBN part of its argument, item2, to the ISBN in item1, the object on which same_isbn is called. Thus, the effect is to test whether the two objects refer to the same ISBN.
If the objects refer to the same ISBN, we execute the statement following the if, which prints the result of adding the two Sales_item objects together. Otherwise, if they refer to different ISBNs, we execute the else branch, which is a block of statements. The block prints an appropriate error message and exits the program, returning -1. Recall that the return from main is treated as a status indicator. In this case, we return a nonzero value to indicate that the program failed to produce the expected result. | http://www.informit.com/articles/article.aspx?p=384462&seqNum=5 | CC-MAIN-2018-43 | refinedweb | 1,757 | 70.43 |
Update (2017, July 24): Links and/or samples in this post might be outdated. The latest version of samples is available on new Stanford.NLP.NET site.
One more tool from the Stanford NLP product line became available on NuGet today. It is the second library that was recompiled and published to NuGet. The first one was the "Stanford Parser". The second one is the Stanford Named Entity Recognizer (NER). I have already posted about this tool with guidance on how to recompile it and use it from F# (see "NLP: Stanford Named Entity Recognizer with F# (.NET)"). There are some other interesting things happening; NER is kind of a hot topic. I recently saw a question about C# NER on CodeProject, and Flo asked me about NER in the comment of another post. So, I am happy to make it more widely available. The flow of use is as follows:
- Install-Package Stanford.NLP.NER
- Download models from The Stanford NLP Group site.
- Extract models from ’classifiers‘ folder.
- You are ready to start.
F# Sample
F# sample is pretty much the same as in ”NLP: Stanford Named Entity Recognizer with F# (.NET)” post. For more details see source code on GitHub.
let main file =
    // Load the classifier from the extracted models folder (adjust the path as needed).
    let classifier =
        CRFClassifier.getClassifierNoExceptions(
            @"classifiers\english.all.3class.distsim.crf.ser.gz")
    match file with
    | Some(fileName) ->
        let fileContents = File.ReadAllText(fileName)
        classifier.classify(fileContents)
        |> Iterable.toSeq
        |> Seq.cast<java.util.List>
        |> Seq.iter (fun sentence ->
            sentence
            |> Iterable.toSeq
            |> Seq.iteri (fun i coreLabel ->
                printfn "%d\n:%O\n" i coreLabel))
    | None -> ()
C# Sample
C# version is quite similar. For more details see source code on GitHub.
class Program
{
    // Load the classifier from the extracted models folder (adjust the path as needed).
    public static CRFClassifier Classifier =
        CRFClassifier.getClassifierNoExceptions(
            @"classifiers\english.all.3class.distsim.crf.ser.gz");

    static void Main(string[] args)
    {
        if (args.Length > 0)
        {
            var fileContent = File.ReadAllText(args[0]);
            foreach (List sentence in Classifier.classify(fileContent).toArray())
            {
                foreach (CoreLabel word in sentence.toArray())
                {
                    Console.Write(
                        "{0}/{1} ",
                        word.word(),
                        word.get(new CoreAnnotations.AnswerAnnotation().getClass()));
                }
                Console.WriteLine();
            }
        }
        else
        {
            const string S1 = "Good afternoon Rajat Raina, how are you today?";
            const string S2 = "I go to school at Stanford University, which is located in California.";
            Console.WriteLine("{0}\n", Classifier.classifyToString(S1));
            Console.WriteLine("{0}\n", Classifier.classifyWithInlineXML(S2));
            Console.WriteLine("{0}\n", Classifier.classifyToString(S2, "xml", true));

            var classification = Classifier.classify(S2).toArray();
            for (var i = 0; i < classification.Length; i++)
            {
                Console.WriteLine("{0}\n:{1}\n", i, classification[i]);
            }
        }
    }
}
As a result of both samples you will see the following output:
18 thoughts on “Stanford Named Entity Recognizer (NER) is available on NuGet”
When I use your example above, the methods of CRFClassifier (e.g. classifyToString classifyWithInlineXML) generate a compile-time error of “Unknown Method.” Any thoughts or can you point me in the right direction 🙂
What version of NuGet package/IKVM do you use? It works in Unit test (on my machine)
I can’t compile this code since “CRFClassifier” is not recognized by the compiler…I’ve performed all the mentioned four steps from above. Any ideas ?
Hmmm… Could you check which NuGet packages are added to your project?
when i try t decompress module file u give, it gives me an error, say “the file is corrupt”, kindly advice ?
Try files from official site – they should be OK
The same problem!
Not sure what are you talking about. is valid zip archive that has folder `\stanford-ner-2014-08-27\classifiers\` with classifiers inside.
I am a beginner on this field so please I need help I downloaded models from The Stanford NLP Group site, but I found on the classifier folder a “english.all.3class.distsim.crf.ser” file not “english.all.3class.distsim.crf.ser.gz” so please what is the problem with me.
thanks in advance
Could you share the link to the file that you downloaded?
Thanks for the code. I am getting these errors, when i Compile,
error CS0246: The type or namespace name ‘CoreLabel’ could not be found (are you missing a using directive or an assembly reference?)
error CS0246: The type or namespace name ‘CoreAnnotations’ could not be found (are you missing a using directive or an assembly reference?)
Please use sample the site and check that sample + nuget package and zip models match each other
Hi ,
I need your help.I have to fetch the data based on some keywords.
I have one text file that file having some text data , fetching the exact value based on my keywords
For Example:
In that text file having some job posting information.I need to fetch only responsibilities,experience,roles,salary etc.. | https://sergeytihon.com/2013/07/12/stanford-named-entity-recognizer-ner-is-available-on-nuget/ | CC-MAIN-2021-31 | refinedweb | 739 | 58.79 |
iText is a library for creating and manipulating PDFs. It was originally written in Java, but it has also been ported to .NET. The book, "iText in Action", has examples in Java only, which will only be useful if a .NET developer knows this language. C# examples can be found at the iText.NET site. The code needs to be slightly modified for it to compile with Visual Studio 2010.

The first thing that needs to change is the library references. The examples use Java-style names, which may have been used in an older version of iText for .NET.
using com.lowagie.text; using com.lowagie.text.pdf;
should be written as:
using iTextSharp.text; using iTextSharp.text.pdf;
If you cut and paste the code from the examples to VS, you will see multiple errors. Change getInstance to GetInstance, open to Open, add to Add, and close to Close. If you're not sure about the function being used, delete it (including the period), type a period, and the available functions will be listed.
The code will then compile correctly in VS 2010. I recommend purchasing iText in Action because it will save you time in learning how to use iText. A knowledge of Java is beneficial, but not necessary, to port the examples to .NET.
The following is a simple example of using iText with C#.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using iTextSharp.text;
using iTextSharp.text.pdf;

namespace Chap0101
{
    class Program
    {
        static void Main(string[] args)
        {
            // step 1: creation of a document-object
            Document document = new Document();

            // step 2: we create a writer that listens to the document
            // and directs a PDF-stream to a file
            PdfWriter.GetInstance(document, new FileStream("Chap0101.pdf", FileMode.Create));

            // step 3: we open the document
            document.Open();

            // step 4: we add a paragraph to the document
            document.Add(new Paragraph("Hello World"));

            // step 5: we close the document
            document.Close();
        }
    }
}
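The same five steps extend naturally. Here is a slightly extended sketch based on the same iTextSharp API (the page-size constructor and margins follow the same Java-to-.NET casing rule; file names are mine):

```csharp
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class Extended
{
    static void Main()
    {
        // step 1: a document with an explicit page size and margins
        // (left, right, top, bottom, in points)
        Document document = new Document(PageSize.A4, 36, 36, 54, 54);

        // step 2: direct the PDF stream to a file
        PdfWriter.GetInstance(document, new FileStream("Extended.pdf", FileMode.Create));

        // steps 3-5: open, add content, close
        document.Open();
        document.Add(new Paragraph("Hello World, on A4 paper"));
        document.Add(new Paragraph("A second paragraph on the same page"));
        document.Close();
    }
}
```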
I am not a regular developer in C#, that is why i faced the issue compiling. This article simply solved my problem.
Thankyou so much. | http://www.cyprich.com/blog/2011/08/16/hello-world-example-in-itext/ | CC-MAIN-2021-49 | refinedweb | 362 | 69.99 |
Got my RSVP for the event:
The location kind of sucks since I work downtown and live on 1st Avenue, but I will have to manage with the bus.
".NET implementation of the ALICE chatterbot using the AIML specification ...snip.... Put simply, this software will allow you to chat with your computer using natural language.".
Here is a block in my tasks section of my ccnet.config file. When searching around I couldn't find the exact markup anywhere, so I thought I would share it.
<exec>
  <executable>C:\Program Files\NCover\NCover.Console.exe</executable>
  <baseDirectory>c:\working\SampleApp</baseDirectory>
  <buildArgs>c:\nunit\bin\nunit-console.exe SampleApp.nunit /xml:nunit.xml //x NCoverSampleApp.xml //w c:\working\SampleApp</buildArgs>
  <buildTimeoutSeconds>360</buildTimeoutSeconds>
</exec>
Scott Stauffer with the local SQL PASS group managed to get Donald Farmer up to Vancouver for the meeting last night. If you don't know Donald, he is the PGM at Microsoft for the SQL Server Business Intelligence Platform.
I have to say Donald is one of the more entertaining presenters I have had the opportunity to listen to live. His insights into the product drawing from real-world examples really help clear up the subject matter. This guy really knows his stuff!
The group hit record attendance last night as well, just under 100 people. Nice to see so many SQL junkies in my area.
There were two folks there (forgive me if I don’t get your names correct), Sergio and Michael that worked on a side project which they wanted to get more exposure into the community. I guess they have an ecommerce application ready to go. If one of you are reading this, use the Contact link to get ahold of me and we can talk about getting you guys a slot of time for the Vancouver Code Camp.
I also have been contemplating creating a SQL Track for the Vancouver Code Camp as well. The only problem is that I do not have enough SQL Presenters. I would need to find 6 people that are willing to give a session in that track. If you are interested please contact me ASAP. I’m going to hit up Donald and see if he can pass this request off to anyone/everyone in the SQL group at Microsoft. If not, there is always next year!
I will post pictures once Scott gets around to sending them.:
I'm posting this mostly to bookmark it for myself, but to also share with everyone. Pretty cool way to display images.
I will be using this in at least two upcoming projects.
The instructor covered more of bare bones C# stuff. To tell you the truth, this is how I taught myself C# as well. I opened up notepad and started typing. Copied examples from online sources (the few that was available back then), and use the csc.exe compiler to start compiling down my code. I think this process is great for everyone to understand. It really shows them the background of what VS.NET hides from you, and what options that may not be so visible in that IDE.
Allot of the OOP specific keywords were covered. sealed, inheritance, override, virtual, new, abstract, base, this, interfaces, the is operator, const vs readonly, arrays, switch (cascading rules), break, documentation generation, commenting, foreach, System.Convert, try/catch/finally, looping constructs, string class, tryparse(), using “@” with string literals, System.IO, MapPath(), reading and writing text files, close, dispose, file uploading; The “Main” entry point in a console application. A simple windows form application, which shows a basic MesageBox.
Next they got an introduction to some of the more common .NET Namespaces, and what the heck namespaces are in the first place! (think java packages). Oh and how to import the assemblies with the “using” keyword and the /r switch on the compiler.
More information I felt could benefit the students
ref vs out : (Section 8.3)
Here are portions of that section:
A reference parameter is used for “by reference” parameter passing, in which the parameter acts as an alias for a
caller-provided argument. A reference parameter does not itself define a variable, but rather refers to the variable of
the corresponding argument. Modifications of a reference parameter impact the corresponding argument. A
reference parameter is declared with a ref modifier.
The ref keyword shall be used in both the declaration of the formal parameter and in uses of it. The use of ref at the
call site calls special attention to the parameter, so that a developer reading the code will understand that the value of
the argument could change as a result of the call.
An output parameter is similar to a reference parameter, except that the initial value of the caller-provided
argument is unimportant. An output parameter is declared with an out modifier.
For value, reference, and output parameters, there is a one-to-one correspondence between caller-provided
arguments and the parameters used to represent them.
params keyword:
Also in Section 8.3 of the ECMA Specification
A parameter array enables a many-to-one relationship: many arguments can be represented by a single parameter
array. In other words, parameter arrays enable variable length argument lists. A parameter array is declared with a
params modifier. There can be only one parameter array for a given method, and it shall always be the last
parameter specified. The type of a parameter array is always a single dimensional array type. A caller can either pass
a single argument of this array type, or any number of arguments of the element type of this array type.
You can also find more on all three of these (out, ref, params),in more detail, in Section 17.5.1 of the spec.
When to call Dispose()
The easiest way to find out this answer is to go a quick google search: The recommended best practice is to ALWAYS call dispose. So be careful and start watching out for the Dispose() method and call it!
aspnet_regiis
During the session we were shown a demo which the IIS Application was set to use v1.1 of the .NET Framework. This class is focusing on v2, using Visual Studio 2005, so there was a big bad error on the overheard. The instructor intentionally did this in order to show the students exactly what the error meant, and how to fix it. He actually had to login as admin to modify IIS. I wanted to add that the students could have simply used aspnet_regiis to fix the issue. Dump to a DOS command prompt. Make sure that the SDK binaries are in your path, or just use the “SDK Command Prompt” which should be in your start menu. Type in “aspnet_regiis /?” to get a complete listing of all switch’s.
Peter is organizing a Vancouver Code Camp Geek Dinner for those that are interested.
I'm going, are you? | http://weblogs.asp.net/rchartier/archive/2006/01.aspx | crawl-002 | refinedweb | 1,161 | 65.52 |
Contents. There is documentation for the IDLE debugger at.:
>>> list(zip([1, 2, 3], [4, 5, 6])) [(1, 4), (2, 5), (3, 6)]
or to compute a number of sines:
>>> list() and the format() methods on string objects. Use regular expressions only when you’re not dealing with constant string patterns..)
In the unlikely case that you care about Python versions older than 2.0, use apply():
def f(x, *args, **kwargs): ... kwargs['width'] = '14.3c' ... apply(g, (x,)+args, kwargs)...
Yes, this feature was added in Python 2.5. The syntax would be as follows:
[on_true] if [expression] else [on_false] x, y = 50, 25 small = x if x < y else y
For versions previous to 2.5 the answer would be ‘No’. ‘if’.
The best course is usually to write a simple if...else statement. Another solution is to implement the ?: operator as a function:
def q(cond, on_true, on_false): if cond: if not isfunction(on_true): return on_true else: return on_true() else: if not isfunction(on_false): return on_false else: return on_false()
In most cases you’ll pass b and c directly: q(a, b, c). To avoid evaluating b or c when they shouldn’t be, encapsulate them within a lambda function, e.g.: q(a, lambda: b, lambda: c).
It has been asked why Python has no if-then-else expression. There are several answers: many languages do just fine without one; it can easily lead to less readable code; no sufficiently “Pythonic” syntax has./3) yields '0.333'.('u', s) >>> print(a) array('u', 'Hello, world') >>> a[0] = 'y' >>> print(a) array('u', 'yello world') >>> a.tounicode() 'yello, world'.
Starting with Python 2.2,.
For older versions of Python, there are two partial substitutes:.
Use the reversed() built dictionary keys (i.e. they are all hashable) this is often faster
d = {} for x in mylist: d[x] = 1 mylist = list(d.keys())
In Python 2.5 and later, the following is possible instead:..
A method is a function on some object x that you normally call as x.name(arguments...). Methods are defined as functions inside the class definition:
class C: def meth (self, arg): return arg * 2 + self.attribute since Python 2.2:
class C: def static(arg1, arg2, arg3): # No 'self' parameter! ... static = staticmethod(static)
With Python 2.4’s decorators, this can also be written as. –('abc.py')
This will write the .pyc to the same location as abc()' | http://docs.python.org/release/3.1.3/faq/programming.html | CC-MAIN-2013-48 | refinedweb | 405 | 77.33 |
Back in February I blogged about a strange case we had seen where a customer was having trouble seeing certain ASP.NET performance counters when using WMI to access them. If you start Perfmon with the /WMI switch then Perfmon uses WMI rather than the native performance counter APIs to read the data. This is the same method that is used by various system monitoring tools both from Microsoft and other companies. Therefore not being able to get this data via WMI can be a significant problem.
Well, we recently had a case where after installing a new ASP.NET hotfix that we developed for our customer they could see the counters listed but the values were showing as 0 in the case of non-instance counters and no instances were listed for instance based counters.
This caused us quite a headache and we could not reproduce the issue in-house. However in the end we did find a machine on which we reproduced the issue and did manage to figure out a workaround that worked for us and in turn for the customer. However I then stopped being able to reproduce the issue so I was not able to get to the root cause. Very frustrating.
The hotfix in question is this one which (as it happens) is for an ASP.NET performance counter related issue:
FIX: "ASP.NET Apps v2.0.50727(__Total__)Sessions Active" performance counter shows an unreasonably high value after Microsoft .NET Framework 3.5 Service Pack 1 is installed
Now although this fix was in the area of ASP.NET performance counters the minor code change it involved should not have affected the registration of the counters in any way. And in fact the WMI classes that are involved in provided performance counter data via WMI are dynamically generated by the WMI infrastructure of the operating system and not ASP.NET itself. So we could see no way that the fix that was done should cause this issue with the counters.
Just to check if it was some kind of one off packaging issue we also tried a later fix package:
FIX: Error message when you run an ASP.NET 2.0 Web application if the global resource file name contains the culture name "zh-Hant"
But the customer found the same problem happened.
What we found worked in the end was a slight variation of the steps that I talked about in my previous blog post. Here are the new steps:
1. Open Wbemtest.exe
2. Connect to the rootcimv2 namespace.
3. Delete the Win32_PerfRawData_ASPNET_2050727_ASPNETAppsv2050727 class.
4. Winmgmt /resyncperf
5. Net stop winmgmt
6. Net start winmgmt
After doing this the counters had values and instances as expected when accessed via WMI.
HTH
Doug
One of my readers contacted me to say that they managed to fix an issue with missing CLR performance counters by following the steps using LODCTR described in:
(even though this applies to an old version of the framework).
This raises a good point. There are a variety of ways that extensible counters can go wrong and LODCTR is a good tool for fixing such issues.
Here are some other articles related to fixing performance counters (some old but still interesting):
Also, if you search on the Microsoft suppport site for LODCTR you will find many others, for various products:
HTH
Doug | https://blogs.msdn.microsoft.com/dougste/2009/07/23/more-on-the-mysterious-case-of-accessing-net-performance-counters-using-wmi/ | CC-MAIN-2019-13 | refinedweb | 563 | 71.04 |
Hi,
Our application requires the push-down sql from named/dynamic query be the same each
time a same JPQL is executed. In the following JPQL example,
query="UPDATE BasicA t set t.name= ?1, t.age = ?2 WHERE t.id = ?3"
we observe two different push-down sql could be generated:
(1) UPDATE PDQBasicA t0 SET name = ?, age = ? WHERE (t0.id = ?)
(2). Is there an alternative way to address this issue? Should
a JIRA issue be open to fix this problem? Any comment is appreciated.
In QueryExpressions:
/**
* Add an update.
*/
public void putUpdate(Path path, Value val) {
if (updates == Collections.EMPTY_MAP)
updates = new HashMap(); <== change HashMap to LinkedHashMap
updates.put(path, val);
}
In JPQLExpressionBuilder:
JPQLNode[] findChildrenByID(int id) {
Collection set = new HashSet(); <== change HashSet to LinkedHashSet
findChildrenByID(id, set);
return (JPQLNode[]) set.toArray(new JPQLNode[set.size()]);
}
Regards,
Fay
____________________________________________________________________________________
Be a better friend, newshound, and
know-it-all with Yahoo! Mobile. Try it now.;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ | http://mail-archives.apache.org/mod_mbox/openjpa-dev/200805.mbox/%3C127214.17357.qm@web55803.mail.re3.yahoo.com%3E | CC-MAIN-2015-48 | refinedweb | 158 | 52.36 |
This is light-hearted contribtion was written for and performed at the May 2000 W3C AC meeting dinner. At the time a debate had been raging at which one of the questions at stake was whether an XML namespace should be considered a web resource.
Up to Design Issues
In his book
Goedel, Escher, Bach, the computer scientist Douglas
Hofstadter ruminates on self-referential systems. At times, he uses the
approach of a Socratic dialogue between two characters from Xeno's fable,
Achilles and the Tortoise. The conclusion of several hundred pages of
musings around Bach's fugues, Escher's recusive drawings, and Goedel's theorem
are that you can't try to distinuish wishes from metawishes,
or the whole system breaks down. Without drawing too many parallels with the
recent XML-URI discusssions, we would like to relate a conversaion between
Achilles and the famous tortoise, recently overheard in a library.
[Achilles and the Tortoise are each strolling in the library. They meet.]
Achilles: Ah, Mr. Tortoise, I thought I might find you in the library
T: And a very nice library it is too, Achilles.
A: Thank you. It was a communal effort. As were the books. There are so many really beautiful books in the library.
T: And now we have dictionaries!
A: Yes, dictionaries are very important to me, Mr.. Tortoise. I want to use them to understand what some of those books mean.
T: Let's not discuss meaning, please Achilles -- you know what happens when we do that! I want to use these dictionaries in order to check that the books are correct.
A: Well, at least we are agreed that dictionaries are a good idea.
[they round a corner]
T: Achilles, what is that?!
A: Why, a dictionary, Mr. T.
T: But it is in the library! I thought when we defined dictionaries we agreed it was "not a goal" to register dictionaries in the library!
A: But surely that doesn't stop me putting one in the library?
T: Irony heaped on Irony! The Library is for books. That you should abuse it so! A dictionary is not a book. It is a metabook.
A: What? Of course it is book!
T: You said that you wanted it have the form of a book so we make them out of paper -- but that doesn't mean the intent was to put it in the library!
A: But this is my section of the library -- it is the section on Library Architecture and I need a dictionary to define the terms used in that field.
T: But you know that people can loose things in a library, and libraries can burn down ... there are so many reasons that dictionaries should not be in the in the library, Achilles!
A: Look at this way, Mr. Tortoise: when I am doing research in the library, I need to be able to look up words, and so I need a dictionary in the library.
T: You have some woolly notion of finding out what books mean, Achilles, but we haven't agreed about that. The meaning of the semantics of "meaning" are not a consensus in current linguistic epistemorthosemantisophologic theory.
A: I don't need to go into that, but I need a place for dictionaries.
T: Oh, we have all been discussing where dictionaries should go. We have plenty of ideas: We have plans for a new vault building down the road much more secure than this library. We have that white tower on the hill we could use too.
T: Besides, in practice, most of us keep a pocket dictionary for each language we use in our briefcases. It isn't as though we need so many dictionaries. Frankly, dictionaries have such different requirements to books I am shocked to see this dictionary in your section of the library! If you don't take it out out, I will bite your heel.
A: But I thought when we designed the library it was so that any sort of book could go in it. That is why we called it the Global Eternal Bibliotech, after all: it is Good for Every Book. I should be able to keep this dictionary in it simply because it is a book.
T: But Achilles, for the last time, a dictionary is not a book!
With apologies & thanks to Douglas Hofstadter for taking us through the fun (and inevitability) of self-referential systems. Thanks to Ian Jacobs for playing Achilles at the dinner.
Tim Berners-Lee | http://www.w3.org/DesignIssues/NamespacesAreResources.html | crawl-001 | refinedweb | 757 | 82.44 |
note sundialsvc4 <p> It is my understanding that the names should agree because Perl looks for module-files by matching the file name. It reads whatever it finds, of course <em>expecting</em> that the desired package-name will be defined thereby. </p><p> However, the formal purpose of the <tt>package</tt> directive is to introduce a <em>namespace,</em> and once Perl has been cajoled into reading a source-file, it will of course recognize all of the package (namespace) names found therein. Sometimes there are very good, even compelling, reasons to do just that ... for instance, when you are defining grammars for [mod://Parse::RecDescent] (and I don’t know offhand if it is strictly necessary ...), or when you simply have a group of classes that you know will always be used together to help one another out. </p> 1014776 1014776 | http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1015011 | CC-MAIN-2017-04 | refinedweb | 144 | 60.95 |
Context switches are not free. But how expensive are they? I wrote a small program to find out, and I’m sharing the program and its results here.
I focused on purely context switches (no work is actually performed between context switches). So it’s not a real-world scenario, but it really brings out the hidden costs. Below are the results 500,000 context switches performing no work between each one.
Executing 500000 work cycles of 0 iterations each, in different ways... Scenario Total time (ms) Time per unit (µs) No-switch 0 0.0002 Async w/o yield 67 0.0353 Async w/ yield 664 0.349 Thread switches 5215 2.7412
Notice how with each kind, the order of magnitude of the overhead increases. The code below will help you understand what each each scenario name actually means. Then we add a bit of work (counting to 500) per context switch, which is closer to a possible real-world work load (although relatively lightweight) that might occur for a given context:
Executing 500000 work cycles of 500 iterations each, in different ways... Scenario Total time (ms) Time per unit (µs) No-switch 380 0.1998 Async w/o yield 368 0.1935 Async w/ yield 832 0.4374 Thread switches 5185 2.7257
Suddenly no context switch and async methods all share an order of magnitude, while thread switches still takes significantly longer. In fact closely comparing shows that Async w/o yield is faster than no switch at all. This of course is ludicrous and can be written off as noise. But several runs produced the same result, so we can glean from this that when doing even a small amount of work per context switch, that the no-yield async method adds insignificant overhead.
Following is the application that produced the above results.
using System; using System.Diagnostics; using System.Threading; using System.Threading.Tasks; class Program { const int unitSize = 500; const int workSize = 500000; const string spacing = "{0,-20}{1,-20}{2,-20}"; private static void Main(string[] args) { Console.WriteLine("Executing {0} work cycles of {1} iterations each, in different ways...", workSize, unitSize); Console.WriteLine(spacing, "Scenario", "Total time (ms)", "Time per unit (μs)"); Scenario("No-switch", DoSync); Scenario("Async w/o yield", DoAsyncNoYield); Scenario("Async w/ yield", DoAsyncWithYield); Scenario("Thread switches", ThreadSwitch); } static void Scenario(string name, Action operation) { GC.Collect(); operation(); // warm it up var timer = Stopwatch.StartNew(); operation(); timer.Stop(); Console.WriteLine(spacing, name, timer.ElapsedMilliseconds, MicroSecondsPerItem(timer)); } static void ThreadSwitch() { int workRemaining = workSize; var evt = new AutoResetEvent(true); ThreadStart worker = () => { while (workRemaining > 0) { evt.WaitOne(); workRemaining--; WorkUnit(); evt.Set(); } }; var threads = new Thread[Environment.ProcessorCount]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread(worker); threads[i].Start(); } for (int i = 0; i < threads.Length; i++) { threads[i].Join(); } } static void DoAsyncNoYield() { var tcs = new TaskCompletionSource<object>(); tcs.SetResult(null); var task = tcs.Task; Task.Run( async delegate { int workRemaining = workSize; while (--workRemaining >= 0) { await NoYieldHelper(task); } }).Wait(); } static async Task NoYieldHelper(Task task) { WorkUnit(); await task; } static void DoAsyncWithYield() { Task.Run( async delegate { int workRemaining = workSize; while (--workRemaining >= 0) { WorkUnit(); await Task.Yield(); } }).Wait(); } static void DoSync() { int workRemaining = workSize; while (--workRemaining >= 0) { WorkUnit(); } } private static double MicroSecondsPerItem(Stopwatch timer) { var ticksPerItem = (double)timer.ElapsedTicks / workSize; 
var microSecondsPerItem = TimeSpan.FromTicks((long)(ticksPerItem * 1000)).TotalMilliseconds; return microSecondsPerItem; } static void WorkUnit() { for (int i = 0; i < unitSize; i++) { } } }
Join the conversationAdd Comment
My numbers do not show superiority of Async without yield.
No-switch is faster on my machine.
Scenario Total time (ms) Time per unit (?s)
No-switch 21 0.0621
Async w/o yield 49 0.1414
Async w/ yield 391 1.121
Thread switches 1195 3.4227
Another point – evt.Set(); can be expensive itself.
to my knowledge WaitHandles have interop calls inside because they use native API and they are machine wide.
so the timing includes not only the cost of thread switch but cost of interop operations as well. | https://blogs.msdn.microsoft.com/andrewarnottms/2012/12/28/the-cost-of-context-switches/ | CC-MAIN-2016-44 | refinedweb | 667 | 61.22 |
In my introductory article, I went over the basics of the Ember.js framework, and the foundational concepts for building an Ember application. In this follow-up article, we'll dive deeper into specific areas of the framework to understand how many of the features work together to abstract the complexities of single-page application development.
A Basic App
I noted previously that the easiest way to get the files you need is to go to the Ember.js Github repo and pull down the start kit, and that still holds true. This boilerplate kit includes all the files that you'll need to kickstart your Ember experience, so be sure to download it from this article.
The interesting thing is that the starter kit is also a great example of a very basic Ember app. Let's walk through it to gain an understanding of what's going on. Note that I'll be digging deeper into specific areas later, so don't worry if something doesn't make immediate sense in this section. It's more to give you a high-level understanding of the functionality before diving into the details.
Open
index.html in your browser, and you'll see the following:
Welcome to Ember.js
- red
- yellow
- blue
This is not very exciting, I know, but if you look at the code that rendered this, you'll see that it was done with very little effort. If we look at "js/app.js", we see the following code:
App = Ember.Application.create({});

App.IndexRoute = Ember.Route.extend({
  setupController: function(controller) {
    controller.set('content', ['red', 'yellow', 'blue']);
  }
});
At its most basic level, an Ember app only needs this one line to technically be considered an "app":
App = Ember.Application.create({});
This code sets up an instance of the Ember application object, along with a default application template, event listeners and application router. Take a second and try to think of the code you would normally have to write to create a global namespace, a client-side template, bind event handlers for global user interaction and include history & state management in your code. Yes, that one line does all of that. Let's be clear, though: I'm not saying that it's doing all of the work for you, but it is creating the foundation you'll build upon, via one method call.
The next set of code sets up the behavior of a route, in this case, for the main
index.html page:
App.IndexRoute = Ember.Route.extend({
  setupController: function(controller) {
    controller.set('content', ['red', 'yellow', 'blue']);
  }
});
Remember that routes are used to manage the resources associated with a specific URL within the application, and they allow Ember to track the various states of individual pages. The URL is the key identifier that Ember uses to understand which application state needs to be presented to the user.
In this case, the root route is created by default in Ember. I could've also explicitly defined the route this way:
App.Router.map( function() {
  this.resource( 'index', { path: '/' } ); // Takes us to "/"
});
But Ember takes care of that for me for the "root" of my application. We'll tackle routes in more detail later.
Going back to the following code:
App.IndexRoute = Ember.Route.extend({
  setupController: function(controller) {
    controller.set('content', ['red', 'yellow', 'blue']);
  }
});
In this case, when a user hits the site's root, Ember will setup a controller that will load a sample set of data with a semantic name, called
content. This data can later be used in the app, via this controller using that name. And that's specifically what happens in
index.html. Open the file and you'll find the following:
<script type="text/x-handlebars" data-template-name="index">
  <h2>Welcome to Ember.js</h2>
  <ul>
  {{#each item in model}}
    <li>{{item}}</li>
  {{/each}}
  </ul>
</script>
This is a Handlebars client-side template. Remember that Handlebars is the templating library for Ember, and is vital to creating data-driven user interfaces for your app. Ember uses data attributes to link these templates to the controllers that manage your data, whether they're specified via a route or as a standalone controller.
In my last article, I mentioned that naming conventions are important in Ember, and that they make connecting features easy. If you look at the template code, you'll see that the name of the template (specified via the data-template-name attribute) is "index". This is purposeful and is meant to make it easy to connect to the controller specified within the route of the same name. If we look at the route code once again, you'll see that it's called "IndexRoute" and inside of it is a controller with data being set:
App.IndexRoute = Ember.Route.extend({
  setupController: function(controller) {
    controller.set('content', ['red', 'yellow', 'blue']);
  }
});
The controller sets a datasource named "content" and loads it with an array of strings for the colors. Basically, the array is your model, and the controller is used to expose the attributes of the model.
The naming conventions allow Ember to link this route's resources (e.g.: the controller with data) to the template specified by the same name. This gives the template access to the data exposed by the controller so it can render it using Handlebars' directives. From there, the items in the array are looped over using Handlebars' each directive, specifying the alias model, which points to the datasource:
{{#each item in model}}
  <li>{{item}}</li>
{{/each}}
To be more precise, the data is populated into dynamically created list items, thus generating the markup for you on the fly. That's the beauty of client-side templates.
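Conceptually, the expansion performed by the each helper can be sketched in a few lines of plain JavaScript (an illustration of the idea only, not Ember's actual rendering code):

```javascript
// The controller exposed this array as the template's model.
var model = ['red', 'yellow', 'blue'];

// The {{#each}} block expands to one <li> per item in the model.
var markup = '<ul>' + model.map(function (item) {
  return '<li>' + item + '</li>';
}).join('') + '</ul>';

// markup is now '<ul><li>red</li><li>yellow</li><li>blue</li></ul>'
console.log(markup);
```

The real framework goes further, of course: when the underlying array changes, Ember re-renders the affected list items for you, which is what makes the templates data-driven.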
I think this basic app highlights how Ember abstracts a lot of things for you. It is a bit of black magic, though, and it's not always easy to grasp how things work. That was my experience as well; things didn't quite click at first. Once you start understanding the relationships between the various components of the framework, it starts to make more sense. Let's start from the ground up to get a better understanding of this.
Starting from the Ground Up
I briefly touched on the Ember application object and the fact that it builds the foundation for your application. The Ember guides do an excellent job of outlining specifically what instantiating an Ember application object does:
- It sets your application's namespace. All of the classes in your application will be defined as properties on this object (e.g. App.PostsView and App.PostsController). This helps to prevent polluting the global scope.
- It adds event listeners to the document and is responsible for sending events to your views.
- It automatically renders the application template, the root-most template, into which your other templates will be rendered.
- It automatically creates a router and begins routing, based on the current URL.
So this simple statement:
App = Ember.Application.create({});
wires up a whole ton of foundational pieces that your application will depend on. It's important to note that App is not a keyword in Ember. It's a normal global variable that you're using to define the namespace and could be any valid variable name. From what I've seen, though, the variable name, App, is a commonly used convention in most Ember apps and is actually recommended to make it easier to copy and paste much of the sample code being created in the community.
Taking the list above, what Ember does, via that one line, is essentially create this code for you automatically behind the scenes:
// Create the application namespace
App = Ember.Application.create({});

// Create the global router to manage page state via URLs
App.Router.map( function() {});

// Create the default application route to set application-level state properties
App.ApplicationRoute = Ember.Route.extend({});

// Create the default application template
<script type="text/x-handlebars" data-template-name="application">
  {{outlet}}
</script>
So, while the starter kit didn't explicitly define an application-scoped router, route or template, Ember ensured that they're created and available so that the foundation of your app is set and available to you. It's definitely okay to explicitly create the code. In fact, you may want to do so if you plan to pass data or set attributes for your instance of the application object.
Now you might be wondering about this "application template" getting automatically rendered and why you don't see it in
index.html. That's because it's optional to explicitly create the application template. If it's in the markup, Ember will immediately render it. Otherwise, it carries on processing other parts of your application as normal. The typical use-case for the application template is defining global, application-wide user interface elements, such as headers and footers.
Defining the application template uses the same style syntax as any other template except with one small difference: the template name doesn't need to be specified. So defining your template like this:
<script type="text/x-handlebars">
  <h1>Application Template</h1>
</script>
or this:
<script type="text/x-handlebars" data-template-name="application">
  <h1>Application Template</h1>
</script>
gives you the same exact results. Ember will interpret a template with no data-template-name as the application template and will render it automatically when the application starts.
If you update
index.html by adding this code:
<script type="text/x-handlebars" data-template-name="application">
  <h1>Application Template</h1>
  {{outlet}}
</script>
You'll now see that the contents of the header tag appear on top of the content of the index template. The Handlebars {{outlet}} directive serves as a placeholder in the application template, allowing Ember to inject other templates into it (serving as a wrapper of sorts), and allowing you to have global UI features such as headers and footers that surround your content and functionality. By adding the application template to
index.html, you've instructed Ember to:
- Automatically render the application template
- Inject the index template into the application template via the Handlebars {{outlet}} directive
- Immediately process and render the index template
An important takeaway is that all we did was add one template (application), and Ember immediately took care of the rest. It's these feature bindings that make Ember.js such a powerful framework to work with.
Setting up Routes
Routing is arguably the most difficult concept to understand in Ember, so I'll do my best to break it down to manageable steps. As a user navigates your application, there needs to be a method for managing the state of the various parts the user visits. That's where the application's router and location-specific routes come in.
The Ember router object is what manages this through the use of routes that identify the resources needed for specific locations. I like to think of the router as a traffic cop that's directing cars (users) to different streets (URLs & routes). The routes themselves are tied to specific URLs and, when the URL is accessed, the route's resources are made available.
Looking at
js/app.js again, you'll notice that a route has been created for the root page (index):
App.IndexRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['red', 'yellow', 'blue']); } });
However, there's no router instance. Remember that Ember will create a router by default if you don't specify one. It will also create a default route entry for the root of the application similar to this:
App.Router.map( function() { this.resource( 'index', { path: '/' } ); });
This tells Ember that, when the root of the application is hit, it should load the resources of a route object instance called IndexRoute if it's available. This is why, despite no router instance being declared, the application still runs. Ember internally knows that the root route should be named IndexRoute, will look for it, and load its resources, accordingly. In this case, it's creating a controller that will contain data to be used in the index template.
Since URLs are the key identifiers that Ember uses to manage the state of your application, each one will generally have its own route handler specified if resources need to be loaded for that section of the app. Here's what I mean: suppose that you have an app with three sections:
- Account: (URL: /account)
- Profile (URL: /profile)
- Gallery (URL: /gallery)
In most cases, each one of these sections will have its own unique resources that need to be loaded (e.g.: data or images). So you would create route handlers using the resource() method within Ember's application router object instance like this:
App.Router.map( function() { this.resource( 'accounts' ); this.resource( 'profiles' ); this.resource( 'gallery' ); });
This allows Ember to understand the structure of the application and manage resources, accordingly. The route definitions will correlate to individual route object instances which actually do the heavy lifting, like setting up or interfacing with controllers:
App.GalleryRoute = Ember.Route.extend({ setupController: function(controller) { controller.set('content', ['pic-1.png', 'pic-2.png', 'pic-3.png']); } });
So in the example above, when a user visits "/gallery", Ember.js instantiates the GalleryRoute route object, sets up a controller with data and renders the gallery template. Again, this is why naming conventions are so important in Ember.
Your application may also have nested URLs, like /account/new
For these instances, you can define Ember resources that allow you to group routes together, like so:
App.Router.map( function() { this.resource( 'accounts', function() { this.route( 'new' ); }); });
In this example, we used the
resource() method to group the routes together and the
route() method to define the routes within the group. The general rule of thumb is to use
resource() for nouns (Accounts and Account would both be resources even when nested) and
route() for modifiers (verbs like
new and
edit or adjectives like
favorites and
starred).
Apart from grouping the routes, Ember builds internal references to the controllers, routes and templates for each of the group routes specified. This is what it would look like (and again it touches on Ember's naming conventions):
"/accounts":
- Controller: AccountsController
- Route: AccountsRoute
- Template: accounts (yes it's lowercase)
"/accounts/new":
- Controller: AccountsNewController
- Route: AccountsNewRoute
- Template: accounts/new
When a user visits "/accounts/new" there's a bit of a parent/child or master/detail scenario that occurs. Ember will first ensure that the resources for accounts are available and render the accounts template (this is the master part of it). Then, it will follow-up and do the same for "/accounts/new", setting up resources and rendering the accounts/new template.
Note that resources can also be nested for much deeper URL structures, like this:
App.Router.map( function() { this.resource( 'accounts', function() { this.route( 'new' ); this.resource( 'pictures', function() { this.route( 'add' ); }); }); });
Next Steps
I've covered a lot of material in this post. Hopefully, it has helped to simplify some of the aspects of how an Ember application functions and how routes work.
We're still not finished, though. In the next entry, I'll dive into Ember's features for pulling back data and making it available with your app. This is where models and controllers come in, so we'll focus on understanding how the two work together.
Framework: Zend_Auth_Adapter_Cas Component Proposal
Table of Contents
1. Overview
2. References
3. Component Requirements, Constraints, and Acceptance Criteria
- This component will implement Zend_Auth_Adapter_Interface.
- This component will authenticate CAS ticket against CAS server.
- This component will return Zend_Auth_Result::SUCCESS when CAS ticket is valid.
- This component will return Zend_Auth_Result::FAILURE when CAS ticket is invalid.
- This component must support optional SSL / TLS encrypted transport.
- This component will not handle user redirects.
- This component will not save any data.
4. Dependencies on Other Framework Components
- Zend_Http_Client
- simplexml_load_string with support for namespaces:
5. Theory of Operation
CAS authentication is typically handled by a remote server.
6. Milestones / Tasks
This CAS authentication adapter is capable of working with CAS versions 1, 2 and 3
- Milestone 1: [DONE] Initial proposal published for review.
- Milestone 2: [DONE] Working prototype and some examples.
- Milestone 3: [DONE] Working prototype checked into the incubator supporting use cases.
- Milestone 4: Unit tests exist, work, and are checked into SVN.
- Milestone 5: Initial documentation exists.
7. Class Index
- Zend_Auth_Adapter_Cas
8. Use Cases
Load parameters from Zend_Config:
cas.ini
or load directly:
Authenticate in an action
Standalone instance for non-MVC environments
9. Class Skeletons
35 Commentscomments.show.hide
Jan 07, 2009
Tim Steiner
<p>I'd definitely have interest in using this. Our university is beginning to embrace CAS for web app authentication. I'm currently using the PEAR CAS library, but having CAS integrated into ZF would be great!</p>
Feb 12, 2009
Martin Cleaver
<p>Hi Teemu,<br />
Is it fair to believe this code is only in the proposal stage, and is not at all ready for production use?</p>
<p> Just wondering... as we'd like to use it, but as per <a class="external-link" href=""></a> it looks like development didn't yet progress.</p>
<p>Thanks,<br />
Martin</p>
Feb 12, 2009
Martin Cleaver
<p>I note <a class="external-link" href=""></a>, which could be a duplicate.</p>
Apr 02, 2009
Micah Sutton
<p>This proposal seems stuck...I've created an implementation and example file, but can't modify this proposal.</p>
May 11, 2009
Jeremy Postlethwaite
<p>I updated this page to include a functional class which I have in use at UC Davis <a class="external-link" href=""></a></p>
<p>This class should be interoperable with CAS 1, 2 and 3.</p>
Jul 02, 2009
Greg Gomez
<p>Hello:</p>
<p>What's involved with getting this into the release version?</p>
<p>I've got it running in a test environment with no problems so far.</p>
<p>Thanks!<br />
Greg</p>
Jul 04, 2009
Jeremy Postlethwaite
<p>We just need to get more people to test it and make sure it works.</p>
<p>I am glad to hear that it is working just fine.</p>
<p>Just curious, what version of CAS are you using?</p>
<p>I am using it on CAS version 3.</p>
<p>Thanks,</p>
<p>Jeremy</p>
Jul 06, 2009
Greg Gomez
<p>Hi, Jeremy:</p>
<p>My systems folk tell me we're running 3 as well.</p>
<p>Thanks!<br />
Greg</p>
Jul 30, 2009
Jeremy Postlethwaite
<p>I moved this into Ready for Recommendation to get this into the code base.</p>
Jul 31, 2009
Greg Gomez
<p>Hi, Jeremy:</p>
<p>Great! Let me know if there's anything I can do to help.</p>
<p>Greg</p>
Jul 31, 2009
Jeremy Postlethwaite
<p>I added a standalone version to this script for people who like to use only the Zend_Auth_Adapter_Cas class and not the MVC features of Zend.</p>
<p>All one needs to do is take "UC-02 in non-MVC," save it as a single web page and edit your CAS configuration:</p>
$config = array(
'hostname' => 'cas.example.org',
'port' => 443,
'path' => 'cas/',
);
<p>I have set this up so you need the following directory structure:</p>
example.org
`--public
`--index.php
<p>You will need the entire Zend Library, I have just shown where to put your key files.</p>
<p>The class Zend_Auth_Adapter_Cas would be set in the folder:</p>
<p>example.org/library/Zend/Auth/Adapter/Cas.php</p>
<p>The UC-02 web page would be put into:</p>
<p>example.org/public/index.php</p>
Aug 20, 2009
Matthew Weier O'Phinney
<ac:macro ac:<ac:parameter ac:Zend Acceptance</ac:parameter><ac:rich-text-body>
<p>This proposal is accepted for immediate development in the standard incubator, with the following requirements:</p>
<ul>
<li>Coding standards:
<ul>
<li>getLogin/LogoutURL methods: URL -> Url</li>
</ul>
</li>
<li>Design issues:
<ul>
<li>Add getters for all setters; use getters internally in all methods. This offers better encapsulation.</li>
<li>Don't access superglobals directly when possible; these values should be passed in.
<ul>
<li>Add methods:
<ul>
<li>(get|set)QueryParam(s)</li>
<li>(get|set)SelfUrl()</li>
</ul>
</li>
<li>Allow passing each of these via constructor:
$adapter = new Zend_Auth_Adapter_Cas(array(
'selfUrl' => $request->getRequestUri(),
'queryParams' => $request->getQuery(),
// ...
));
</li>
<li>Only if the values are not passed, should they be retrieved from the relevant superglobal</li>
</ul>
</li>
</ul>
</li>
</ul>
</ac:rich-text-body></ac:macro>
Sep 29, 2009
Jeremy Postlethwaite
<p>I have committed the adapter to the incubator.</p>
<p>I will begin working on Matthew's suggested changes.</p>
Dec 16, 2009
Jeremy Postlethwaite
<p>I have updated the CAS authentication module.</p>
Apr 06, 2010
Jay Klehr
<p>Jeremy, thanks for your work on this adapter so far, I just started playing with it (talking to a RubyCAS server).</p>
<p>Things seem to work well, but following the use case above left me a little bit short when first implementing, so I wanted to mention what I had to do so it can either be documented, or perhaps something altered so it functions as the use case demonstrates.</p>
<p>calling ->hasTicket() after being redirected from the CAS login URL would just result in an endless redirect loop for me (the CAS server is correctly appending the ticket to the GET param "ticket").</p>
<p>In my controller plugin (preDispatch) I had to add the following calls before the ->hasTicket() call in order to get this to work as expected:</p>
<p>$adapter->setQueryParams($request->getQuery());<br />
$adapter->setTicket();</p>
<p>This would then correctly populate the adapter with the right session ticket from the query string, and then store it in the protected _ticket property, thus stopping the endless redirect.</p>
Feb 10, 2011
Lance Parsons
<p>Thanks for the notes Jay. I had the same problem and adding those two lines seems to have fixed things. It seems these should be added to the use case example(s) above. Or am I doing something incorrectly?</p>
Apr 27, 2010
Andrew Sharpe
<p>Hi All,</p>
<p>Am I correct in assuming that the working code for this proposal is only available via cut/paste from this proposal page? A brief look in SVN (<a class="external-link" href=""></a>) does not reveal the code for this proposal, even though it was granted incubator acceptance.</p>
<p>Could someone point me in the right direction for where this code is being stored please?</p>
<p>Thanks, Andrew</p>
Apr 27, 2010
Jay Klehr
<p>Andrew, It's in the incubator currently.</p>
<p><a class="external-link" href=""></a></p>
<p>Jay</p>
Apr 28, 2010
Andrew Sharpe
<p>Cheers Jay</p>
Sep 09, 2010
Markus Thielen
<p>Hi all,</p>
<p>unfortunately I can't create an issue for this incubator component. It's not listed in the issue tracker.</p>
<p>I use this component instead of phpCAS, and it works really well. But there is one thing I missed so far. I need to have access to the full XML response from the server. In my case the server sends additional authentication information within the response XML. So I modified getResponseBody():</p>
protected function getResponseBody($body) {
$xml = simplexml_load_string($body, 'SimpleXMLElement', 0, $this->_xmlNameSpace);
if(isset($xml->authenticationSuccess))
else {
.....
}
<p>and authenticate():</p>
public function authenticate()
{
if($result = $this->validateTicket($this->getTicket(), $this->getService()))
else
}
<p>Thus the whole behavior is the same except that every node from the response is also in the $messages from Zend_Auth_Result.</p>
<p>Will you consider this change?</p>
<p>Thanks,<br />
Markus </p>
Sep 09, 2010
Greg Gomez
<p>Hi, Markus:</p>
<p>Will your changes break existing instances?</p>
<p>Thanks,<br />
Greg</p>
Sep 20, 2010
Jeremy Postlethwaite
<p>Hi Markus,</p>
<p>I am glad you are getting good use out of the adapter.</p>
<p>I committed your suggestion as it did not break backwards compatibility.</p>
<p>I am sure others may want the full response as well.</p>
<p>Thanks for your input!</p>
<p>Jeremy</p>
Sep 10, 2010
Markus Thielen
<p>Hi Greg,</p>
<p>no, i dont think so. In the end, the only difference is that Zend_Auth_Result has $messages even if the result is SUCCESS.</p>
<p>Thanks,<br />
Markus</p>
Sep 10, 2010
Greg Gomez
<p>Hi, Markus:</p>
<p>Then I'm all for it!</p>
<p>I vote yes.</p>
<p>Thanks!<br />
Greg</p>
Sep 10, 2010
Jeremy Postlethwaite
<p>I can test this out next week (Sept 13th - 17th).</p>
<p>I also need to finish the unit tests so I can get this checked in.</p>
<p>If anyone has any thoughts about the unit tests, please let me know.</p>
<p>Thanks,</p>
<p>Jeremy</p>
Sep 10, 2010
Greg Gomez
<p>Hi, Jeremy:</p>
<p>I'm a total novice with unit testing in general, so I'm probably not much help. But I'll do what I can. What do you need?</p>
<p>Thanks!<br />
Greg</p>
Sep 10, 2010
Jeremy Postlethwaite
<p>We need to have all of our test cases setup:</p>
<ul class="alternate">
<li>CAS server is unavailable</li>
<li>CAS authentication succeeded</li>
<li>CAS authentication failed</li>
<li>CAS returns bad xml</li>
<li>CAS returns wrong information</li>
<li>Any authentication tests required by Zend_Auth_Adapter</li>
</ul>
<p>The tests will eventually end up here:</p>
<p><a class="external-link" href=""></a></p>
Oct 13, 2010
Jeremy Postlethwaite
<p>This adapter is affected by the PHP bug:</p>
<p><a class="external-link" href=""></a></p>
Oct 29, 2010
Henry Umansky
<p>Jeremy,<br />
Does this bug affect PHP 5.3.2? I recently included your Cas.php code and followed your directions, but kept getting a redirect loop. So then I added Jay Klehr's suggestion, but then it returns the following error messages:</p>
<p>[0] => Authentication failed: Failed to connect to server<br />
[1] => Unable to Connect to ssl://fed.princeton.edu:443. Error #0:</p>
<p>This only happens when it is trying to validate the ticket. The class is able to redirect me successfully to my CAS server, but once I authenticate, it chokes on the ticket validation. Is there any workaround? Or am I doing something wrong. Here is my casAction() for reference:</p>
<p><a href=""></a></p>
<p>Perhaps I'm using it incorrectly.</p>
<p>Thank you,<br />
Henry</p>
Oct 29, 2010
Jeremy Postlethwaite
<p>Hi Henry,</p>
<p>I have not coded a work around yet. At my university (UC Davis), we have a test environment that is using a self-signed certificate. </p>
<p>The test environment is not affected by the PHP bug. The production server is identical with an SSL certificate from Geotrust. I get the same error:</p>
<p>Unable to Connect to ssl://...</p>
<p>There are a few recommendations in the bug report:</p>
<p><a class="external-link" href=""></a></p>
<p>I have not had a chance to check if I can pass options to Zend_Http_Client to workaround the error.</p>
<p>Someone changed the default ciphers:</p>
<p>'ciphers' => 'ALL:!AES:!3DES:!RC4:@STRENGTH', // OK:LOW</p>
<p>when using:</p>
<p>stream_context_create()</p>
<p>Zend_Http_Client is not using that method, so I am not sure if I can do something similar.</p>
Oct 29, 2010
Henry Umansky
<p>Jeremy,</p>
<p>I fixed the code. Here is the patch file. Essentially I changed the adapter from Socket to Curl and that did the trick. Perhaps we can make the adapter a Zend_Auth_Cas configuration option? Or just replace the Socket adapter overall? Will that break backwards compatibility? What are your thoughts?</p>
<p>-Henry</p>
<p>Index: library/Zend/Auth/Adapter/Cas.php<br />
===================================================================<br />
— library/Zend/Auth/Adapter/Cas.php (revision 23274)<br />
+++ library/Zend/Auth/Adapter/Cas.php (working copy)<br />
@@ -689,7 +689,10 @@<br />
require_once 'Zend/Http/Client.php';</p>
<p>try {<br />
+ $config = array(<br />
+ 'adapter' => 'Zend_Http_Client_Adapter_Curl',<br />
+ );<br />
+ $client = new Zend_Http_Client($this->getValidationURL(), $config);</p>
<p>$client->setParameterGet($this->getValidationParams($ticket, $service));</p>
Oct 29, 2010
Jeremy Postlethwaite
<p>Thanks Henry,</p>
<p>I will try this out next week. If all goes well, I will put it in the incubator.</p>
Nov 02, 2010
Jeremy Postlethwaite
<p>I am putting in code to use different client adapters with options.</p>
<p>For example, I need to specify a Certificate Authority file to be able to use cURL.</p>
Nov 03, 2010
Jeremy Postlethwaite
<p>I checked in a new revision with the ability to specify an adapter with Zend_Http_Client.</p>
<p>Here is the latest file in the incubator:</p>
<p><a class="external-link" href=""></a></p>
<p>This is revision: 23291</p>
<p>Please see:</p>
<p>Use cases: UC-01 in MVC, at the top of the page, for details on how to specify the client adapter: Zend_Http_Client_Adapter_Curl</p>
Nov 04, 2010
Henry Umansky
<p>Works like a charm!!!! For sake of yet another use case, I'll share my code below.</p>
<p>Here is my sample page:
<a class="external-link" href=""></a></p>
<p>Here is my sample config file:</p>
<p>[cas]<br />
hostname = fed.princeton.edu<br />
port = 443<br />
path = "cas"<br />
clientAdapter.adapter = "Zend_Http_Client_Adapter_Curl"</p>
<p>Finally, here is the actual call to grab the auth adapter:</p>
<p>$options = new Zend_Config_Ini(APPLICATION_PATH . '/configs/config.ini');<br />
$adapter = new Zend_Auth_Adapter_Cas($options->cas->toArray());</p>
<p>Can't get any easier, now to move the code from incubator to production so I don't have to have two separate "svn:externals" in my subversion working copy.</p>
<p>Thank you Jeremy!!!</p> | http://framework.zend.com/wiki/display/ZFPROP/Zend_Auth_Adapter_Cas+-+Jeremy+Postlethwaite?focusedCommentId=15565284 | CC-MAIN-2014-41 | refinedweb | 2,528 | 56.35 |
import substitution
Definition
Use import substitution in a sentence
“ You may want to make an import substitution if you think that it will help you earn more money later. ”
“ The import substitution idea was an excellent one that proved to be effective because consumers preferred to buy domestic when available. ”
“ I began buying more American-made goods after a public campaign about import substitution, and I feel great about supporting our local workers and economy. ”
SOA Patterns - Reservation
When you use transactions in "traditional" n-tier systems life is relatively simple. For instance, when you run a transaction and an error or fault occurs, you abort the transaction and easily roll back any changes, getting back your system-wide consistency and peace of mind. The reason this is possible is that a transaction isolates changes made within it from the rest of the world. One of the base assumptions behind transactions is that the time that elapses from the beginning of the transaction until it ends is short. Under that assumption we can afford the luxury of letting the transaction hold locks on our resources (such as databases) and mask changes from others while the transaction is in progress. Transactions provide four basic guarantees – Atomicity, Consistency, Isolation and Durability – usually remembered by their acronym, ACID.
Unfortunately, in a distributed world, SOA or otherwise, it is rarely a good idea to use atomic short-lived transactions (see the Cross-Service Transactions anti-pattern in chapter 10 for more details). Indeed, the fact that cross-service transactions are discouraged is one of the main reasons to consider using the Saga pattern in the first place.
One of the obvious shortcomings of Sagas is that you cannot perform rollbacks. The two conditions mentioned above, locking and isolation, no longer hold, so you cannot provide the needed guarantee. Still, since interactions, and especially long-running ones, can fail or be canceled, Sagas offer the notion of Compensations. Compensations are cool; we can't have rollbacks, so instead we will reverse the interaction's operation and have a pseudo rollback. If we added one hundred (dollars/units/whatnot) during the original activity we'll just subtract the same 100 in the compensation. Easy, right?
Wrong – as you probably know, it isn’t easy. Unfortunately, there are a number of problems with compensations. These problems come from the fact that, unlike ACID transactions, the changes made by the Saga activities are not isolated. The lack of isolation means that other interactions with the service may operate on the data that was modified by an activity of other sagas, and render the compensation impossible. To give an extreme example, if a request to one service changes the readiness status of the space shuttle to “all-set” and another service caused the shuttle to launch based on that status, it would be a little too late for the first service to try to reverse the “all-set” status now that the “bird has left the coop”. A more down to earth (pardon the pun) business scenario is any interaction where you work with limited resources e.g. ordering from a, usually limited, stock.
Consider, for instance, the scenario in figure 6.1 below. A customer orders an item. The ordering service requests the item from the warehouse as it wants to ship the item to the customer (probably by notifying another service). Meanwhile on the warehouse service the item ordered causes a restocking threshold to be hit, which triggers a restocking order from a supplier. Then the customer decides to cancel the order – now what?
Figure 6.1 A customer order triggers a restocking order at the warehouse; when the customer later cancels, the restocking order and the intermediate stock state are left in question.
Should the restocking order be cancelled as well? Can it be cancelled under the ordering terms of the supplier? Also, a customer requesting the item between the ordering and the cancellation might get an out-of-stock notice, which will cause him to go to our competitors. This can be especially problematic for orders that are prone to cancellation, like hotel bookings, vacations, etc.
Another limitation of compensations, and of the Saga pattern itself for that matter, is that it requires a coordinator. A coordinator means placing trust in an external entity, i.e., one outside (most of) the services involved in the saga, to set things straight. This is a challenge for some of the SOA goals as it compromises autonomy and introduces unwanted coupling to the external coordinator.
The question then is
How can we efficiently provide a level of guarantee in a loosely coupled manner while maintaining services’ autonomy and consistency?
We already discussed the limitations of compensations, which of course is one of the options for solving this challenge. Again, one problem is that we can't afford to make mini changes, since we will then be dependent on an external party to set the record straight. The other problem with compensations is that we expose these "semi-states" – which are, essentially, internal details of the services – to the outside world. Increasing the footprint of the services' contract, especially with internal details, makes the services less flexible and more coupled to their environment (see also the white box services anti-pattern in chapter 10).
We’ve also mentioned that distributed transactions is not the answer since they both lock internal resources for too long (a Saga might go on for days..?) as well as put excess trust on external services which may be external to the organization.
This seems like a quagmire of sorts, fortunately, real life already found a way to deal with a similar need for fuzzy, half guarantees – reservations!
Implement the Reservation pattern and have the services provide a level of guarantee on internal resources for a limited time
Figure 6.2 The Reservation pattern. A service that implements Reservation treats some messages as "reserving": it tries to secure an internal resource and sends a confirmation if it succeeds. When a message is considered "confirming", the service validates that the reservation still holds. In between, the service can choose to expire reservations based on internal criteria.
The Reservation pattern means there will be an internal component in the service that will handle the reservations. Its responsibilities include
§ Reservation - making the reservation when a message that is deemed "reserving" arrives. For instance, when an order arrives, in addition to recording the order in durable storage (e.g. a database), the service needs to set a timer or an expiration time for the order confirmation; alternatively, it can set some marker that the order is not final.
§ Validation – making sure that a reservation is still valid before finalizing the process. In the ordering scenario mentioned before that would be making sure the items designated for the order were not given to someone else.
§ Expiration – marking reservations invalid when conditions change. E.g., if a VIP customer wants the item I reserved, the system can provision it for her. It should also invalidate my reservation so that when I finally try to claim it the system will know it's gone. Expiration can also be timed, as in "we're keeping the book for you until noon tomorrow".
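Taken together, the three responsibilities above amount to a small in-service component. The sketch below is only an illustration of the idea, not code from any real service: the names (ReservationManager, reserve, confirm) are invented, and expiration is handled passively, by checking deadlines whenever a message arrives, rather than with a timer.

```python
import time
import uuid


class ReservationManager:
    """Toy in-service reservation ledger: reserve, validate on confirm, expire."""

    def __init__(self, stock, ttl_seconds=60):
        self.stock = stock          # free units of the internal resource
        self.ttl = ttl_seconds
        self.reservations = {}      # reservation id -> expiry timestamp

    def reserve(self, now=None):
        """Handle a 'reserving' message: secure a unit for a limited time."""
        now = time.time() if now is None else now
        self._expire(now)
        if self.stock == 0:
            return None             # nothing left to promise
        self.stock -= 1
        rid = str(uuid.uuid4())
        self.reservations[rid] = now + self.ttl
        return rid                  # confirmation sent back to the consumer

    def confirm(self, rid, now=None):
        """Handle a 'confirming' message: validate the reservation still holds."""
        now = time.time() if now is None else now
        self._expire(now)
        return self.reservations.pop(rid, None) is not None

    def _expire(self, now):
        """Drop overdue reservations and return their units to the pool."""
        expired = [r for r, deadline in self.reservations.items() if deadline <= now]
        for rid in expired:
            del self.reservations[rid]
            self.stock += 1
```

Note that the stock is decremented at reservation time and returned to the pool on expiration; a confirming message that arrives after the deadline simply finds its reservation gone, which is exactly the "limited guarantee" behavior described above.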
Reservations can be explicit, i.e. the contract would have a ReserveBook action, or implicit. In the implicit case the service decides internally which messages are considered reserving and which confirming; e.g., an action like Order will trigger the internal reservation, and an action like closing the saga will serve as the confirming message. When the reservation is implicit, the service consumer's implementation will probably be simpler, as the consumer designers are likely to treat reservation expiration as "simple" failures, whereas when it is explicit they are likely to handle the reservation state explicitly.
Reservations happen in business transactions world-wide every day. The most obvious example is booking a hotel room. You send in a request for a room (initiate a saga) saying you'd arrive on a certain date, say for a conference, and check out on another (complete the saga). The hotel says OK, we have a room for you (reservation) – provided you confirm your arrival by a set date (limited time). Even if everything went well, you may still arrive at the hotel only to find out your room has been given to another person (limited guarantee). The idea of the Reservation pattern is to copy this behavior to the interaction of services, so that services that support reservations offer a sort of "limited lock" for a limited time and with a limited level of guarantee. Limited level of guarantee means that, like real life, services can overbook and then resolve that overbooking via various strategies, such as first come, first served; VIPs first; etc.
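That "limited level of guarantee" can be made concrete with a small overbooking sketch. Assume a service that deliberately granted more reservations than it has capacity and must now decide which holders to honor; the policy shown (VIPs first, then first come, first served) is just one of the strategies mentioned above, and all names are invented for the illustration.

```python
def resolve_overbooking(reservations, capacity):
    """Pick which reservations to honor when more were granted than capacity.

    Each reservation is a (customer, is_vip, reserved_at) tuple.
    Policy: VIPs first, then first come, first served.
    """
    # Sort so VIPs come before non-VIPs, and earlier holds before later ones.
    ranked = sorted(reservations, key=lambda r: (not r[1], r[2]))
    honored = ranked[:capacity]
    bumped = ranked[capacity:]  # these holders get the "sorry, it's gone" answer
    return honored, bumped
```

Whatever the policy, the point is that the conflict is resolved inside the service, at confirmation time, without exposing internal state to the consumers.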
It is easy to see Reservation applied to services that handle "real-life" reservations as part of their business logic, such as a booking service for hotels (used in the example above) or an airline, etc. However, reservations are suitable for a lot of other scenarios where services are called to provide guarantees on internal resources. For instance, in one system I built we used reservations as part of the saga initiation process. The system uses the Service Instance pattern (see chapter 3) where some services are stateful (the reasons are beyond the scope of this discussion). Naturally, services have limited capacity to handle consumers (i.e. an instance can handle n concurrent sagas/events).
This means that when a saga is initialized, all the participants of the saga need to know the instances that are part of the saga. As long as a single service instance initiates sagas everything is fine. However, as illustrated in figure 6.3 below, when two or more services (or instances) initiate sagas concurrently they may (and given enough load/time they will) both try to allocate the same service instance to their respective sagas. In the illustration we see that both Initiator A and Initiator B want to use Participant A and Participant B. Participant A has a capacity of 2, so everything is fine for both initiators. Participant B, however, has limited capacity, so at least one of the sagas will have to fail the allocation, i.e. not start.
Figure 6.3 : Sample for a situation that can benefit from the reservation pattern
The reservation pattern enabled us to manage this resource allocation process in an orderly manner by implementing a two pass protocol (somewhat similar to a two phase commit). The initiator asks each potential participant to reserve itself for the saga. Each participant tries to reserve itself and notify back if it is successful – so in the above scenario, A would say yes to both and B would say yes to one of them. If the initiator gets an OK from all the involved services (within a timeout) it will tell all the participants the specific instances within the saga (i.e. initiate it).
The participants only reserve themselves for a short period of time. Once an internally set timeout elapse the participants remove the commitment independently. As a side note, I’ll just say that the initiator and other saga members can’t assume that the participant will be there just because they are “officially” part of the saga and the system still needs to handle the various failure scenarios. The Reservation pattern is used here only to help prevent over allocation and it does not provide any transactional guarantees.
A reservation is somewhat like a lock, and thus it "somewhat" introduces some of the risks distributed locks present. These risks aren't inherent in the pattern but can easily surface if you don't pay attention during implementation (e.g. using database locks for the implementation).
The first risk worth discussing is deadlock. Whenever you start reserving anything, especially in a distributed environment, you introduce the potential for deadlocks. For instance, if both participants had capacity for a single saga, initiator A contacted participant A first and participant B next, and initiator B used the reverse order, we would have a deadlock potential. In this case there are several mechanisms that prevent that deadlock. The first is inherent to the Reservation pattern, where the participants release the "lock" themselves. However, if, for example, there is a retry mechanism to initiate the sagas (as both would fail after the timeout) and the same resources are allocated over and over, there may be a deadlock after all.
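A common mitigation for that remaining risk, sketched below, is to have every initiator reserve participants in one agreed-upon global order. Here the order is simply a sort on a participant identifier; that choice is illustrative, not something the sample system prescribes:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

/** Sketch: reserving participants in a fixed global order to avoid deadlock. */
public class OrderedReservation {

    /** Hypothetical participant descriptor, identified by a URI. */
    public record Participant(String uri) { }

    /**
     * Every initiator sorts the same way, so no two initiators ever hold
     * reservations in conflicting orders (the classic lock-ordering rule).
     */
    public static List<Participant> reservationOrder(List<Participant> wanted) {
        return wanted.stream()
                     .sorted(Comparator.comparing(Participant::uri))
                     .collect(Collectors.toList());
    }
}
```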
Another risk to watch out for when implementing Reservations is denial of service (whether malicious or a byproduct of misuse). DoS can happen for reasons similar to those discussed for deadlock (i.e. if you incur a deadlock you also have a DoS). Another way is by exploiting the reservations through constant re-reserving. Depending on the reservation timeout, regular firewalls might fail to detect the DoS, so you may want to consider using a Service Firewall (chapter 4) to help mitigate this threat.
Besides the risks discussed above, another thing to pay attention to is that when you introduce Reservation, you are likely to add network calls. In the system discussed above, for instance, the pattern introduces another call that tells the saga members which instances are involved in the saga.
In addition to the Service Firewall pattern, mentioned above, another pattern related to Reservations is the Active Service pattern (see chapter 2). The Active Service pattern can be used to handle reservation expiration when the expiration is implemented with timed events. Note, however, that it is sometimes better, resource-wise, to handle expiration passively rather than actively, as we'll see when looking at implementation options in the next section.
Unlike a lot of the patterns in this book, the Reservation pattern is more a business pattern than a technological one. This means there isn’t a straight one-to-one technology mapping to make it happen. On the other hand, code-wise, the pattern is relatively easy to implement.
One thing you have to do is keep a live thread at the service to make sure that when the lease or reservation expires, someone will be there to clean up. One option is the Active Service pattern mentioned above. You can use technologies that support timed events to provide the "wakeup service" for you. For instance, if you are running in an EJB 3.0 server you can use single-action timers, i.e. timers that raise their event only once, to accomplish this. Code listing 6.1 below shows a simple code excerpt that sets a timer to go off based on a time received in a message. Other technologies provide similar mechanisms to accomplish the same effect.
Code Listing 6.1 Setting a single-action timer event based on a message (using JBoss)
public class TimerMessage implements MessageListener {
    @Resource
    private MessageDrivenContext mdc;
    .
    .
    .
    public void onMessage(Message message) {
        ObjectMessage msg = null;
        try {                                                          #1
            if (message instanceof ObjectMessage) {
                msg = (ObjectMessage) message;
                TimerDetailsEntity e = (TimerDetailsEntity) msg.getObject();
                TimerService timerService = mdc.getTimerService();
                // Timer createTimer(Date expiration, Serializable info)   #2
                Timer timer = timerService.createTimer(e.getDate(), e);
            }
        } catch (JMSException e) {
            e.printStackTrace();
            mdc.setRollbackOnly();
        } catch (Throwable te) {
            te.printStackTrace();
        }
    }
.
.
.
(Annotation) <#1 some vanilla code to process a message and get the interesting entity out of it >
(Annotation) <#2 Here is where we set the single action timer based on the info in the message we’ve just got>
Timer-based cancellation, as described above, might be overkill if the reservation implementation is simple. For instance, the Reservation in listing 6.2 below (implemented in C#) is used by the participants in the saga-and-reservation sample from the previous section.
Code Listing 6.2 Simple in-memory, non-persistent reservation
public Guid Reserve(Guid sagaId)
{
    try
    {
        Rwl.TryWLock();
        var isReserved = Allocator.TryPinResource(localUri, sagaId);
        if (!isReserved)                                               #1
            return Guid.Empty;
        // Some code to set the expiration                             #2
        return sagaId;                                                 #3
    }
    finally
    {
        Rwl.ExitWLock();
    }
}
(Annotation) <#1 The allocator is a resource allocation control, which manages, among other things, the capacity of the service. If we didn’t succeed in marking the service as belonging to the Saga, we can’t allocate the service to the specific Saga>
(Annotation) <#2 Here is where we need to add code to mark when the reservation expires; the previous example (6.1) used timers, we'll try to do something different here>
(Annotation) <#3 A successful reservation returns the SagaId; this assures the caller that the reply it got is related to the request it sent (a simple Boolean might be confusing)>
Since the Reservation in listing 6.2 does not involve heavy service resources (like, say, a database), we can implement passive handling of reservation expiration, which will be more efficient than a timer-based one. Listing 6.3 below shows a revised reservation implementation, which removes timed-out reservations before it commits a new one. Note that an expired reservation can still be committed if no other reservation occurred in between or the capacity of the service is not exceeded.
Code Listing 6.3 Passive reservation expiration handling (added on top of the code from listing 6.2)
public Guid Reserve(Guid sagaId)
{
    try
    {
        Rwl.TryWLock();
        RemoveExpiredReservations();                                   #1
        var isReserved = Allocator.TryPinResource(localUri, sagaId);
        if (!isReserved)
            return Guid.Empty;
        OpenReservations[sagaId] = DateTimeOffset.Now + MAX_RESERVATION;   #2
        return sagaId;
    }
    finally
    {
        Rwl.ExitWLock();
    }
}

private void RemoveExpiredReservations()
{
    var reftime = DateTimeOffset.Now;
    var ids = from item in OpenReservations where item.Value < reftime select item.Key;
    if (ids.Count() == 0) return;
    var keys = ids.ToArray();
    foreach (var id in keys)
    {
        OpenReservations.Remove(id);
        Allocator.FreePinnedResources(id);
    }
}
(Annotation) <#1 Added a small method (RemoveExpiredReservations, which also appears in the listing) to clean expired reservations. This method is run every time the service needs to handle a new reservation request. Note that there is no timer involved; reservations are only cleaned when there is a new reservation to process>
(Annotation) <#2 Instead of a timer, the reservation is made by marking down when it will expire>
The code samples above show that implementing Reservation can be simple. This doesn't mean that other implementations can't be more complex, for example if you want or need to persist the reservation or distribute a reservation between multiple service instances, but at its core it shouldn't be a heavy or complex process.
Another implementation aspect is whether reservations are explicit or implicit. Explicit reservation means there will be a distinct "Reserve" message. This usually means there will also be a "Commit" type message, and that the service or workflow engine that requests the reservation might find itself implementing a two-phase-commit type protocol, which isn't very pleasant, to say the least.
The other alternative is implicit reservation, where the service decides internally when to reserve, under what conditions to commit the reservation, and when to reject it. As usual, the tradeoff is between a simple implementation for the service and a simple implementation for the service consumer.
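To make the explicit variant concrete, here is a minimal in-memory sketch of a reserve/commit contract. The class name, the single-slot capacity, and the millisecond timeout are illustrative assumptions, not taken from the sample system; expiry is handled passively, as in listing 6.3:

```java
import java.util.UUID;

/** Minimal sketch of an explicit reserve/commit reservation contract. */
public class ExplicitReservation {
    private UUID reservedFor;        // which saga currently holds the single slot
    private long expiresAtMillis;    // passive expiry: checked on each call
    private final long timeoutMillis;

    public ExplicitReservation(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** First pass: try to take the slot for the given saga. */
    public synchronized boolean reserve(UUID sagaId) {
        long now = System.currentTimeMillis();
        boolean heldByOther = reservedFor != null
                && now < expiresAtMillis
                && !reservedFor.equals(sagaId);
        if (heldByOther) {
            return false;            // someone else holds a live reservation
        }
        reservedFor = sagaId;        // take (or refresh) the slot
        expiresAtMillis = now + timeoutMillis;
        return true;
    }

    /** Second pass: only the saga holding a live reservation may commit. */
    public synchronized boolean commit(UUID sagaId) {
        return sagaId.equals(reservedFor)
                && System.currentTimeMillis() < expiresAtMillis;
    }
}
```

The consumer-facing protocol is more involved (two calls, a failure path), which is exactly the tradeoff the paragraph above describes.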
As usual, we wrap up the pattern by taking a brief look at some business drivers (or scenarios) that can push us toward using the Reservation pattern.
In essence, the main driver for Reservation is the need for commitment from resources, and since it is a complementary pattern to Sagas it also has similar quality attributes. As mentioned above, Reservation helps provide partial guarantees in long-running interactions; thus the quality attribute that points us toward it is integrity.
Table 6.2 Reservation pattern quality attribute scenarios. These are the architectural scenarios that can make us think about using the Reservation pattern.
I'm quite shaky with assembly language and have to code a bubble sort procedure. Any help would be appreciated. Schlotto 12/3/96 5:45 PM
As you know, in bubble sort after each pass you can be sure that the greatest item is at the end of the array. Here you have a proc that sorts; I assumed you pass 2 parameters: the first is the number of items (int = word) to sort, the other is the address of the array.
bub_sort proc near
        mov   bp,sp
        mov   cx,[bp+4]     ; cx = number of items (assumes at least 2)
        dec   cx            ; n items -> n-1 comparisons per pass
loopout:
        mov   si,[bp+2]     ; si = address of the array
        mov   bx,'f'        ; bx is a flag: if bx='f' no change was made during the pass
        push  cx
loopin:
        mov   ax,[si]
        cmp   ax,[si+2]
        jng   cont
        mov   bx,'t'        ; mark that a swap happened
        xchg  ax,[si+2]
        mov   [si],ax
cont:
        inc   si
        inc   si
        loop  loopin
        cmp   bx,'f'
        pop   cx
        loopne loopout      ; another pass only if something was swapped
        ret   4
bub_sort endp
Pay attention: if AX, BX, CX, BP, SI or the flags are important to you, make sure to save them before they are corrupted.
Regards, and don't be shaky; assembler is just another language!
logb, logbf, logbl — get exponent of a floating-point value
Synopsis
#include <math.h>
double logb(double x);
float logbf(float x);
long double logbl(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
logb():
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| _XOPEN_SOURCE >= 500
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
logbf(), logbl():
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
C99, POSIX.1-2001, POSIX.1-2008.
See Also
ilogb(3), log(3)
Colophon
This page is part of release 5.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
ilogb(3), remquo(3).
The man pages logbf(3) and logbl(3) are aliases of logb(3). | https://dashdash.io/3/logbf | CC-MAIN-2021-43 | refinedweb | 155 | 61.22 |