.... Students whose paper was today, kindly give some guidance related to the questions. Please share your current paper of CS411. Here are the past paper attachments.

My paper today:

StackPanel stackPanel = new StackPanel();
Button button = new System.Windows.Controls.Button();
button.Width = 50;
button.Height = 70;
button.HorizontalAlignment = HorizontalAlignment.Right;
stackPanel.Children.Add(button);

std::ifstream f("test.txt", std::ifstream::in);
if (f.good() && a != f.peek())
{
    a = f.get();
}

You had to describe how these functions work.

Did the MCQs come from past papers? Dear, looking forward to your swift response.

There were a total of 23 questions: 18 MCQs, 2 questions of 3 marks, and 3 questions of 5 marks.

Write 2 differences between an interface and a class in C#. (3 M)
Name the three (3) routing strategies that can be chosen by each routed event when it is registered. (3 M)
Write 3 differences between Margin and Padding. (5 M)

Two questions were about coding; in both you had to say whether the code was correct: if correct, write the output, and if wrong, point out the errors and write the corrected code.

Note: one of the coding questions was based on this code, but I am not sure whether the code was written exactly like this or not:

using System;

class Program
{
    static void Main()
    {
        while (true) // Loop indefinitely
        {
            Console.WriteLine("Enter input:"); // Prompt
            string line = Console.ReadLine(); // Get string from user
            if (line == "exit") // Check string
            {
                break;
            }
            // Write the string to a file.
            {
                System.IO.StreamWriter file = new System.IO.StreamWriter("c:\\student.txt");
                file.WriteLine(lines);
            }
            file.Close();
        }
    }
}

The second question involved raising a number to a power; if the power of the number is 0, the answer comes out as 1. But I do not remember the code, bro.

© 2018 Created by M.TariK MaliC.
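Since the exact code of the forgotten power question is not remembered, here is a hypothetical sketch (in Python rather than the exam's C#) of the kind of logic usually tested, where any number raised to the power 0 gives 1:

```python
# Hypothetical reconstruction of the likely logic: the result starts at 1,
# so when the exponent is 0 the loop body never runs and 1 is returned.
def power(base, exp):
    result = 1
    for _ in range(exp):
        result *= base
    return result

print(power(5, 0))  # 1
print(power(2, 3))  # 8
```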
http://vustudents.ning.com/group/cs411-visual-programming/forum/topics/cs411-current-mid-term-papers-past-mid-term-papers-at-one-place?commentId=3783342%3AComment%3A6094622&groupId=3783342%3AGroup%3A4194109
CC-MAIN-2018-34
refinedweb
314
66.94
_LWP_WAIT(2)              System Calls Manual              _LWP_WAIT(2)

NAME
     _lwp_wait -- wait for light-weight process termination

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <lwp.h>

     int _lwp_wait(lwpid_t wlwp, lwpid_t *rlwp);

DESCRIPTION
     _lwp_wait() suspends execution of the calling LWP until the LWP specified by wlwp terminates. The specified LWP must not be detached. If wlwp is 0, then _lwp_wait() waits for any undetached LWP in the current process. If rlwp is not NULL, then it points to the location where the LWP ID of the exited LWP is stored.

RETURN VALUES
     Upon successful completion, _lwp_wait() returns a value of 0. Otherwise, an error code is returned to indicate the error.

ERRORS
     _lwp_wait() will fail if:

     [ESRCH]    No undetached LWP can be found in the current process corresponding to that specified by wlwp.
     [EDEADLK]  The calling LWP is the only LWP in the process.
     [EDEADLK]  The LWP ID specified by wlwp is the LWP ID of the calling LWP.
     [EINTR]    _lwp_wait() was interrupted by a caught signal, or the signal did not have the SA_RESTART flag set.

SEE ALSO
     _lwp_create(2), _lwp_exit(2)

HISTORY
     The _lwp_wait() system call first appeared in NetBSD 2.0.

NetBSD 6.1.5              January 13, 2003              NetBSD 6.1.5
http://modman.unixdev.net/?sektion=2&page=_lwp_wait&manpath=NetBSD-6.1.5
CC-MAIN-2017-30
refinedweb
204
56.45
the website. The code I came up with:

Code:
import lxml.html

html = lxml.html.parse('')
users = html.xpath('//div[@class="tweet"]/a[@class="tweet-user-link"]')
for user in users:
    print(user.text_content())

The HTML (I think):

Code:
<div class="tweet-text">
  <a class="name" href=""><span>@</span>MikeJeffJordan</a><br>
  My local grocery store has a special deal going on at the self scan aisle, buy one get like 30 free.
</div>

I am not sure how to acquire the text that the user says. Also, how would you acquire not just the first initial post, but the first few posts (without re-parsing the website 3 times for the first posts)? I have tried these XPath strings:

Code:
//div[@class="tweet-text"]
Code:
//div[@class="tweet"]/div[@class="tweet-text"]
Code:
//div[@class="tweet"]/[@class="tweet-text"]
Code:
//div[@class="tweet"]/a[@class="tweet-user-link"]
Code:
//div[@class="tweet"]

with the last one being the only way to grab the actual content, but I can only grab the username and the content, not just the content.
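One way to get just the text the user says, and the first few posts from a single parse, is to match the inner tweet-text div, collect every match in one pass, then slice the list. Here is a minimal sketch with the standard library's ElementTree (the same idea carries over to lxml.html with text_content()); the HTML below is made-up stand-in markup, since the real page URL is elided in the post:

```python
import xml.etree.ElementTree as ET

# Stand-in markup modeled on the snippet in the post (hypothetical content).
HTML = """<html><body>
  <div class="tweet">
    <a class="tweet-user-link">@MikeJeffJordan</a>
    <div class="tweet-text">My local grocery store has a special deal.</div>
  </div>
  <div class="tweet">
    <a class="tweet-user-link">@SomeoneElse</a>
    <div class="tweet-text">Second post text.</div>
  </div>
</body></html>"""

root = ET.fromstring(HTML)
# Match the inner tweet-text div to get what the user says, not the username.
texts = ["".join(div.itertext()).strip()
         for div in root.findall('.//div[@class="tweet-text"]')]
# Parse once, then slice -- no need to re-parse the page per post.
first_two = texts[:2]
print(first_two)
```

Parsing once and slicing the result list (`texts[:2]`) avoids re-parsing the website for each post.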
http://python-forum.org/viewtopic.php?f=6&t=11186&p=15284
CC-MAIN-2016-44
refinedweb
193
50.16
The Python reduce function is one of those topics you encounter more often the more you code in Python. It might sound complex, but is that really the case?

The Python reduce function is used to apply a given function to the items of an iterable, and it returns a single value. The function is applied to two items at a time, from left to right, until all the items of the iterable are processed. We will work through a few examples that use the reduce function to make sure you understand how to use it. Let's start coding!

How Does Reduce Work in Python?

The Python reduce() function is part of the functools module and takes as arguments a function and an iterable.

functools.reduce(function, iterable)

The reduce operation doesn't return multiple values; it returns a single value. The reduce function reduces an iterable to a single value. Here are the steps reduce follows to generate its result:

- It applies the function to the first two elements of the iterable and generates a result.
- The function is then applied to the result generated at step 1 and to the next item in the iterable.
- The process continues until all the items in the iterable are processed.
- The final result is returned by the reduce function.

Let's define a custom function that calculates the sum of two numbers:

def calculate_sum(x, y):
    return x + y

Then we import the reduce function from the functools module, apply the function to a list of numbers, and print the result.

from functools import reduce

def calculate_sum(x, y):
    return x + y

numbers = [1, 3, 5, 7]
result = reduce(calculate_sum, numbers)
print("The result is {}".format(result))

Note: by using from ... import we only import the reduce function from functools instead of importing the entire functools module.

When you execute this code you get the following result (I'm using Python 3):

$ python reduce.py
The result is 16

So, given a list of numbers, we get back the total sum of those numbers as the result.
To make sure it's clear how reduce behaves, below you can see how the sum is calculated:

(((1 + 3) + 5) + 7) => 16

How Can You Use Reduce With a Lambda?

In the previous section we defined a function that calculates the sum of two numbers and then passed that function to reduce. We can also replace the calculate_sum function with a lambda function.

lambda x, y: x + y

If we pass two numbers to this lambda in the Python shell we get back their sum:

>>> (lambda x, y: x + y)(1, 2)
3

And now let's pass this lambda as the first parameter of the reduce function...

from functools import reduce

numbers = [1, 3, 5, 7]
result = reduce(lambda x, y: x + y, numbers)
print("The result is {}".format(result))

The output returned by reduce is:

$ python reduce.py
The result is 16

Exactly the same output we got back when using the custom calculate_sum function.

Applying Python Reduce to an Empty List

Let's find out what result we get when we pass an empty list to the reduce function. As the first argument we will keep the lambda used in the previous section.

result = reduce(lambda x, y: x + y, [])
print("The result is {}".format(result))

[output]
Traceback (most recent call last):
  File "reduce.py", line 3, in <module>
    result = reduce(lambda x, y: x + y, [])
TypeError: reduce() of empty sequence with no initial value

We get back a TypeError exception that complains about the fact that there is no initial value. What does that mean exactly? If you look at the Python documentation for the reduce function you will see that this function also supports an optional third argument, an initializer.

functools.reduce(function, iterable[, initializer])

The initializer, if present, is placed before the items of the iterable in the calculation, and it's used as the default value in case the iterable is empty. Update the code to pass an initializer equal to 10.
result = reduce(lambda x, y: x + y, [], 10)
print("The result is {}".format(result))

[output]
The result is 10

This time the reduce function does not raise a TypeError exception. Instead it returns the value of the initializer. Before continuing, verify the output of the reduce function when the initializer is present and the list is not empty:

result = reduce(lambda x, y: x + y, [1, 2], 10)

What do you get back? Is the result what you expected?

Why Are You Getting the Python Error "reduce" is Not Defined?

If you run a program that calls the reduce() function without importing it from functools, you will get the following NameError exception:

Traceback (most recent call last):
  File "reduce.py", line 7, in <module>
    result = reduce(sum, numbers)
NameError: name 'reduce' is not defined

The fix is simple: just add an import statement at the top of your Python program as shown before:

from functools import reduce

Difference Between Map and Reduce

Another function that is often mentioned together with reduce is the map function. The main difference between map and reduce is that map() is applied to every item of an iterable one at a time, and it returns an iterator. Let's see what happens if we pass the lambda defined before to the map function.

>>> result = map(lambda x, y: x + y, [1, 2])
>>> print(list(result))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: <lambda>() missing 1 required positional argument: 'y'

The Python interpreter raises a TypeError because the map function passes only one value to the lambda function. Update the lambda function by removing y and by returning x multiplied by 2:

>>> result = map(lambda x: 2*x, [1, 2])
>>> print(type(result))
<class 'map'>
>>> print(list(result))
[2, 4]

This time the map function works as expected. Notice that we used the list() function to convert the map object returned by the map function into a list.
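As a quick check of the initializer exercise above: the initializer is placed before the items of the list, so the calculation becomes ((10 + 1) + 2):

```python
from functools import reduce

# The initializer 10 is used as the starting value, then the list items
# [1, 2] are folded in from left to right: (10 + 1) + 2.
result = reduce(lambda x, y: x + y, [1, 2], 10)
print("The result is {}".format(result))  # The result is 13
```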
Reduce vs Python For Loop

Let's see how we can use a Python for loop to write a program that returns the same result as the reduce function. We set the value of result to 0 and then add each item in the list to it.

numbers = [1, 3, 5, 7]
result = 0

for number in numbers:
    result += number

print("The result is {}".format(result))

As you can see, we need a few lines of code to do what reduce does in a single line.

Reduce vs Python List Comprehension

There is a conceptual difference between the reduce function and a list comprehension. Reduce starts from a Python list and returns a single value, while a list comprehension applied to a list returns another list. But there are some scenarios in which you can use a list comprehension and the reduce function in a similar way, for example to flatten a list of lists. Given the following list of lists:

>>> numbers = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

I want to use reduce to flatten it.

>>> from functools import reduce
>>> result = reduce(lambda x, y: x + y, numbers)
>>> print(result)
[1, 2, 3, 4, 5, 6, 7, 8, 9]

As you can see, we have converted the list of lists into a simple list that contains all the numbers. Now let's write the code to do this with a list comprehension.

>>> [item for number_group in numbers for item in number_group]
[1, 2, 3, 4, 5, 6, 7, 8, 9]

The result is the same. Which approach do you prefer?

Reduce vs Itertools.accumulate

The itertools module implements a function called accumulate. How does it compare to the reduce function? Firstly, its syntax is different:

itertools.accumulate(iterable[, func, *, initial=None])

It accepts an iterable as its first argument and an optional function as its second argument.
Let's apply it to our original list of numbers to see what happens...

>>> from itertools import accumulate
>>> numbers = [1, 3, 5, 7]
>>> print(type(accumulate(numbers)))
<class 'itertools.accumulate'>
>>> print(list(accumulate(numbers)))
[1, 4, 9, 16]

The accumulate function creates an iterator that returns accumulated sums. So the behaviour is different from the reduce function, which returns just a single value.

Conclusion

You have reached the end of this tutorial, and by now you have all the knowledge you need to use the Python reduce function. You know how to use it by passing a custom function or a lambda function to it. We have also looked at how reduce compares to map, and how you can write code that implements logic similar to reduce using a for loop or a list comprehension (only in some cases).

I'm a Tech Lead, Software Engineer and Programming Coach. I want to help you in your journey to become a Super Developer!
https://codefather.tech/blog/python-reduce/
CC-MAIN-2021-31
refinedweb
1,476
60.04
NAME
     ossl_store-file - The store 'file' scheme loader

SYNOPSIS
     #include <openssl/store.h>

DESCRIPTION
     Support for the 'file' scheme is built into libcrypto. Since files come in all kinds of formats and content types, the 'file' scheme has its own layer of functionality called "file handlers", which are used to try to decode diverse types of file contents.

     In case a file is formatted as PEM, each called file handler receives the PEM name (everything following any '-----BEGIN ') as well as possible PEM headers, together with the decoded PEM body. Since PEM formatted files can contain more than one object, the file handlers are called upon for each such object.

     If the file isn't determined to be formatted as PEM, the content is loaded in raw form in its entirety and passed to the available file handlers as is, with no PEM name or headers. Each file handler is expected to handle PEM and non-PEM content as appropriate. Some may refuse non-PEM content for the sake of determinism (for example, there are keys out in the wild that are represented as an ASN.1 OCTET STRING. In raw form, it's not easily possible to distinguish those from any other data coming as an ASN.1 OCTET STRING, so such keys would naturally be accepted as PEM files only).

NOTES
     When needed, the 'file' scheme loader will require a pass phrase by using the UI_METHOD that was passed via OSSL_STORE_open(). This pass phrase is expected to be UTF-8 encoded; anything else will give an undefined result. The files made accessible through this loader are expected to be standard compliant with regards to pass phrase encoding. Files that aren't should be re-generated with a correctly encoded pass phrase. See passphrase-encoding(7) for more information.

SEE ALSO
     ossl_store(7), passphrase-encoding(7)

Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at.
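The PEM splitting described above (a file handler receives the PEM name following '-----BEGIN ', any headers, and the decoded body) can be illustrated with a short sketch. This is not OpenSSL's code; the PEM block and the splitting logic below are illustrative only:

```python
import base64
import re

# Made-up PEM block: a name, one header line, a blank line, then a
# base64-encoded body -- the pieces a 'file' handler would receive.
PEM = """-----BEGIN EXAMPLE-----
Proc-Type: 4,ENCRYPTED

aGVsbG8gd29ybGQ=
-----END EXAMPLE-----"""

# The PEM name is everything after '-----BEGIN ' (per the text above).
m = re.match(
    r"-----BEGIN (?P<name>[^-]+)-----\n(?P<rest>.*)\n-----END (?P=name)-----",
    PEM,
    re.S,
)
# Headers, when present, are separated from the body by a blank line.
head, sep, body = m.group("rest").partition("\n\n")
headers, b64 = (head, body) if sep else ("", head)
print(m.group("name"))        # EXAMPLE
print(headers)                # Proc-Type: 4,ENCRYPTED
print(base64.b64decode(b64))  # b'hello world'
```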
https://www.openssl.org/docs/man1.1.1/man7/ossl_store-file.html
CC-MAIN-2022-33
refinedweb
341
62.58
Invalid groups

Hi, I'm very new to RoboFont so please forgive me if this question comes across as silly. I'm exporting some fonts from FontLab to UFO and I keep getting the following warning upon opening in RoboFont (screen grab attached). I can only see the ' marks in the output window; the groups and glyph names don't seem to contain them, from what I can see. I can't find where to remove the ' characters. Have other people had this issue? How did they resolve it? Best, —Bobby

- marksimonson last edited by gferreira

An easy way I've found is to go to the Finder, right-click or control-click on the UFO and select "Show Package Contents". You will see a file called groups.plist. Open that in a plain text editor (like BBEdit or TextMate) and then search for "'" (the apostrophe) and replace it with "" (nothing). Save the file and reopen the UFO in RoboFont. If the UFO was already open in RoboFont, it will put up a message about the UFO having been changed outside of RoboFont, which you can confirm. (Edit: Those should be "dumb" quotes above, not smart quotes. The forum software seems to be "fixing" them automatically.)

or easier, make a script that removes all ' characters:

font = CurrentFont()

for groupName, items in font.groups.items():
    newItems = []
    for glyphName in items:
        if "'" in glyphName:
            glyphName = glyphName.replace("'", "")
        newItems.append(glyphName)
    if newItems != items:
        ## something changed
        font.groups[groupName] = newItems

and connect this script to an event observer on fontDidOpen and it happens automatically. see (RoboFont will never do this automatically because one of the main principles is that users should always be aware of what is happening with their font data)

Thank you both for your replies. I tried both methods and they worked perfectly. I couldn't quite get the event observer to work properly as I'm pretty terrible at scripting. Thanks again.
—B

that is easy too :)

from mojo.events import addObserver

class RenameGroupsObserver(object):

    def __init__(self):
        addObserver(self, "checkGroupNames", "fontDidOpen")

    def checkGroupNames(self, info):
        print("checking group names for single quotes")
        font = info["font"]
        for groupName, items in font.groups.items():
            newItems = []
            for glyphName in items:
                if "'" in glyphName:
                    glyphName = glyphName.replace("'", "")
                newItems.append(glyphName)
            if newItems != items:
                ## something changed
                font.groups[groupName] = newItems
        print("done checking group names")

RenameGroupsObserver()

save this script in a file and add it in the preferences as a startup script, and you never have to worry about single quotes in groups again. see

Your font data will change, and importing the UFO back into FontLab will change the groups behavior, of course. Good luck
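For completeness, the manual groups.plist edit described earlier can also be automated outside of RoboFont with the standard library's plistlib. The clean_groups helper below is hypothetical (not a RoboFont API), and the demo group data is made up:

```python
import os
import plistlib
import tempfile

# Hypothetical standalone helper: strip stray apostrophes from glyph names
# in a UFO's groups.plist without opening RoboFont.
def clean_groups(plist_path):
    with open(plist_path, "rb") as f:
        groups = plistlib.load(f)
    cleaned = {
        name: [glyph.replace("'", "") for glyph in items]
        for name, items in groups.items()
    }
    with open(plist_path, "wb") as f:
        plistlib.dump(cleaned, f)
    return cleaned

# Demo round-trip on a temporary groups.plist with made-up data.
demo = {"public.kern1.A": ["A", "A'alt", "Aacute"]}
path = os.path.join(tempfile.mkdtemp(), "groups.plist")
with open(path, "wb") as f:
    plistlib.dump(demo, f)
print(clean_groups(path))  # {'public.kern1.A': ['A', 'Aalt', 'Aacute']}
```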
https://forum.robofont.com/topic/211/invalid-groups/3?lang=en-US
CC-MAIN-2021-10
refinedweb
443
73.47
Have you ever wondered how a librarian maintains the list of books available in the library? Today we shall go into detail about this and write a program to maintain a book shop. First of all, we need to know the inputs and methods to be coded in the program.

1. Inputs: The title and author of the book are taken from the user who requires the book.

2. Variables to be stored: Author, title, publisher, cost, and the number of books available. These are all attributes of the book which are stored.

3. Methods: In this program we need three methods. One, an input method for the list of all the books present in the library. Two, a search method to check whether the required book is available or not. Three, a display method to show the search output.

Explanation: We shall now go into detail about these methods.

1. In the first method we enter the details of the books available in the library, like the title of the book, the author, the publisher, the price, and the number of copies present in the library.

2. The second method asks the user to enter the title and author of the book they require.

3. The third method is used to display the output of the search result.
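The three methods described above can be sketched compactly before the full program. This Python outline uses hypothetical names and only mirrors the logic just described (store the book attributes, search by title, report the result):

```python
# Compact sketch of the book shop logic (hypothetical names).
class BookShop:
    def __init__(self):
        # Each book stores the attributes listed above.
        self.books = []

    def set_data(self, title, author, publisher, cost, count):
        self.books.append({"title": title, "author": author,
                           "publisher": publisher, "cost": cost,
                           "count": count})

    def search(self, title):
        # Return the matching book's details, or None if not in the store.
        for book in self.books:
            if book["title"] == title:
                return book
        return None

shop = BookShop()
shop.set_data("Java Basics", "A. Author", "Some Publisher", 250, 3)
print(shop.search("Java Basics"))
print(shop.search("Missing"))  # None
```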
We shall now write the code from the above explanation:

import java.util.*;

class bookshopmethods
{
    Scanner in = new Scanner(System.in);
    int n = 2;
    String[] title = new String[n];
    String[] author = new String[n];
    String[] publisher = new String[n];
    int[] cost = new int[n];
    int[] count = new int[n];

    void setdata()
    {
        for (int i = 0; i < n; i++)
        {
            System.out.println("Enter the title of the " + (i+1) + " book:");
            title[i] = in.nextLine();
            System.out.println("Enter the author of the " + (i+1) + " book:");
            author[i] = in.nextLine();
            System.out.println("Enter the publisher of the " + (i+1) + " book:");
            publisher[i] = in.nextLine();
            System.out.println("Enter the cost of the " + (i+1) + " book:");
            cost[i] = in.nextInt();
            System.out.println("Enter the number of books present in the Book shop:");
            count[i] = in.nextInt();
            in.nextLine();
        } // end of for loop
    } // end of setdata function

    void search()
    {
        int i;
        System.out.println("***********Books present in the Book shop are*************");
        for (i = 0; i < n; i++)
        {
            System.out.println((i+1) + ". Title of the book:" + title[i]);
        }
        System.out.println("Enter the title of the book needed:");
        String titkey;
        titkey = in.nextLine();
        int flag = 0;
        for (i = 0; i < n; i++)
        {
            if (titkey.equals(title[i]))
            {
                flag++;
                break;
            } // end of if condition
        } // end of for loop
        if (flag != 0)
        {
            System.out.println("******Details of the book you needed is*****");
            System.out.println("Title of the book:" + title[i]);
            System.out.println("Author of the book:" + author[i]);
            System.out.println("Publisher of the book:" + publisher[i]);
            System.out.println("Cost of the book:" + cost[i]);
            System.out.println("Number of books present in the Book shop:" + count[i]);
        } // end of if condition
        else
            System.out.println("Book not present in the store");
    } // end of search function
} // end of class

class bookshop
{
    public static void main(String arg[])
    {
        int ch;
        bookshopmethods ob = new bookshopmethods();
        Scanner in = new Scanner(System.in);
        while (true)
        {
            System.out.println("********************MENU IS****************");
            System.out.println("\t\t1.To enter the data\n\t\t2.To search a book\n\t\t3.Exit");
            System.out.println("Enter your choice from the above menu:");
            ch = in.nextInt();
            switch (ch)
            {
                case 1:
                    ob.setdata();
                    break;
                case 2:
                    ob.search();
                    break;
                case 3:
                    System.exit(0);
            } // end of switch case
        } // end of while loop
    } // end of main function
} // end of class

Output: You can download the source code: download

Thank you very much sir, this is very useful
https://letusprogram.com/2013/09/11/java-program-to-maintain-a-book-shop/
CC-MAIN-2021-17
refinedweb
630
60.72
WPF Application for Importing ESRI Shapefiles Feb 26, 2007. A standalone application for reading ESRI shapefiles and displaying them on a WPF Canvas... A glance at .NET Framework 3.0 Jul 17, 2007. This article glances at what is new in .NET Framework 3.0 and how this version differs from all previous versions of .NET. Data Binding in WPF Windows Application Aug 23, 2007. In this tutorial I will discuss how to bind data in a WPF Windows application. WPF ListBox Aug 31, 2007. This tutorial shows you how to work with the ListBox control available in WPF. WPF Database Communication: Adding a New Record to the Database Sep 06, 2007. WPF database communication, adding a new record to the database (XAML, MS Access): I have a XAML form with FirstName, LastName, Email & Contact fields; on clicking the submit button it first checks whether the same email address exists in the table, otherwise it adds a new record. I have made use of OleDbTransaction to perform the add functionality. Expander Control in WPF Oct 02, 2007. This article and source code show how to use the Expander control in WPF. Working with 2D Graphics in WPF Oct 03, 2007. This article provides an introduction to 2D drawing in WPF. Data Binding in WPF ListView Oct 05, 2007. This article shows how to extract data from a database and show it in a WPF ListView control. Implementing a Simple Silverlight Control Oct 30, 2007. This article shows how to implement a simple button control using Silverlight 1.1 and C#. Chapter 2: Programming WPF Applications Jan 09, 2008.
This article describes the framework that WPF offers and also explains the differences between browser-based and Windows-based installed applications. Your first animations using XAML and Silverlight - Double animation: Part II Jul 30, 2008. In the previous article "Your first animations using xaml and silverlight - Color animation: Part I", we exposed a technique for dealing with color animation. In this article, I will do the same thing but with a different animation: the DoubleAnimation class this time. Brushes in WPF Jul 30, 2008. This article discusses the types of brushes found in WPF and how to use them in your applications. Drawing Shapes in WPF Jul 31, 2008. This article is an introduction to graphics programming in XAML and WPF. In this article, I discuss various graphics objects including lines, rectangles, ellipses, and paths and how to draw them using XAML and. Working with WPF Table using XAML - Part I Aug 12, 2008. In this article, I will use a WPF table in XAML format. This one defines a flexible grid area that contains rows and columns. In contrast to the Grid object, which is defined in System.Windows.Controls, the Table object is defined in the System.Windows.Documents namespace. How to Define and Configure a Grid Control Within a WPF Application Using XAML: Part I Aug 13, 2008. In this article, I will try to give a presentation of the Grid object, which is directly derived from the Panel abstract class; we can say that it is a flexible area that contains rows and columns, and it plays the role of a container in a given WPF window. ControlTemplate in WPF Aug 20, 2008. In this article I will show you how to apply a ControlTemplate to controls in WPF (Windows Presentation Foundation). UniformGrid in WPF Aug 20, 2008. This article shows how to use UniformGrid in WPF (Windows Presentation Foundation). WPF TextBox Aug 21, 2008. This article shows how to use and work with the TextBox control available in WPF and XAML.
Working With a ScrollViewer Control in a WPF Application Aug 21, 2008. The ScrollViewer is an object that represents a scrollable area containing other visible controls; it can be found within System.Windows.Controls. In contrast to a ScrollBar object, the ScrollViewer is a new WPF feature. So let's discover its principal characteristics through this article. Working with WPF Table using C# - Part II Aug 26, 2008. In the previous article, Working with WPF Table using XAML - Part I, we discovered how to perform this task using XAML. Now, we will see how to do the same using C#. ListBox in WPF Aug 26, 2008. This tutorial shows you how to create and use a ListBox control in WPF and XAML. The tutorial also covers styling and formatting, adding images, checkboxes, and data binding in a ListBox control. RadioButton in WPF Aug 27, 2008. This tutorial shows you how to create and use a RadioButton control available in Windows Presentation Foundation (WPF) and XAML. Working with Silverlight Toolkit: Part II Nov 25, 2008. The Silverlight Toolkit provides a variety of rich controls to supplement the existing Silverlight controls. This article describes how to add the Toolkit controls to your Toolbox and make use of the AutoCompleteBox control. TextBlock in WPF Jan 06, 2009. In this article, I discussed how we can create and format text in a TextBlock control available in WPF and C#. ListBox in WPF Jan 08, 2009. This article demonstrates how to create and use a ListBox control in WPF. Agile Development: Part II Jan 09, 2009. This article is a quick FAQ on Agile. By reading it you will understand the fundamentals of Agile and different ways of implementing Agile. WPF ComboBox at a Glance - Overview Jan 13, 2009. This is a simple article showing data in a ComboBox using WPF; you can use multiple columns in a ComboBox. Visual Studio Add-ins Mar 03, 2009. This article explains Visual Studio 2008 Add-ins.
Introduction to Interfacing Win Forms with VS Add-ins Mar 04, 2009. This article explains the integration of Windows Forms with Visual Studio Add-ins. Introduction to "Acropolis" Mar 12, 2009. This article explains Microsoft's code-named "Acropolis" for creating Windows client applications. Windows Presentation Foundation (WPF) Beginners FAQ Mar 13, 2009. Implementing Inheritance (Base-Class/Derived-Class) Model in WPF Mar 17, 2009. This article shows how to implement an inheritance model in WPF and a practical scenario to get a feel of it. WPF Practical Solutions Apr 06, 2009. This document describes step-by-step solutions to some of the general problems developers face in WPF applications.
http://www.c-sharpcorner.com/tags/AutoCompleteBox-in-WPF
CC-MAIN-2016-30
refinedweb
1,124
66.94
Rounding Figure of Invoice Amount Total

Hi, I made invoice amount total rounding code for invoices:

Untaxed Amount: 2240.80
Tax Amount: 148.45
Amount Total: 2389.00 instead of 2389.25

If the fractional part of the amount is greater than 0.50, it rounds up to the next value; if it is less than 0.50, it subtracts that part. I have created this code. Just go to Account > account_invoice.py, find def _compute_amount(self): and put in code like the following:

def _compute_amount(self):
    self.amount_untaxed = sum(line.price_subtotal for line in self.invoice_line)
    self.amount_tax = sum(line.amount for line in self.tax_line)
    self.amount_total = self.amount_untaxed + self.amount_tax
    if int(self.amount_total) > 0:
        amounttotaldiff = self.amount_total - int(self.amount_total)
        c = 1 - amounttotaldiff
        if amounttotaldiff < 0.50:
            d = self.amount_total - amounttotaldiff
        if amounttotaldiff > 0.50:
            d = self.amount_total + c
        self.amount_total = d

In your case, when you validate the invoice, are you getting the new amount or the old one? I made some changes to amount_total, and when I validate I am getting the old amount; also, when I need to pay that invoice, the credit and debit come out wrong.
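The rounding rule described here can be sketched independently of Odoo's API. This is a minimal stand-alone version; treating a fraction of exactly 0.50 as rounding up is an assumption, since the original code leaves that case unhandled:

```python
# Stand-alone sketch of the half-up rounding rule for the invoice total.
# Assumption: a fractional part of exactly 0.50 rounds up (the original
# code does not cover that case).
def round_total(amount_total):
    whole = int(amount_total)
    diff = amount_total - whole
    if diff < 0.50:
        return float(whole)      # drop the fractional part
    return float(whole + 1)      # round up to the next whole amount

print(round_total(2389.25))  # 2389.0 (matches the example above)
print(round_total(2389.75))  # 2390.0
```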
https://www.odoo.com/forum/help-1/question/rounding-figure-of-invoice-amount-total-90644
CC-MAIN-2017-04
refinedweb
224
55.2
Tags give the ability to mark specific points in history as being important - v2.0.1 - !108 [bdt new] added bob.extension to templates used for creating new bob packages: This merge request will close #38. - !109 Remove conda package caching as we now have a mirror - !110 Do not spare downloaded packages from getting cleaned - !112 Bumped conda-build: This MR solves the remaining issue for using conda-build 3.18 (and ripgrep in particular) in our builds. Closes #32 - !113 'info/LICENSE' not in info/files: Trying to solve this issue. bob.nightlies#53 - !115 Consolidating the hacks implemented to solve conda-build issues - !120 This MR addresses the following changes: 1. Update miniconda installer to 4.7.12; 2. Reset installer hashes to avoid Mac re-downloads; 3. Black-ify bootstrap source code - !121 Bump bob and beat-devel versions: Implements removal of the packaging pin. - v2.0.0 - !44 Add option to --use-local in build/test: Provide an option to build/test to use the local channel for searching packages; use local packages on nightlies and base builds. Closes #26 - !60 added command to generate documentation locally: This merge request added a new group of commands ("local") to the "bdt" command which allows running the functionalities implemented for CI locally. For the moment this group only has the "docs" command to generate documents of bob/beat packages locally. Partially fixes #19 - !65 Update matplotlibrc -- turn off latex usage - !68 Removed use-local option: It seems "use-local" is the norm in the new conda-build. This MR removes this option from our builds. - !70 [docs-ci] installing minimal fonts on the linux docker image: This merge request installs minimal fonts (dejavu-sans-fonts) on the linux docker image so documentation, specifically graphviz, renders properly.
- !71 update openfst version - !73 New python-gitlab v1.10: Updates python-gitlab to the latest version (v1.10.0) - !74 Enable py37 base-builds (bob.conda): This MR does the first step in enabling python 3.7 builds for the core infrastructure (base packages, i.e. those in bob.conda will now be built in both py36 and py37 variants). - !75 [bootstrap] Pinning miniconda installer: This MR pins the miniconda installer to circumvent issues with the latest installer (see) This should be reverted back when #32 is addressed. - !76 Update sqlite for py37 compatibility: This improves build compatibility for bob.conda on python-3.7.x - !77 [cython] Update to 0.29.12: Update required to recompile menpo/kaldi for python-3.7.x. Closes #36 - !78 Implements support for specific build-config and recipe-append files: This MR provides a solution to bob.conda#71 by allowing developers to provide branch-specific conda_build_config.yaml and recipe_append.yaml files. All CI scripts and functions will now search for branch-specific files conda_build_config-my-branch.yaml and recipe_append-my-branch.yaml, first in each recipe directory, then in the current directory. If any of those are found, they are used instead of the defaults to build the said package. If they are not found, search for files with the names conda_build_config.yaml and recipe_append.yaml in both the recipe dirs, then in the current directory. If any of those are found, they are used. Otherwise, if no specific build variants or recipe-append files are found, use the defaults shipped with this package.
Closes bob.conda#71
- !79 Update bob/beat-devel and sync with available py36/37 builds: This MR updates conda_build_config.yaml to keep both py36 and the new py37 base environments in sync: * Update versions of bob/beat-devel to 2019.07.31 * Update the version of madmom to 0.16.1 * Add mido-1.2.9 (madmom dependence)
- !80 Update bob devel / pin conda to 4.6: This MR addresses one issue: It brings back menpofit to the list of supported software until second notice. Otherwise, bob.ip.facelandmarks does not build
- !83 Add a bdt local build command
- !84 Fix NameErrors in ci base-build and ci test commands: You can see the code failing in
- !86 Add black dependence: This MR adds black (code formatter) from the defaults channel as a runtime dependence.
- !88 [scripts/ci] Fix nightly builds: This MR fixes a copy-n-paste typo on the nightly builds.
- !89 Docformatter: This MR adds a new dependence to the software stack of bdt, called docformatter. It complements black in re-structuring source code (mostly for docstrings, which get badly formatted with that tool). It also cleans up unused imports in scripts/local and applies DRY for loading order files. It also applies the results of passing black followed by docformatter on all source code.
- !90 [conda] Drop free channel from the defaults list: Fixes #33
- !91 [conda-build-config] Bring back zc_recipe_egg (bob.buildout dep): This MR fixes the nightly builds by bringing back the variable zc_recipe_egg in conda_build_config.yaml.
- !94 Implement easy WebDAV access: This MR implements a few commands to ease the management of WebDAV resources on our (internal) servers. bdt dav --help should be useful.
- !97 Check expected miniconda.sh hash, redownloads if necessary: This MR introduces a hash check on the expected version of the miniconda installer. It re-downloads the installer if the hash does not match or if it is not cached.
This solves an issue in which more recent versions of conda (4.7.x) take very long to bootstrap bob.devtools.
- !99 WebDAV support improvements: This MR adds support for the clean-betas script, which replaces the ad-hoc solution currently installed on our server.
- !100 More cleanup improvements: This MR:
  - Hooks the execution of an automatic cleanup action to the successful execution of the build step in nightlies
  - Provides a manual action to trigger the cleanup
  - Makes the clean-betas action a bit more selective on which types of packages it should be cleaning up (e.g. if running Bob nightlies, do not clean up BEAT packages, and vice-versa)
- !101 Changed version of torch and torchvision
- !103 Enable py37: This MR enables py37 builds everywhere. It also improves the syntax of CI files to use the new extends keyword instead of YAML templates.
- !104 Remove oset, add pandas and tqdm: * oset: The package is not used anymore in BEAT. * pandas, tqdm: New requirement for bob.ip.binseg. Relates to bob.conda!431
- !105 [bootstrap] Use local defaults channel mirror
- !106 Add ncurses for beat/beat.cmdline: ncurses is needed for the new monitoring command of beat/beat.cmdline. See beat/beat.cmdline!72
- v1.0.5
- v1.0.4
- v1.0.3
- !53 Update to bob-devel 2019.04.30
- !54 Update conda_build_config.yaml: Add h5py pinning
- !55 Adds menpo, menpofit and cyvlfeat
- !56 Updates jsonschema to 3.0.1: This MR bumps the version of jsonschema to 3.0.1 on request from beat/beat.core#80. By consequence, it also updates beat-devel to a matching version.
- v1.0.2
- !52 Allow for custom conda_build_config.yaml and recipe_append.yaml files: This MR introduces a new feature: if a file named conda_build_config.yaml or recipe_append.yaml exists in the recipe directory, alongside meta.yaml, then they are used to build the package instead of the default files available inside this package.
- v1.0.1
- !35 Issue 21: This commit fixes an issue with the upload of documentation for new packages. It ensures the package directory is created on the DAV server before an attempt to upload its documentation.
- !36 Fix conda/meta.yaml template resolution {{ group }} -> (( group )): This MR fixes an issue when creating new projects. It fixes a place where the template for group was not being taken into consideration. Closes #22.
- !37 Set macos maximum open file limit to 4096: This MR closes issue #23 and adjusts the docs so that the maximum number of open files on macOS is correctly set from the installation.
- !38 Build with multiple hdf5 versions at the same time: Update conda_build_config.yaml
- !39 Remove hdf5 1.10.1: Update conda_build_config.yaml. We don't need to build against this anymore.
- !40 Update dependencies: Depends on bob.conda!409
- !41 Fix numpy and mkl version incompatibility: Update conda_build_config.yaml. This should fix issues in: bob.conda!411
- !42 Update numpy to earliest version that supports mkl 2019.1: Still trying to fix: bob.conda!411
- !43 [conda-build-config] update freetype
- !46 Force re-indexing of locally built packages if "local" channel is present: Apparently, indexes are not refreshed during a tight loop of conda_build.api.build() calls. This MR mitigates it by re-indexing conda-bld at every iteration, if need be.
- !45 Requires pyyaml>=5.1, remove deprecation warnings: This MR removes the warning on yaml.load() caused by a deprecation.
- !47 Avoid deploy deletes, so tight loop package builds work: This MR avoids deploy deletes so that nightlies and bob.conda packages can re-use locally built packages in a tight build loop.
- !44 Add option to --use-local in build/test: Provide an option to build/test to use the local channel for searching packages; use local packages on nightlies and base builds. Closes #26
- !48 [conda-build-config] Update bob-devel to 2019.04.18: Update conda_build_config.yaml
- !49 Add PYTHONUNBUFFERED=1 to mitigate #2: This MR sets PYTHONUNBUFFERED=1 everywhere in the CI yaml files. It closes #2. Previous attempts did not work as the interpreter was already started when this option was set.
- !50 Set bob/bob.devtools#!final for default channels (closes #25): This MR ensures that the defaults_channel list is not changed by conda while being parsed.
- !51 Avoid stable/master doc deployment from non-master via flag: This MR introduces an option to avoid documentation deployments on the sub-directories master and stable, if the release is not happening from the master branch. It is useful for old patch releases for which one doesn't want to overwrite previous deployments on those directories.
- v1.0.0
- This is a major release as it marks the official deployment of bdt for managing Bob/BEAT builds in all our packages.
Specific changes to the previous release follow:
- !18 Refines the commitfile command to introduce:
  - Better log messages, more pertinent and with more information about what it is really doing
  - The ability to commit directly to master (which is the default behaviour now)
  - In merge-request creation mode, if [ci skip] or [skip ci] is detected on the commit message, and auto-merge is set, then merge immediately
- !20 Add support for packages required to run linux builds on docker-build hosts
- !21 This MR introduces three new features:
  - A new command bdt test can run test-only conda-builds using tarballs created during a build phase
  - An equivalent new command bdt ci test can be used to run test-only builds on the CI
  - A template inside single-package.yaml can be used among packages to run specific test-only builds of themselves
- !19 [conda-build-config] Update bob/beat-devel versions: Updates bob and beat-devel packages to version 2019.03.07.
- !23 New doc strategy: Implements and proposes (for new templates) a documentation source installation strategy that makes conda packages testable without the sources (improves previous solution for #5).
- !24 Allows package namespace to be set and fine-tune conda/documentation urls: Improves the interface of build/test/create to accept a package namespace (group) in order to better tune documentation and conda channel URLs. This fixes a problem currently happening in BEAT package builds in which built documentation is not properly linked. It also improves server URL setup by simplifying it.
- !26 Introduces the following features:
  - Consolidate all gitlab-related commands under the command "gitlab"
  - Add a new command "gitlab runners" to manage runners in projects (enable/disable)
  - Add a new command "gitlab jobs" to list jobs in shared runners
- !27 Add missing parameter to create script (closes #15): Closes #15.
- !28 Nightlies: Centralises nightly builds using bdt infrastructure.
- !29 Add notes about linux CI setup
- !25 Do not overwrite conda-build root dir if already configured: The condarc file can contain the root-dir entry which states where conda-build should run and store its artifacts. Don't set croot if that setting is found.
- !30 Do not set changeps1 to false as that affects deactivation of PS1 on bash: This MR includes the following fixes:
  - Do not set changeps1 to false as that affects deactivation of PS1 in hybrid environments (closes #16)
  - Remove unused channels from default environment
  - Use the name condarc for the rc-file created at the root of each new environment with create (more visible)
- !31 Add new deployment module and applies DRY to scripts.ci (closes #18): This MR only partially affects issue #18 by encapsulating deployment functionality into two new functions.
- !32 Update click version to 7.x, click-plugins to 1.0.4
- !33 No config set when using conda_build.api.build() directly from metadata: This MR implements a change in the use of conda_build.api.build() in the specific cases where we are building from the resolved metadata. It reverts using metadata to run the build as that has proved not to be reliable. See:
- v0.1.6
- !13 Added conda-verify to avoid a recurring warning during package builds.
- !14 Implemented resources necessary to build bob/bob.conda packages.
- !15 Allows the build of base dependencies separating python-independent recipes from python-dependent ones, and potentially using multiple python versions.
- !16 Introduced a new command called commitfile that is capable of changing single files on remote repositories, following a formal merge-request procedure. As an option, it can set auto-merge to true (in case the build succeeds).
- !17 Introduced the use of terminal-based coloring for better differentiating info, warning and error messages.
- v0.1.5
- v0.1.4
- !10 Temporarily pin conda-build on 3.16: See bob.conda!398
- !11 Independent builds and python support: This MR introduces various features: 1.
This package is now built completely independently of the bob software stack (and bob.conda). It ships its own version of dependencies and can provide better support for python versions that are not necessarily supported (yet) by Bob or BEAT; 2. As a bonus, you can now install this package alongside your conda base environment; 3. An optimisation during the bootstrap process is now possible since we don't need to install a new environment for bdt using python-3.6, it can just be installed in the base environment itself; 4. This MR also makes git-clean a bit less verbose, on request from @amohammadi; 5. Finally, it fixes #13, as reported by @heusch
- v0.1.3
- !2 [conda-build-config] Update scikit-image version: This MR updates the file conda_build_config.yaml to the latest version available from bob/bob.admin
- !4 Implements new project creation, closes #3 and #6: This MR introduces the "new" subcommand and implements fixes for bugs #3 and #6.
- !3 [setup] Pin click>=7 and fixes #9: This MR addresses an issue reported by @jdiefenbaugh concerning the build command and click.
- !5 [bootstrap] Remove usage of conda clean --lock, which does nothing with conda 4.6 anyway: This command is now deprecated in conda and does nothing effectively.
- !7 [bootstrap] Always use only public channels to bootstrap (closes #11): This MR addresses an issue in which, for private packages, the CI will tend to install bdt from the private beta channel instead of the latest version from the public beta channel, which should be the case.
- !6 [config] Pin click at different location: Actually conda_build_config.yaml has nothing to do with the dependencies of bdt. So, pin click>=7 only for this package.
- !9 Add options to allow testing local builds: This will make local building (for testing purposes) work correctly.
- !8 This MR suggests a configuration change for macOS-based builders (shared) to avoid cross-talking between miniconda installations. Closes #12.
- v0.1.2
- Fixes stable package deployment on conda/documentation channels
- v0.1.1
- Fixes PyPI deployment
- v0.1.0
- Initial stable release
https://gitlab.idiap.ch/bob/bob.devtools/-/tags
Description: To reproduce the issue, please run the e2e test "RubyUDFs_1" in MR mode from the tarball (not from installed Pig - please see why below). Either pseudo-distributed-mode or full-mode Hadoop can be used.

ant -Dhadoopversion=23 -Dharness.old.pig=`pwd` -Dharness.cluster.conf=/etc/hadoop/conf/ -Dharness.cluster.bin=/usr/lib/hadoop/bin/hadoop test-e2e -Dtests.to.run="-t RubyUDFs_1"

The test fails with the following error:

java.lang.IllegalStateException: Could not initialize interpreter (from file system or classpath) with /home/cheolsoo/pig-0.10/test/e2e/pig/testdist/libexec/ruby/scriptingudfs.rb

Looking at the job jar generated by Pig, "scriptingudfs.rb" can be found as follows:

[cheolsoo@c1405 pig-cheolsoo]$ jar tvf bad.jar | grep scriptingudfs.rb
2491 Fri Jun 08 15:52:08 PDT 2012 /home/cheolsoo/pig-0.10/test/e2e/pig/testdist/libexec/ruby/scriptingudfs.rb

Looking at the getScriptAsStream() method in ScriptEngine.java, "scriptingudfs.rb" is supposed to be read from the job jar, but it is not. The reason is that getResourceAsStream("/x") looks for "x" (without the leading "/"), not "/x". Since "scriptingudfs.rb" is stored with its absolute path, it ends up not being found by getResourceAsStream(scriptPath).

File file = new File(scriptPath);
if (file.exists()) {
    try {
        is = new FileInputStream(file);
    } catch (FileNotFoundException e) {
        throw new IllegalStateException("could not find existing file " + scriptPath, e);
    }
} else {
    if (file.isAbsolute()) {
        is = ScriptEngine.class.getResourceAsStream(scriptPath);
    } else {
        is = ScriptEngine.class.getResourceAsStream("/" + scriptPath);
    }
}

In fact, the test passes if you run in local mode or from installed Pig. The reason is that "scriptingudfs.rb" is found in the local file system (e.g. /usr/share/pig/test/e2e/pig/udfs/ruby/scriptingudfs.rb). The fix seems straightforward.
Attached is the patch that removes the leading "/" when registering UDF scripts so that they are stored without the leading "/" in the job jar as follows:

[cheolsoo@c1405 pig-cheolsoo]$ jar tvf good.jar | grep scriptingudfs.rb
2491 Fri Jun 08 15:52:08 PDT 2012 home/cheolsoo/pig-0.10/test/e2e/pig/testdist/libexec/ruby/scriptingudfs.rb

Thanks!

Activity

I updated the patch to handle not only a leading "/" but also "./". In fact, it is not necessary to worry about "./" since FileLocalizer.fetchFile() already converts relative paths to absolute paths; nevertheless, it seems like a good idea to make this method more robust anyway. In addition, I replaced substring(1) with replaceFirst("^\\./|^/", "") because the latter seems more intuitive.

+1. Tested this patch with relative path and absolute path for 20.205 and 23. Works fine. Daniel, can you include this in 0.10 also? Without this, scripting udfs do not work in 23 for both relative and absolute paths. The e2e tests currently marked ignored for MAPREDUCE-3700 will also have to be enabled again.

Looks good. I also attach a java code to demonstrate the problem. The patch fixes the issue in Ruby. The issue for Python relative paths is still there.

Hi Daniel, thanks for submitting my patch! I am wondering why you think that the issue with relative paths for Python still exists. In my YARN cluster, the Scripting_* tests (excluded due to MAPREDUCE-3700) all pass. (Technically, I am using Hadoop-2.0.0, but that shouldn't make a difference.) I can also manually verify that it works in the Grunt shell. My fix shouldn't be Ruby-specific since the problem is with PigServer stuffing any UDF scripts into the job jar. Looking at your test code, one thing that I haven't thought about is "../", although that shouldn't be an issue now as in the registerCode() method, relative paths are always converted to absolute paths by FileLocalizer.fetchFile().
Nevertheless, handling "../" as well might be a good idea to make that method more robust. Thanks!

Hi Cheolsoo, you are right. This issue is fixed as a byproduct of PIG-2623, which converts the relative path to an absolute path. All the Scripting tests pass for hadoop 23 now. I will enable those tests for 23. However, there is one other hole left. If we import another python module, Pig cannot pack/refer to the path of the dependent python module correctly. Here is one example:

udf.py:

from base import square

@outputSchemaFunction("squaresquareSchema")
def squaresquare(num):
    if num == None:
        return None
    return (square(num)*square(num))

@schemaFunction("squaresquareSchema")
def squaresquareSchema(input):
    return input

base.py:

def square(num):
    if num == None:
        return None
    return ((num)*(num))

Pig script:

register 'udf.py' using jython as myfuncs;
a = load '1.txt' as (a0:int);
b = foreach a generate myfuncs.squaresquare(a0);
dump b;

Pig incorrectly packs base.py as /base.py in job.jar, and fails to refer to it in the backend. It happens in both 20 and 23.

Cheolsoo, I am looking into the issue Daniel mentioned. Should have a fix soon. I also see the same issue with the e2e Scripting tests where Jython UDF scripts are not found in the classpath. Applying the change that I described lets those tests pass as well.
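The normalization discussed in the patch can be illustrated in isolation. The following is a standalone sketch (not the actual Pig source): it shows how stripping a leading "/" or "./" with replaceFirst leaves the script stored under a relative name in the jar, which getResourceAsStream can then resolve; the paths reuse the ones quoted in the report.

```java
public class PathNormalizeDemo {
    // Standalone sketch (not the actual Pig code): strip a leading "/" or "./"
    // so the script ends up with a relative path inside the job jar.
    static String normalize(String scriptPath) {
        return scriptPath.replaceFirst("^\\./|^/", "");
    }

    public static void main(String[] args) {
        // Absolute path, as seen in the failing "bad.jar" listing above.
        System.out.println(normalize("/home/cheolsoo/pig-0.10/test/e2e/pig/testdist/libexec/ruby/scriptingudfs.rb"));
        // A path prefixed with "./" is also handled.
        System.out.println(normalize("./libexec/ruby/scriptingudfs.rb"));
        // Already-relative paths are left untouched.
        System.out.println(normalize("libexec/ruby/scriptingudfs.rb"));
    }
}
```

With the leading "/" gone, the jar entry matches what getResourceAsStream("/" + scriptPath) looks up, mirroring the "good.jar" listing above.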
https://issues.apache.org/jira/browse/PIG-2745
Hey guys! I have a question about the FCC problem where you use the reduce method. Link below. It has an object containing multiple movies, and the challenge is to find the average score of Christopher Nolan movies. The code I have is : var averageRating = watchList .filter(movie => {return movie.Director=="Christopher Nolan"}) .reduce((sum, nolanMovies) => { return (sum + parseFloat(nolanMovies.imdbRating)); },0) /(watchList.filter(movie => {return movie.Director=="Christopher Nolan"}).length); // <--there has to be a better //way. ; while this works, I feel it is kind of ugly. As you can see, I could filter out the movie list and get the sum of the ratings but didn’t know how to get the amount of movies there are without invoking the filter method again. I feel like there has to be a better way to do this but I just couldn’t find out. What would be the best way so I can convert the sum of scores into an average?
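One common way to avoid invoking filter twice is to store the filtered array in a variable first, then divide by its length. The watchList below is a tiny stand-in for the FCC dataset (only the two fields used here), just to keep the sketch self-contained:

```javascript
// Tiny stand-in for the FCC watchList: only the fields used here.
const watchList = [
  { Director: "Christopher Nolan", imdbRating: "8.8" },
  { Director: "Christopher Nolan", imdbRating: "9.0" },
  { Director: "Ridley Scott", imdbRating: "8.0" },
];

// Filter once, keep the result, and reuse its length for the average.
const nolanMovies = watchList.filter(m => m.Director === "Christopher Nolan");
const averageRating =
  nolanMovies.reduce((sum, m) => sum + parseFloat(m.imdbRating), 0) /
  nolanMovies.length;

console.log(averageRating); // 8.9
```

An alternative is a single reduce that accumulates both a running sum and a count and divides at the end, but naming the filtered array is usually the most readable fix.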
https://www.freecodecamp.org/forum/t/functional-programming-use-the-reduce-method-to-analyze-data/201817/3
How to design a program without a UI - victor wang

Hi All, I'm using Qt 5.5 on my computer and Linux + Qt for my OS system. I want to design a program that can send commands to an MCU via RS232. I want to do it without any UI. How can I do that? Thanks in advance!

When creating your project, simply select "Qt Console Application".

- victor wang
@JohanSolo Thanks! But I am facing another problem. I can't find this file when I include it:

#include <stdio.h>
#include <string.h>

This is my .pro file:

QT += core
QT += serialport
QT -= gui
TARGET = mcu
CONFIG += console
CONFIG += app_bundle
TEMPLATE = app
SOURCES += main.cpp

Why can't I find it in the Qt console project? And I can find the file when I create a Qt Application? Thanks in advance!

AFAIK CONFIG += app_bundle is only valid for macOS, but I don't know whether it hurts (I guess not). How did you install Qt (QT is Apple's QuickTime)? Is it possible you don't have the dev C++ libs? Btw stdio.h is a C header, the C++ one is cstdio, and in a similar way, use cstring instead of string.h.

- jsulm Moderators
@victor-wang said in How to design a program without a UI:

#include <stdio.h>
#include <string.h>

Why do you want to use C instead of C++? If you really want, do it like this:

#include <cstdio>
#include <cstring>
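For the original question - sending a command to an MCU over RS232 from a console program - a minimal sketch using QCoreApplication and QSerialPort (which the serialport entry in the .pro file enables) might look like the following. This is an untested illustration: the port name ("/dev/ttyUSB0"), the baud rate, and the "CMD\r\n" bytes are placeholders to adapt to the actual device.

```cpp
#include <QCoreApplication>
#include <QSerialPort>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);   // no GUI, just a console application

    QSerialPort port;
    port.setPortName("/dev/ttyUSB0");   // placeholder: your serial device
    port.setBaudRate(QSerialPort::Baud9600);

    if (!port.open(QIODevice::ReadWrite)) {
        qDebug() << "Failed to open port:" << port.errorString();
        return 1;
    }

    port.write("CMD\r\n");              // placeholder command for the MCU
    port.waitForBytesWritten(1000);     // block up to 1 s; fine without an event loop
    port.close();
    return 0;
}
```

Since nothing here needs the event loop, the program can exit right after writing; a long-running tool would instead call app.exec() and react to readyRead signals.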
https://forum.qt.io/topic/83495/how-to-designed-the-program-that-without-ui
Hi Guys,

I am thinking to install CNG in my Honda City IDSI 2008. I need some feedback on how good this car is on CNG and what the issues are (if any). Furthermore, which conversion centre should I go to to install CNG? (LHR) I am thinking to put a 60 KG single cylinder OR 2 cylinders of 30 KG each.

Any suggestions are more than welcome.

Thanks

city idsi runs perfect on cng, no worries, install cng in it n enjoy

just go for it its flawless on Cng

My two friends were running their Honda Citys on CNG since 2006 and they are very happy with them they got no issue till date.

NO if it is Vario. YES if its manual. you will enjoy it on CNG if properly installed from some professional. 55KG cylinder is recommended. depends on ur daily running otherwise 55 is also enough for city

Thanks for the feedback guys. @MAJsohail.... Yes its manual. Daily Running is not much.. you can say 20 to 30 KMs but I travel to ISB frequently. (every after a month or 2 usually) On long route its average is very good also but again whatever it is.. it costs more than double as compared to CNG. Anyways which conversion centre is good and they've experts (in LHR)?

Going through some old threads and noticed that people were against installing CNG in IDSI in 2006... Now the tradition has been totally changed I guess.

My Daily running on IDSi is more than 90KM within the city and its being 3 years I never even tuned up the CNG and its working very fine. 55KG cyl and covers 160KMs. CNG works very fine in IDSi if original parts are installed and from a reputable installer. just keep your air filter clean all the time. CNG is failed in auto transmission in IDSi but in manual no problem. People facing problems in manual transmission, either parts, advancer etc are not genuine or installed by a newbie

i had city 2006 idsi(manual) on cng it runs perfect

Well don't worry about that.
Every kid on PW who drives his father's car in which he fills petrol from his father's money is against CNG and yells his throat out that CNG is harmful..... Go for it..... no problems

So true ^

Brother u can contect me @ 0321-4430780 for CNG Kit installation.

go for it ... im driving one for long time on cng.

Quick Question... What happens with the Engine Check Light when we install CNG?

sir there is a device known as an EMULATOR, its basically in its meaning in its name, it emulates the ecu of fuel entering from the jets (when in real time GAS is flowing). also an ADVANCER is installed on all efi cars, that is to preempt the timings. there is one problem that you might come by and i also have it in my car, 3 of my friends have also experienced it, its negligible but still is an issue: in the winters the car is constantly rattling its RPM when its temperature COOL light is on, and sometimes it does when its warm as well. my car sometimes takes a looooooooong sulf when it gets switched off cux of RPM rattle, and there are worse nightmares: the BRAKES dont work optimum when its rattling and the electronics shiver as well. there is no issue with the steering as its not hydraulic in the city IDSI. but all these issues are very very minor. there is no major difference in performance. if u have some good filling station of CNG who is reliable and reputed its really economical to use this car on CNG and we should

Dear Brother Wasif, if u r still in any trouble regarding any issue, specially CNG, then dont hesitate to contect me @ 03214430780. I will guide u inshallah.

i don't see the need for cng, this car is giving a really good average on petrol, if driven properly.
https://www.pakwheels.com/forums/t/honda-city-idsi-on-cng/139684
The next example shows how to use the new statement to create an instance of a user-defined object. The object function -- the thing that defines the object -- is also shown. This simplistic object assigns the string passed to it to a name property. That property is displayed in an alert box, using the syntax alert (newObj.name).

newObj = new myObject ("Gordon");
alert (newObj.name);

function myObject (Param1) {
    this.name = Param1;
    return (this);
}

Return

The return statement is used to mark the end of a function. When JavaScript encounters the return statement, it "returns" to that spot immediately following the call to the function. The return statement can be used with and without a return value. If a value is included with the return statement, then the function returns that value. If no value is included with the return statement, then the function returns a null (nothing). The return statement may not be used outside of a function. JavaScript reports an error if you attempt to use return outside a function. Here are two examples of return, with and without a value:

function myFunc() {
    var OutString = "This is a test";
    return (OutString);
}

function myFunc() {
    OutString = "This is a test";
    return;
}

It is common JavaScript practice to omit the parentheses surrounding the return value. I use the parentheses in deference to traditional C programming practice. Feel free to drop the parentheses. When using return more than once in a function, be sure to use it in the same way each time, otherwise JavaScript will display a warning message. For instance, suppose your function tests if a value passed to it is less than 5. You want the function to return differently depending on whether this value is less than or more than 5. Here are the wrong and right ways to do it. Note that only one return statement returns a value.
First, the wrong way:

function testMe (Val) {
    if (Val < 5)
        return (true);
    else
        return;
}

Now, the right way:

function testMe (Val) {
    if (Val < 5)
        return (true);
    else
        return (false);
}

This

The this keyword (it's not really a statement) refers to the current object and is shorthand for using the formal name of the object. It is typically used in one of three ways:

- To refer to the current form or control in an event handler (such as onClick or onSubmit)
- To define a new property in a user-defined object

You can use the this operator with or without an object name, as in this or this.object. The object name helps to disambiguate what "this" means. For instance, suppose you want to return the whole form with an onClick event handler. You'd do it this way, to tell JavaScript you want the entire form object:

<INPUT TYPE="button" NAME="button" VALUE="Click" onClick="test(this.form)" >

Using just the this keyword, the onClick event handler passes only the button.

<INPUT TYPE="button" NAME="button" VALUE="Click" onClick="test(this)" >

Var

The var statement is used to explicitly declare a variable. You may also define a value for the variable at the same time as you declare it, but this is not necessary. The var statement also sets the "scope" of a variable when the variable is defined inside a function. More about this in a bit. The basic syntax is:

var VariableName;

or

var VariableName = value;

- VariableName is the name of the variable you wish to use.
- value is the value you want to assign to the variable.

In the first example, the variable VariableName is declared (allocated in memory), but its contents remain empty. In the second example, the VariableName variable is declared, and a value is assigned to it at the same time.
Used outside of a user-defined function, both of these do exactly the same thing:

var VariableName = "value";
VariableName = "value";

Both create a global variable (the variable is considered "global in scope") -- that is, a variable that can be accessed from any function in any window or frame that is currently loaded. For example, the following works because MyVar is declared outside the function and is therefore global (again, the var statement is optional here):

var MyVar = 5; // or MyVar = 5;

function testVariable () {
    alert (MyVar);
}

Conversely, an error will occur if the MyVar variable is assigned in another function with the var statement, making it local to that function. The variable is considered "local in scope."

function defineVariable() {
    var MyVar = 5;
}

function testVariable () {
    alert (MyVar);
}

You also use the var statement if you've defined a global variable (see above) and want to use a separate, local variable of the same name in a function. In this example, MyVar is declared globally outside the testVariable() function. It is again defined inside the testVariable() function, this time with the var keyword to make it local. When this test is run, JavaScript displays 10 in the first alert box in the testVariable() function, and 5 in the anotherTest() function.

var MyVar = 5;

function testVariable () {
    var MyVar = 10;
    alert ("testVariable function: " + MyVar);
    anotherTest();
}

function anotherTest() {
    alert ("anotherTest function: " + MyVar);
}

While

The while statement sets up a unique repeating loop that causes the script to repeat a given set of instructions. The looping continues as long as the expression in the while statement is true. When the while statement proves false, the loop is broken and the script continues. Any JavaScript code inside the while statement block -- defined by using the { and } characters -- is considered part of the loop and is repeated.
The syntax of the while statement is:

while (Expression) {
    // stuff to repeat
}

In the following example, the while loop is repeated for a total of ten times. With each iteration of the loop, JavaScript prints text to the screen.

Count = 0;
while (Count < 10) {
    document.write ("Iteration: " + Count + "<BR>");
    Count++;
}

Here's how this example works. First, the Count variable is set to 0. This is the initial value that will be compared in the while expression, which is Count < 10. Count starts at 0. With each iteration of the loop, text is printed to the screen, and the Count variable is increased by 1. The first ten times the loop is repeated Count < 10 is true, so the loop continues. After the tenth trip, the Count variable contains the number 10, and the while expression proves false. The loop is broken, and JavaScript skips to the end of the while statement block (that portion after the } character). The while loop is what is known as an entry loop. The test expression appears at the beginning of the loop structure. The contents of a while loop may never be executed, depending on the outcome of the test expression. For example,

Response = prompt ("Please enter a number greater than 1");
Count = 1;
while (Count <= Response) {
    document.write ("Count: " + Count + "<BR>");
    Count++;
}

In this JavaScript example, a prompt method asks you to enter a number greater than 1. This number is then used to control the iterations of a while loop. If you type 5, for example, the loop will repeat five times. The loop continues as long as the while expression -- Count <= Response -- remains true. Notice that the Count variable starts at 1. In this way, if you enter a 0 in response to the prompt, the while loop fails, and the instructions inside the while loop block are never executed (because 1 <= 0 is false). While loops are particularly handy for insisting that the user enters valid data.
Here's a simplified version of a data validator (try the example to see how it works):

Response = "";
while ((Response == "") || (Response == "<undefined>")) {
    Response = prompt ("Please enter your name", "");
}
if (Response != null)
    alert ("Hello, " + Response);

The first step is to assign an empty string to the Response variable. Since Response is empty, the while expression evaluates to true, and the prompt dialog appears, asking for the user's name. Since the Response string is empty, an empty string appears within the prompt's text box. The script will not allow you to proceed unless you either click the Cancel button or type some text. Notice the if test. Should the user click on Cancel, the value returned from the prompt dialog is null. Should the value be other than null, the script assumes it's a valid name, and an alert box greeting the user is displayed.

With

The with statement is designed as a time and space saver. You use it to help cut down on extraneous keystrokes when writing a JavaScript program. The with statement is typically used with the built-in Math object, as it requires you to specify the name of the object when accessing any of its properties, such as:

alert (Math.PI);
alert (Math.round(1234.5678));

The syntax for with is:

with (object) {
    statements
}

- object is the object you want to use
- statements are one or more statements you want to execute using the object as the default

By using the with statement, you can cut out the references to the Math object, as it is implied. Be sure to enclose all the property constructions you wish to associate with the with statement inside { and } characters. These characters define the with statement block and are mandatory.

with (Math) {
    alert (PI);
    alert (round(1234.5678));
}

You are not limited to using the with statement with just Math objects. You can use it with almost any object, even objects you define yourself.
For example, suppose you want to perform several actions on the document.forms[0].textbox1 object -- a textbox control in the first form of the document. There's no need to spell out this long and drawn-out object name each time; you can use the with statement instead.

    function test() {
        with (document.forms[0].textbox1) {
            alert(name);
            alert(value);
        }
    }

Because you've used the name and value properties in a with statement block, JavaScript automatically assumes they refer to the object specified in the with expression, in this case document.forms[0].textbox1.

Conclusion

Statements are the glue that holds any JavaScript program together. While JavaScript doesn't have as many statements as some other languages, including its "big brother" Java, the variety of statements it does offer provides ample opportunity for programmers to create almost any application they can think up.

Learn more about this topic

- Password protect a file using JavaScript
- Netscape's documentation for JavaScript
- JavaScript index of sites
- Good entry-level tutorial on using JavaScript
- "Ask the JavaScript Pro" provides solutions to selected problems

This story, "Understanding and using JavaScript statements" was originally published by JavaWorld.
https://www.infoworld.com/article/2077317/understanding-and-using-javascript-statements.html?page=2
Machine learning has been used for years to offer image recognition, spam detection, natural speech comprehension, product recommendations, and medical diagnoses. Today, machine learning algorithms can help us enhance cybersecurity, ensure public safety, and improve medical outcomes. Machine learning systems can also make customer service better and automobiles safer.

When I started experimenting with machine learning, I wanted to come up with an application that would solve a real-world problem but would not be too complicated to implement. I also wanted to practice working with regression algorithms. So I started looking for a problem worth solving. Here's what I came up with. If you're going to sell a house, you need to know what price tag to put on it, and a computer algorithm can give you an accurate estimate! In this article, I'll show you how I wrote a regression algorithm to predict home prices.

Regression in a nutshell

Put simply, regression is a machine learning tool that helps you make predictions by learning -- from the existing statistical data -- the relationships between your target parameter and a set of other parameters. According to this definition, a house's price depends on parameters such as the number of bedrooms, living area, location, etc. If we apply machine learning to these parameters, we can calculate house valuations in a given geographical area. The idea of regression is pretty simple: given enough data, you can observe the relationship between your target parameter (the output) and other parameters (the input), and then apply this relationship function to real observed data. To show you how a regression algorithm works, we'll take into account only one parameter -- a home's living area -- to predict price. It's logical to suppose that there is a linear relationship between area and price.
And as we remember from high school, a linear relationship is represented by a linear equation:

    y = k0 + k1*x

In our case, y equals price and x equals area. Predicting the price of a home is as simple as solving the equation (where k0 and k1 are constant coefficients):

    price = k0 + k1 * area

We can calculate these coefficients (k0 and k1) using regression. Let's assume we have 1000 known house prices in a given area. Using a learning technique, we can find a set of coefficient values. Once found, we can plug in different area values to predict the resulting price.

[In this graph, y is price and x is living area. Black dots are our observations. Moving lines show what happens when k0 and k1 change.]

But there is always a deviation, or difference between a predicted value and an actual value. If we have 1000 observations, then we can calculate the total deviation of all items by summing the deviations for each k0 and k1 combination. Regression takes every possible value for k0 and k1 and minimizes the total deviation; this is the idea of regression in a nutshell. But in real life, there are other challenges you need to deal with. House prices obviously depend on multiple parameters, and there is no clear linear relationship between all of these parameters. Now I'm going to tell you how I used regression algorithms to predict house prices for my pet project.

How to use regression algorithms in machine learning

1. Gather data

The first step for any kind of machine learning analysis is gathering the data -- which must be valid. If you can't guarantee the validity of your data, then there's no point analyzing it. You need to pay attention to the source you take your data from. For my purposes, I've relied on a database from one of the largest real-estate portals in the Netherlands. Since the real estate market in the Netherlands is strictly regulated, I didn't have to check its validity.
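For the single-parameter case, minimizing the total (squared) deviation has a closed-form solution: ordinary least squares. The following sketch is not from the original article -- the sample observations are made up for illustration -- but it shows how k0 and k1 can be computed directly:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = k0 + k1*x (single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    k1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the line passes through the point of means
    k0 = mean_y - k1 * mean_x
    return k0, k1

# Hypothetical observations: (living area in m^2, price in euros)
areas = [100, 120, 150, 170]
prices = [250000, 290000, 350000, 390000]

k0, k1 = fit_line(areas, prices)
predicted = k0 + k1 * 130  # predict the price of a 130 m^2 home -> 310000.0
```

With this perfectly linear toy data the fit is exact (k0 = 50000, k1 = 2000); real observations would scatter around the line, which is exactly the deviation the regression minimizes.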
Initially I had to do some data mining because the required data were available in various formats across multiple sources. In most real-world projects you'll have to do some data mining as well. We won't discuss data mining here, however, as it's not really relevant to our topic. For the purpose of this article, let's imagine that all the real-estate data I found were in the format shown below.

2. Analyze data

Once you've gathered data it's time to analyze it. After parsing the data I got the following records (one JSON object per property):

    {"has_garden": 1, "year": 1980, "lng": 4.640685, "has_garage": 0, "changes_count": "2", "area": 127, "bedrooms_count": "4", "com_time": 57, "price": 305000, "energy_label": 1, "lat": 52.30177, "rooms_count": "5", "life_quality": 6, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 4.656503, "has_garage": 0, "changes_count": "3", "area": 106, "bedrooms_count": "3", "com_time": 64, "price": 275000, "energy_label": 3, "lat": 52.35456, "rooms_count": "4", "life_quality": 5, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 4.585596, "has_garage": 0, "changes_count": "3", "area": 106, "bedrooms_count": "3", "com_time": 74, "price": 244000, "energy_label": 3, "lat": 52.29309, "rooms_count": "4", "life_quality": 6, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 4.665817, "has_garage": 0, "changes_count": "2", "area": 102, "bedrooms_count": "4", "com_time": 77, "price": 199900, "energy_label": 3, "lat": 52.14919, "rooms_count": "5", "life_quality": 6, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 4.620336, "has_garage": 0, "changes_count": "1", "area": 171, "bedrooms_count": "3", "com_time": 79, "price": 319000, "energy_label": 1, "lat": 52.27822, "rooms_count": "4", "life_quality": 6, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 5.062804, "has_garage": 1, "changes_count": "1", "area": 139, "bedrooms_count": "5", "com_time": 38, "price": 265000, "energy_label": 5, "lat": 52.32992, "rooms_count": "6", "life_quality": 7, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 5.154957, "has_garage": 1, "changes_count": "2", "area": 129, "bedrooms_count": "4", "com_time": 57, "price": 309500, "energy_label": 1, "lat": 52.35634, "rooms_count": "5", "life_quality": 6, "house_type": 1, "is_leasehold": 0}
    {"has_garden": 1, "year": 1980, "lng": 4.622486, "has_garage": 0, "changes_count": "1", "area": 125, "bedrooms_count": "4", "com_time": 76, "price": 289000, "energy_label": 1, "lat": 52.2818, "rooms_count": "5", "life_quality": 6, "house_type": 1, "is_leasehold": 0}

Here's what the fields mean:

- has_garden – does the property have a garden? 1 – true, 0 – false
- year – year of construction
- lat, lng – house location coordinates
- area – total living area
- has_garage – does the property have a garage? 1 – true, 0 – false
- bedrooms_count – number of bedrooms
- rooms_count – total room count
- energy_label – energy efficiency label (assigned to each house in the Netherlands)
- life_quality – life quality mark calculated for each district by local authorities
- house_type – property type (1 – house, 0 – apartment)
- com_time – commuting time to Amsterdam center
- changes_count – number of transport changes if you go to Amsterdam center by public transport

I worked on the assumption that these are all measurable data that affect a home's price. Of course, there may be more parameters that matter as well, such as house condition and location. But these parameters are more subjective and almost impossible to measure, so I ignored them.

3. Check the correlation between parameters

Now you need to check for strong correlations among the given parameters. If there are any, then remove one of the correlated parameters. In my dataset there were no strong correlations among values.

4.
Remove outliers from the dataset

Outliers are observation points that are distant from other observations. For example, in my data there was one house with an area of 50 square meters for a price of $500K. Such houses may exist on the market for various reasons, but they are not statistically meaningful. I want to make a price estimate based on the market average, and so I won't take such outliers into account. Most regression methods explicitly require that outliers be removed from the dataset, as they may significantly affect the results. To remove the outliers I used the following function:

    def get_outliners(dataset, outliers_fraction=0.25):
        clf = svm.OneClassSVM(nu=0.95 * outliers_fraction + 0.05,
                              kernel="rbf", gamma=0.1)
        clf.fit(dataset)
        result = clf.predict(dataset)
        return result

This will return -1 for outliers and 1 for non-outliers. Then you can do something like this:

    training_dataset = full_dataset[get_outliners(full_dataset[analytics_fields_with_price], 0.15) == 1]

After that you will have non-outlier observations only. Now it's time to start regression analysis.

5. Choose a regression algorithm

There's more than one way to do regression analysis. What we're looking for is the best prediction accuracy given our data. But how can we check accuracy? A common way is to calculate the so-called r^2 score, which is based on the squared differences between actual and predicted values. It's important to remember that if we use the same dataset both for learning and for checking our accuracy, our model may overfit. This means it will show excellent accuracy on the given dataset but will completely fail when given new data. A common approach to solving this problem is to split the original dataset into two parts and then use one for learning and the other for testing. This way we simulate new data for our learning model, and if there is an overfit, we can spot it. We can split our dataset using a proportion of 80/20: 80% for training and the remaining 20% for testing.
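For intuition, the r^2 score can also be computed by hand: it compares the model's squared errors against those of a trivial baseline that always predicts the mean. Here is a minimal sketch, not from the original article, with made-up sample numbers:

```python
def r2_score(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    # Residual sum of squares: the model's squared errors
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    # Total sum of squares: squared errors of the always-predict-the-mean baseline
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

actual = [200000, 250000, 300000, 350000]
predicted = [210000, 240000, 310000, 340000]
score = r2_score(actual, predicted)  # close to 1.0 means a good fit
```

A score of 1.0 means perfect predictions, 0 means no better than predicting the mean, and negative values mean worse than that baseline.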
Let’s take a look at this piece of code: //code for algorithm quality estimation from sklearn import svm import matplotlib.pyplot as plt from sklearn.neighbors import KNeighborsRegressor from sklearn.linear_model import LinearRegression, LogisticRegression from sklearn.svm import SVR from sklearn.metrics import r2_score from sklearn.cross_validation import train_test_split from sklearn.ensemble import RandomForestRegressor import pandas as pd #prepare dataset #.... #spilt dataset Xtrn, Xtest, Ytrn, Ytest = train_test_split(training_dataset[analytics_fields], training_dataset[['price']], test_size=0.2) # model = RandomForestRegressor(n_estimators=150, max_features='sqrt', n_jobs=-1) # случайный лес models = [LinearRegression(), RandomForestRegressor(n_estimators=100, max_features='sqrt'), KNeighborsRegressor(n_neighbors=6), SVR(kernel='linear'), LogisticRegression() ] TestModels = pd.DataFrame() tmp = {} for model in models: # get model name m = str(model) tmp['Model'] = m[:m.index('(')] # fit model on training dataset model.fit(Xtrn, Ytrn['price']) # predict prices for test dataset and calculate r^2 tmp['R2_Price'] = r2_score(Ytest['price'], model.predict(Xtest)) # write obtained data TestModels = TestModels.append([tmp]) TestModels.set_index('Model', inplace=True) fig, axes = plt.subplots(ncols=1, figsize=(10, 4)) TestModels.R2_Price.plot(ax=axes, kind='bar', title='R2_Price') plt.show() As a result, I got the following graph: As you can see, the RandomForest regressor showed the best accuracy, so we decided to use this algorithm for production. Price prediction in production works pretty much the same as in our test code except there’s no need to calculate r^2 and switch models anymore. At this point, we can offer fair price predictions. We can compare the actual price of a house with our predicted price and observe the deviation.
https://yalantis.com/blog/predictive-algorithm-for-house-price/
This article describes a simple approach to converting Excel to HTML by using C#/VB.NET. We will use as little code as you can imagine to convert Excel to HTML in C#/VB.NET, with the quality you expect. The appearance of the data can be improved further, as you can edit the HTML and apply your own CSS; thus almost any effect and requirement you or your clients wish for can be achieved.

Spire.XLS for .NET provides a well-designed class, Workbook, which allows us to create a new Excel file or load an existing one, and then convert it to HTML; meanwhile, most optimizations on the Excel file are also allowed during the conversion. Spire.XLS for .NET is always open to any kind of trial and evaluation, so please feel free to download Spire.XLS for .NET and then follow our guide to quickly convert Excel to HTML using C#/VB.NET, or try other functions of Spire.XLS for .NET.

Now please check my code samples for converting Excel to HTML in C#/VB.NET.

C#:

    using Spire.Xls;

    namespace Xls2Html
    {
        class Program
        {
            static void Main(string[] args)
            {
                //load Excel file
                Workbook workbook = new Workbook();
                workbook.LoadFromFile(@"..\..\test.xls");

                //convert Excel to HTML
                Worksheet sheet = workbook.Worksheets[0];
                sheet.SaveToHtml("sample.html");

                //Preview HTML
                System.Diagnostics.Process.Start("sample.html");
            }
        }
    }

VB.NET:

    Imports Spire.Xls

    Namespace Xls2Html
        Class Program
            Private Shared Sub Main(args As String())
                'load Excel file
                Dim workbook As New Workbook()
                workbook.LoadFromFile("..\..\test.xls")

                'convert Excel to HTML
                Dim sheet As Worksheet = workbook.Worksheets(0)
                sheet.SaveToHtml("sample.html")

                'Preview HTML
                System.Diagnostics.Process.Start("sample.html")
            End Sub
        End Class
    End Namespace

Now we can run this application and a perfect HTML page will be presented in your browser. Please see our effective screenshot below:
http://www.e-iceblue.com/Knowledgebase/Spire.XLS/Program-Guide/How-to-Convert-Excel-to-HTML.html
AxKit::XSP::IfParam - Equivalent of XSP Param taglib, but conditional.

Add the taglib to AxKit (via httpd.conf or .htaccess):

    AxAddXSPTaglib AxKit::XSP::IfParam

Add the if-param: namespace to your XSP <xsp:page> tag:

    <xsp:page

Then use the tags:

    <if-param:foo>
      Someone sent a foo param! Value was: <param:foo/>
    </if-param:foo>

This library is almost exactly the same as the XSP param taglib, except it gives conditional sections based on parameters. So rather than having to say:

    <xsp:logic>
    if (<param:foo/>) {
        ...
    }
    </xsp:logic>

You can just say:

    <if-param:foo>
    ...
    </if-param:foo>

Which makes life much easier.

Matt Sergeant, matt@axkit.com

This software is Copyright 2001 AxKit.com Ltd. You may use or redistribute this software under the same terms as Perl itself.
http://search.cpan.org/dist/AxKit-XSP-IfParam/IfParam.pm
Introduction

With AWS Lambda functions, you don't need to configure a virtual machine in the cloud to use them. You are also not able to log on to the machine where your function runs. One of the advantages is that you will never have to patch these servers with updates: Amazon will do that for you.

You can write Lambda functions in several languages. Let's look at the list that is available at the moment you read this blog: log on to your AWS account, in the top menu select Services, type and choose Lambda:

After that, click on the "Create function" button:

Click now on the arrow-down image under Runtime:

You will see the list of all runtimes that are available now:

The code I present here is written in Python version 3.8. Some libraries (for example: default system libraries or the libraries to access AWS services) are already present; other libraries have to be sent along with your function code to let your function work. In this example, we only use default system libraries and the boto3 library to access AWS functions, so the functions in this example are pretty simple.

Connection with AWS Identity and Access Management (IAM)

In general, your Lambda functions will also need permissions to use other AWS services. In our shop example, all Lambda functions use AWS CloudWatch for logging. The accept and decrypt functions will send data to SNS, the process function will send data to DynamoDB. The decrypt function also uses AWS Key Management Service (KMS) to do the decryption of the data. To give a Lambda function access to these services, the function can assume a role. This role is written in AWS Identity and Access Management. We'll look into that now: go to AWS IAM:

In the left menu, click on Roles:

Let's look at one role: click on the link AMIS_lambda_accept_role:

You see that there is one policy attached: the AMIS_blog_lambda_access_policy.
When you click on this policy, you see that the accept access policy allows for the creation of AWS CloudWatch log groups, the creation of AWS CloudWatch log streams and the permission to add (put) AWS CloudWatch log events. The policy also allows publishing SNS events and getting public KMS keys. When you look at the policies for the other Lambda functions, you will see that all policies are slightly different. It is good practice to create a role and a policy for every function: you then follow the least-privilege security principle.

Inside the accept function

Let's go back to Lambda and look at the definition of the accept function. Go to the Lambda service (see the first screen image in this blog if you need to), and click on the link of the AMIS_accept Lambda function (not on the radio button in front of it):

You will now see the code of AMIS_accept:

    import json
    import boto3
    import os

    # Main function
    # -------------
    def lambda_handler(event, context):
        from botocore.exceptions import ClientError
        try:
            # Log content of the data that we received from the API Gateway.
            # The output is sent to CloudWatch
            print("BEGIN: event:" + json.dumps(event))

            # Initialize the SNS module and get the topic arn.
            # These are placed in the environment variables of the accept
            # function by the Terraform script
            sns = boto3.client('sns')
            sns_decrypt_topic_arn = os.environ['to_decrypt_topic_arn']

            # Publish all the incoming data to the SNS topic
            message = json.dumps(event)
            print("Message to to_decrypt: " + message)
            sns.publish(
                TopicArn=sns_decrypt_topic_arn,
                Message=message
            )

            # This succeeded, so inform the client that all went well
            # (when there are errors in decrypting the message or dealing with
            # the data, the client will NOT be informed by the status code)
            statusCode = 200
            returnMessage = "OK"

        except ClientError as e:
            # Exception handling: send the error to CloudWatch
            print("ERROR: " + str(e))

            # Inform the client that there is an internal server error.
            # Mind that the client will also get a 500 error when there is
            # something wrong in the API gateway. In that case, the text is
            # "Internal server error".
            #
            # To be able to tell the difference, send a specific application
            # text back to the client
            statusCode = 500
            returnMessage = "NotOK: retry later, admins: see cloudwatch logs for error"

        # To make it possible to debug faster, put everything on one line.
        # Also show some metadata that is in the context
        print("DONE: statusCode: " + str(statusCode) + \
              ", returnMessage: \"" + returnMessage + "\"" + \
              ", event:" + json.dumps(event) + \
              ", context.get_remaining_time_in_millis(): " + str(context.get_remaining_time_in_millis()) + \
              ", context.memory_limit_in_mb: " + str(context.memory_limit_in_mb) + \
              ", context.log_group_name: " + context.log_group_name + \
              ", context.log_stream_name: " + context.log_stream_name)

        return {
            "statusCode": statusCode,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(returnMessage)
        }

When you use print statements, the output will automatically be sent to CloudWatch. If necessary, a new log group and a new log stream will be created. To be able to send a message to the SNS topic, the boto3 library is used. We have to know what the Amazon Resource Name (ARN) of the topic is. To keep the code clean, the ARN of this topic is not in the code itself, but in the environment variables of this function. You can see this when you scroll down: the environment variables are directly under the code of the function:

In my example, Terraform is used to deploy the AWS objects. Both the SNS topics and the Lambda functions are deployed in the same script, and the ARN of the SNS topic is added as an environment variable to the Lambda function. In the code, you see how the environment variable is retrieved from the environment, and then the whole event as it was sent to the accept function is used as a message to the decrypt topic.
When I discussed the policy for the accept function, you might have asked yourself why we needed KMS keys in this function: we don't seem to use encryption here. Well, the environment variables are always encrypted, using a KMS key. Lambda uses a default key for this; you can see this by going to the KMS service and clicking on "AWS managed keys" in the left window. You will see that one of the keys is aws/lambda:

The decryption of the environment variable is done in the background; we don't have to add code for this ourselves.

Handler

In this example, it is quite clear where the Lambda function starts: this code has just one function. It is, however, possible to have multiple functions in your code. The AWS environment needs to know the name of the function that will be the starting point for the execution. This is called the handler. You can see the handler name just above the code: in our case it is called "accept.lambda_handler". In this name, accept refers to the name of the file with the code, in our case accept.py. The part behind the dot refers to the name of the function: in our case lambda_handler.

Testing Lambda functions

At the top of the screen, you see some options to test our Lambda function. Let's try them out: click on the drop-down arrow next to "Select a test event":

You can now select "Configure test events":

You see a pretty straightforward test JSON. You can change this in any way you want. When you are ready, change the event name (e.g. to "first") and use the Create button to create the test template:

You can fire off the event by clicking on the Test button:

Cloudwatch

The test was successful. Let's look at what information is sent to CloudWatch: click on the link "logs" (next to "Execution result: succeeded"):

A new tab opens, with the CloudWatch logs. You can see that there is a log stream, with the date in it. It also has a recent Last Event Time.
Click on the most recent Log Stream in this screen:

You can see that the command print("BEGIN: event:"+json.dumps(event)) sent out our test event. The last line is also interesting: it will always be added to every Lambda call, and it contains the Duration, the Billed Duration, the memory size and the max memory used. Lambdas are billed based on both the number of milliseconds that the function has run and the amount of memory that has been assigned. In our case, we assigned the minimum amount of memory possible. If you want to go to the CloudWatch logs without sending a test message, then go to the CloudWatch service and choose the Logs > Log groups item in the left menu.

Let's go back to the tab of the Lambda function and scroll down to where these settings are configured: these settings are below the code, below the environment variables we looked at before. Click on Edit:

When you look at the default settings, you can see that the amount of memory is by default 128 MB and the timeout is by default 3 seconds. You can also see the name of the role that we saw earlier. When you need more memory or more time, you can change these settings. The maximum timeout value is 15 minutes. You will see that when you ask for more memory, the amount of time that your Lambda function uses will be lower: AWS will use better-performing servers for Lambda functions that ask for more memory.

When you play along, you will see a different duration for the same Lambda function. The first time will take much more time than the second or the third time. Sometimes, however, the function will create a new log group and then start again with a first invocation, which again will take relatively long. In my environment, the first invocation takes more than 1000 milliseconds (one second), where the second or third one takes 100-300 milliseconds.
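Since billing is based on the billed duration multiplied by the assigned memory, you can estimate the compute cost of an invocation yourself. A back-of-the-envelope sketch follows; the price per GB-second used here is illustrative only, not an official AWS figure -- always check the current AWS price list:

```python
# Hypothetical price per GB-second, for illustration only
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(billed_duration_ms, memory_mb):
    """Estimate the compute cost of a single Lambda invocation."""
    # Convert assigned memory to GB and billed duration to seconds
    gb_seconds = (memory_mb / 1024) * (billed_duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A cold start of ~1000 ms at the minimum 128 MB ...
cold = invocation_cost(1000, 128)
# ... versus a warm invocation of ~200 ms at the same memory size
warm = invocation_cost(200, 128)
```

This also shows why assigning more memory is not automatically more expensive: if the extra memory makes the function finish proportionally faster, the GB-seconds (and thus the cost) can stay roughly the same.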
This difference is there because the first time the Lambda function is called, it has to be retrieved and put in memory. When this is done and the Lambda is executed, following events can use the same Lambda function. When the function isn't used for some time, it will be swapped out of memory. We will see more about this in a later blog about the testing of the shop example.

Play along

I scripted the solution [2]. You can follow along and create this solution in your own environment; see the previous blog [1] and the README.md file in the vagrant directory.

Links

[1]
[2] Link to github account: . For the example in this blog, look in the shop-1 directory.
https://technology.amis.nl/aws/aws-shop-example-lambda/
To be fair, there are uses for running a search that may "fail" and dealing with it tidily. You may have a large set of Items to handle, only some of which have some associated Item - a related device, say - so seeking the associated Item will sometimes fail.

"To be fair, there are uses for running a search that may 'fail' and dealing with it tidily."

I'm curious to see an example where other means could not be used to ensure that the Item exists. Using the ItemRegistry may be the easiest, but it comes with a performance hit, so it would be best to find another way. That said, there are other useful ItemRegistry methods. In Jython, I use this to check for the existence of an Item...

    if ir.getItems("Current_Timestamp") == []:

This is not used in a rule, but in a script that is creating Items if they do not already exist. This uses ItemRegistry.getItems() (note the 's'), which returns a collection of Items matching a regex, the regex being a literal string. The same could be done in a DSL rule using...

    if (ScriptServiceUtil.getItemRegistry.getItems("item_name_to_check_for_existence") == newArrayList) {

This could potentially be used as another workaround for checking the existence of an Item given an Item name as a string. For example...

    val test = ScriptServiceUtil.getItemRegistry.getItems("bad_item_name")
    if (test == newArrayList) {
        logInfo("Rules", "Bad item")
    } else {
        logInfo("Rules", "Test [{}]", test.get(0).name)
        test.get(0).sendCommand(ON)
    }

Oh, there's always other means. My point was just that "searching" for a non-existent Item name is not necessarily something terrible to be avoided, and that having methods to deal gracefully with the situation is useful. You've provided just such a method already, so thank you.

The kind of task I was thinking about would be something like a bunch of lights, where only some have some extra "property".
    LampA
    LampA_preferredColor
    LampB
    LampC
    LampC_preferredColor

Curious about this; it would be interesting to compare with the "traditional" searching/filtering of a Group by name strings. I imagined the underlying mechanism was much the same. A Group limits the scope of the search, so maybe it is of more benefit in a large OH system (many Items).

I agree. By performance hit, I mean unneeded processing. In your example where only some Items have associated Items, put them in a PreferredColor group and use that when it's time to change the color. I just do everything I can to make my rules run as fast and efficiently as they can. I had to point this out for others that are as obsessed as I am with the speed with which their lights turn on. We're talking milliseconds!

You are (all) very welcome! Now go use JSR223 for scripting your rules and you won't have to bother with ScriptServiceUtil any more!

In Rules DSL I'm kind of with you. I'm all in favor of adding lots of error checking to Rules. But I'm not seeing the use case. However, when you go to Rule Templates I can totally see where checking for the existence of an Item in the Rule would be more important. But I've no problem with using an exception in that case, as appears to be the case now.

An alternative is:

    if (ScriptServiceUtil.getItemRegistry.getItems("item_name_to_check_for_existence").isEmpty) {

That way you don't have to create a new empty array list to compare to. .size == 0 would also work.

How I solved it in the end can be found here; feel free to comment on it or even improve it.

Did you get something on the hunt? I'm having trouble getting a State from a DateTimeItem coming from the ScriptServiceUtil. In the rule I'll loop through 6 different calendar entries and try to get their states (DateTimeItems and StringItems):

    (...)
    var DateTime EmailTime = ScriptServiceUtil.getItemRegistry?.getItem("Calendar_GoogleFamily_Event"+i+"_EmailTime") as DateTimeItem
    var String Summary = ScriptServiceUtil.getItemRegistry?.getItem("Calendar_GoogleFamily_Event"+i+"_Summary") as StringItem
    (...)
    logInfo("Familienkalender", "Item from DateItem-Definition: [{}]", Calendar_GoogleFamily_Event1_EmailTime)
    logInfo("Familienkalender", "State from Item-Definition [{}]", Calendar_GoogleFamily_Event1_EmailTime.state)
    logInfo("Familienkalender", "State from String-Definition [{}]", Calendar_GoogleFamily_Event1_Summary.state)
    logInfo("Familienkalender", "Item from DateScriptServiceUtil: [{}]", EmailTime)
    logInfo("Familienkalender", "State from DateScriptServiceUtil: [{}]", EmailTime.state)
    logInfo("Familienkalender", "State from StringScriptServiceUtil: [{}]", Summary.state)

I'll then get in the logs:

    2020-01-10 08:26:00.050 [INFO ] [rthome.model.script.Familienkalender] - State from Item-Definition [2020-01-10T07:46:00.000+0100]
    2020-01-10 08:26:00.055 [INFO ] [rthome.model.script.Familienkalender] - State from String-Definition [dingsbums anrufen]
    2020-01-10 08:26:00.064 [ERROR] [ntime.internal.engine.ExecuteRuleJob] - Error during the execution of rule 'Binder Familienkalender': 'state' is not a member of 'org.joda.time.DateTime'; line 33, column 72, length 15
    2020-01-10 08:23:03.134 [ERROR] [ntime.internal.engine.ExecuteRuleJob] - Error during the execution of rule 'Binder Familienkalender': 'state' is not a member of 'java.lang.String'; line 33, column 74, length 13

The Items from the .items file and the ones generated via ScriptServiceUtil seem to have the same attributes - but it seems I can't access them... I'd also like to make comparisons with the "EmailTime" date and do the other stuff I can do with the .items-generated Items...
The solution is to use scripted automation, which is the future of automation for OH and metadata can be easily accessed using the core helper libraries. My plan is to make this functionality available through a Scripting API, so that the core helper libraries are no longer needed. If you’re interested, it’s getting easier to get them setup… There are some issues with your casting. In the DSL, it is best to be only as specific as you need to be. Try being less specific and let the DSL do the work for you… var EmailTime = ScriptServiceUtil.getItemRegistry.getItem("Calendar_GoogleFamily_Event"+i+"_EmailTime") var Summary = ScriptServiceUtil.getItemRegistry?.getItem("Calendar_GoogleFamily_Event"+i+"_Summary") That’s great! at least I now get: 2020-01-10 10:18:02.7 10:18:02.750 [INFO ] [rthome.model.script.Familienkalender] - State from Item-Definition [2020-01-10T07:46:00.000+0100] 2020-01-10 10:18:02.752 [INFO ] [rthome.model.script.Familienkalender] - State from String-Definition [dingsbums anrufen] 2020-01-10 10:18:02.754 10:18:02.755 [INFO ] [rthome.model.script.Familienkalender] - State from DateScriptServiceUtil: [2020-01-10T07:46:00.000+0100] 2020-01-10 10:18:02.757 [INFO ] [rthome.model.script.Familienkalender] - State from StringScriptServiceUtil: [dingsbums anrufen] Do you have an idea, how I can compare the corresponding DateTime? val jetzt = now.minusSeconds(20) (line 37:) if (jetzt.isBefore(new DateTime((EmailTime as DateTimeType).getZonedDateTime.toInstant.toEpochMilli))) { (...) 
leads to:

2020-01-10 10:21:02.852 [ERROR] [ntime.internal.engine.ExecuteRuleJob] - Error during the execution of rule 'Binder Familienkalender': Could not cast Calendar_GoogleFamily_Event1_EmailTime (Type=DateTimeItem, State=2020-01-10T07:46:00.000+0100, Label=FamKal Ev1 Email, Category=calendar, Groups=[gCalendar, gCalFamilyNotification]) to org.eclipse.smarthome.core.library.types.DateTimeType; line 37, column 36, length 25

It seems the “EmailTime” definition (without all those DateTime references) gets the right values, but I’m still lacking understanding of how to cast accordingly… (I’m no developer; casting and such is not in my blood.) The if-condition works with “real” items, as I use it elsewhere (also partly copied from the forum).

EmailTime is an Item. EmailTime.state is a DateTimeType. Your issue is that you are using EmailTime instead of EmailTime.state.

if (jetzt.isBefore(new DateTime((EmailTime as DateTimeType).getZonedDateTime.toInstant.toEpochMilli))) {

This should be…

if (jetzt.isBefore(new DateTime((EmailTime.state as DateTimeType).getZonedDateTime.toInstant.toEpochMilli))) {

This is much simpler as…

if (jetzt.isBefore(new DateTime(EmailTime.state.toString))) {

Personally, I’d stop using Joda (removed in OH 3.0) and use ZonedDateTime…

val jetzt = new DateTimeType().zonedDateTime.minusSeconds(20)
if (jetzt.isBefore((EmailTime.state as DateTimeType).zonedDateTime)) {

Thank you very much. Now it’s working as intended! I’ll have to get my head around all that time-stuff and casting and whatnot… I’d like to beta-test the Jython-thingy also, but first must move my OH2.5 to a new Raspberry Pi 4…

Can you use a ZonedDateTime in the call to createTimer?

In OH 2.5, createTimer uses org.joda.time.AbstractInstant, so to use a ZonedDateTime…

test_timer = createTimer(new DateTime(new DateTimeType().zonedDateTime.plusSeconds(5).toString()))[|

Although, I bet you were asking if a ZDT could be fed to createTimer without using Joda, and the answer to that is no.
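Under the hood, `zonedDateTime` on a DateTimeType is a plain `java.time.ZonedDateTime`, so the comparison in the rule maps directly onto standard java.time semantics. A standalone Java sketch of the same check (the event time here is a made-up value, not from the thread):

```java
import java.time.ZonedDateTime;

public class Main {
    public static void main(String[] args) {
        // "jetzt" shifted 20 seconds into the past, as in the rule
        ZonedDateTime jetzt = ZonedDateTime.now().minusSeconds(20);

        // A hypothetical event time 10 minutes in the future,
        // standing in for (EmailTime.state as DateTimeType).zonedDateTime
        ZonedDateTime emailTime = ZonedDateTime.now().plusMinutes(10);

        // isBefore() is exactly the comparison the rule performs
        System.out.println(jetzt.isBefore(emailTime)); // true
    }
}
```

Because both sides are ZonedDateTime, no Joda conversion or epoch-millisecond round trip is needed.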
The createTimer Action is part of the old rule engine, which will be removed in OH 3.0. Something similar will need to be created for the new rule engine, along with the other Core Actions. These will be made accessible for use within scripted automation through the scripting API. The mechanism will likely be a ScriptExtension, possibly through a default preset.

My question was kind of self-serving. Does it make sense to push people to move to ZonedDateTime before exceptionally commonly used Actions like createTimer can accept it? When I look at my use of Joda DateTime in all of my rules, about 80% of the uses are related to calls to createTimer. Thus, it doesn’t make sense for me to migrate until I can use a ZonedDateTime in createTimer. It would be more work for me to switch now and then again when the replacement for createTimer comes along. I imagine I’m not unique in that.

Thanks to this post and a little more digging, I’ve got this rule working. In short, it keeps an eye on the gMinute and gHour groups, so that minutes and hours are constrained, but also so that when the minutes exceed the limits, it adjusts the corresponding hour. I’ve left the logInfo lines in so you can see my working out.
import org.eclipse.smarthome.model.script.ScriptServiceUtil

rule "Confine Minute"
when
    Member of gMinute changed
then
    logInfo("Minute Limiting", "Minute Value of "+triggeringItem.name+" Currently "+triggeringItem.state)
    if (triggeringItem.state > 59) {
        sendCommand(triggeringItem.name,"0")
        // logInfo("Alarm Times",triggeringItem.name.split('Min').get(0)+"Hour")
        val HourName = triggeringItem.name.split('Min').get(0)+"Hour"
        val HourValue = ScriptServiceUtil.getItemRegistry.getItem(HourName).state as Number
        val NewHour = 1 + HourValue
        logInfo("Alarm Times","New Hour value is " + NewHour)
        sendCommand(HourName, NewHour.toString)
    }
    if (triggeringItem.state < 0) {
        sendCommand(triggeringItem.name,"45")
        val HourName = triggeringItem.name.split('Min').get(0)+"Hour"
        val HourValue = ScriptServiceUtil.getItemRegistry.getItem(HourName).state as Number
        val NewHour = HourValue - 1
        logInfo("Alarm Times","New Hour value is " + NewHour)
        sendCommand(HourName, NewHour.toString)
    }
end

rule "Confine Hour"
when
    Member of gHour changed
then
    logInfo("Hour Limiting", "Hour Value of "+triggeringItem.name+" Currently "+triggeringItem.state)
    if (triggeringItem.state > 23) {
        postUpdate(triggeringItem.name,"0")
        // logInfo("Alarm Times",triggeringItem.name.split('Hou').get(0))
    }
    if (triggeringItem.state < 0) {
        postUpdate(triggeringItem.name,"23")
        // logInfo("Alarm Times",triggeringItem.name.split('Hou').get(0))
    }
end

For anyone who stumbles on this with OH3, replace the import with org.openhab.core.model.script.ScriptServiceUtil and this works again.

Thank you… OP updated!

How do you add the import org.openhab.core.model.script.ScriptServiceUtil instruction in OH3, when the Rule script is not in a file, but created using Admin > Settings > Rules > Execute a given script? From Settings > Rules > Code I have:

triggers:
  - id: "5"
    configuration:
      cronExpression: 0 0 0/1 * * ? *
    type: timer.GenericCronTrigger
conditions: []
actions:
  - inputs: {}
    id: "2"
    configuration:
      type: application/vnd.openhab.dsl.rule
      script: " [lines deleted ] "
    type: script.ScriptAction

Can I add the import ... in the above?

PS: adding the import line at the top of the script gives me this error:

The method or field import is undefined; line 1, column 0, length 6

Did you find the right solution for import in GUI-based rules?
You cannot import in UI Rules DSL Script Actions and Script Conditions. You have to use the full name of the class you want to use. So instead of

import java.util.Map
val Map myMap = createHashMap

you need to use the full class name everywhere you use the class you would have imported:

val java.util.Map myMap = createHashMap
https://community.openhab.org/t/rules-dsl-get-item-from-string-name/48279/27
Creating a new project

This page is a guide to many of the beginning (and some intermediate) features of the creation and modification of a Code::Blocks project. If this is your first experience with Code::Blocks, here is a good starting point.

Contents

The project wizard

Launch the Project Wizard through File->New->Project... to start a new project. Here there are many pre-configured templates for various types of projects, including the option to create custom templates. Select Console application, as this is the most common for general purposes, and click Go. Note: red text instead of black text below any of the icons signifies it is using a customized wizard script.

The console application wizard will appear next. Continue through the menus, selecting C++ when prompted for a language. In the next screen, give the project a name and type or select a destination folder. As seen below, Code::Blocks will generate the remaining entries from these two. Finally, the wizard will ask if this project should use the default compiler (normally GCC) and the two default builds: Debug and Release. All of these settings are fine. Press Finish and the project will be generated.

The main window will turn gray, but that is not a problem; the source file needs only to be opened. In the Projects tab of the Management pane on the left, expand the folders and double click on the source file main.cpp to open it in the editor. This file contains the following standard code.

main.cpp

#include <iostream>

using namespace std;

int main()
{
    cout << "Hello world!" << endl;
    return 0;
}

Adding a new file

The next step is to move the statement cout << "Hello world!" << endl; into a separate file. Note: it is generally improper programming style to create a function this small; it is done here to give a simple example. When the new file is created, a wizard will prompt for its addition to (or exclusion from) the appropriate build target(s). In this example, however, the hello function is of key importance, and is required in each target, so select all the boxes and click Finish to generate the file.
The newly created file should open automatically; if it does not, open it by double clicking on its file in the Projects tab of the Management panel. Now add in the code for the function main.cpp will call.

hello.cpp

#include <iostream>

using namespace std;

void hello()
{
    cout << "Hello world!" << endl;
}

Adding a pre-existing file

Now that the hello() function is in a separate file, the function must be declared for main.cpp to use it. Launch a plain text editor (for example Notepad or Gedit), and add the following code.

hello.h

#ifndef HELLO_H_INCLUDED
#define HELLO_H_INCLUDED

void hello();

#endif // HELLO_H_INCLUDED

Save this file as a header (hello.h) in the same directory as the other source files in this project. Back in Code::Blocks, click Project->Add files... to open a file browser. Here you may select one or multiple files (using combinations of Ctrl and Shift). (The option Project->Add files recursively... will search through all the subdirectories in the given folder, selecting the relevant files for inclusion.) Select hello.h, and click Open to bring up a dialog requesting to which build targets the file(s) should belong. For this example, select both targets. Note: if the current project has only one build target, this dialog will be skipped.

Returning to the main source (main.cpp), include the header file and replace the cout statement to match the new setup of the project.

main.cpp

#include "hello.h"

int main()
{
    hello();
    return 0;
}

Press Ctrl-F9, Build->Build, or Compiler Toolbar->Build (the gear button) to compile the project. If the following output is generated in the build log (in the bottom panel), then all steps were followed correctly.
-------------- Build: Debug in HelloWorld ---------------

Compiling: main.cpp
Compiling: hello.cpp
Linking console executable: bin\Debug\HelloWorld.exe
Output size is 923.25 KB
Process terminated with status 0 (0 minutes, 0 seconds)
0 errors, 0 warnings (0 minutes, 0 seconds)

The executable may now be run by either clicking the Run button or hitting Ctrl-F10. Note: the option F9 (for build and run) combines these commands, and may be more useful in some situations. See the build process of Code::Blocks for what occurs behind the scenes during a compile.

Removing a file

Using the above steps, add a new C++ source file, useless.cpp, to the project. Removing this unneeded file from the project is straightforward. Simply right-click on useless.cpp in the Projects tab of the Management pane and select Remove file from project. Note: removing a file from a project does not physically delete it; Code::Blocks only removes it from the project management.

Modifying build options

Build targets have come up several times so far. Changing between the two default generated ones - Debug and Release - can simply be done through the drop-down list on the Compiler Toolbar. Each of these targets has the ability to be a different type (for example: static library; console application), contain a different set of source files, custom variables, different build flags (for example: debug symbols -g; size optimization -Os; link time optimization -flto), and several other options.

Open Project->Properties... to access the main properties of the active project, HelloWorld. Most of the settings on the first tab, Project settings, are rarely changed. Title: allows the name of the project to be changed. If Platforms: is changed to something other than its default All, Code::Blocks will only allow the project to build on the selected targets.
This is useful if, for example, the source code contains Windows API calls, and would therefore be invalid anywhere but Windows (or in any other operating-system-specific situation). The Makefile: options are used only if the project should use a makefile instead of Code::Blocks' internal build system (see Code::Blocks and Makefiles for further details).

Adding a new build target

Switch to the Build targets tab. Click Add to create a new build target and name it Release Small. The highlight in the left hand column should automatically switch to the new target (if not, click on it to change the focus). As the default setting for Type: - "GUI application" - is incorrect for the HelloWorld program, change it to "Console application" via the drop-down list. The output filename HelloWorld.exe is fine except that it will cause the executable to be output in the main directory. Add the path "bin\ReleaseSmall\" (Windows) or "bin/ReleaseSmall/" (Linux) in front of it to change the directory (it is a relative path from the root of the project). The Execution working dir: refers to where the program will be executed when Run or Build and run are selected. The default setting "." is fine (it refers to the project's directory). The Objects output dir: needs to be changed to "obj\ReleaseSmall\" (Windows) or "obj/ReleaseSmall/" (Linux) in order to be consistent with the remainder of the project. The Build target files: currently has nothing selected. This is a problem, as nothing will be compiled if this target is built. Check all the boxes.

The next step is to change the target's settings. Click Build options... to access the settings. The first tab that comes up has a series of compiler flags accessible through check boxes. Select "Strip all symbols from binary" and "Optimize generated code for size". The flags here contain many of the more common options; however, custom arguments may be passed. Switch to the Other options sub-tab and add the following switches.
-fno-rtti -fno-exceptions -ffunction-sections -fdata-sections -flto

Now switch to the Linker settings tab. The Link libraries: box provides a spot to add various libraries (for example, wxmsw28u for the Windows Unicode version of the wxWidgets monolithic dll). This program does not require any such libraries. The custom switches from the previous step require their link-time counterparts. Add

-flto -Os -Wl,--gc-sections -shared-libgcc -shared-libstdc++

to the Other linker options: tab. (For further details on what these switches do, see the GCC documentation on optimization options and linker options.)

Virtual Targets

Click OK to accept these changes and return to the previous dialog. Now that there are two release builds, it will take two separate runs of Build or Build and run to compile both. Fortunately, Code::Blocks provides the option to chain multiple builds together. Click Virtual targets..., then Add. Name the virtual target Releases and click OK. In the right-hand Build targets contained box, select both Release and Release Small. Close out of this box and hit OK on the main window. The virtual target Releases will now be available from the Compiler Toolbar; building this should result in the following output.

-------------- Build: Release in HelloWorld ---------------

Compiling: main.cpp
Compiling: hello.cpp
Linking console executable: bin\Release\HelloWorld.exe
Output size is 457.50 KB

-------------- Build: Release Small in HelloWorld ---------------

Compiling: main.cpp
Compiling: hello.cpp
Linking console executable: bin\ReleaseSmall\HelloWorld.exe
Output size is 8.00 KB

Process terminated with status 0 (0 minutes, 1 seconds)
0 errors, 0 warnings (0 minutes, 1 seconds)
http://wiki.codeblocks.org/index.php?title=Creating_a_new_project
Beginning Visual CPP 2005 Express Part 3 - Online Article

Creating a CLR Console Application

To create a new project:

Step 1: On the File menu, point to New, and then click Project....
Step 2: In the Project Types area, click CLR, then in the Visual Studio installed templates pane, click CLR Console Application. Type the project name as CLRConsoleapp.
Step 3: Add source code in CLRConsoleapp.cpp within the main() module:

Console::WriteLine(L"Welcome to VC++ 2005 Express Edition");

To build the application, click Build->Build Solution. The Output window will show the compiled solution.

To run the application, click Debug->Start Without Debugging. The console application window now appears.

Creating a Win32 Console Application

To create a new project:

1. On the File menu, point to New, and then click Project....
2. In the Project Types area, click Win32, then in the Visual Studio installed templates pane, click Win32 Console Application.
3. Type the name consoleapp.
4. Select the Empty Project setting and click Finish.

To add a source file:

1. Right-click on the Source Files folder in Solution Explorer, point to Add, and then click New Item. In the Visual C++ Code category, click C++ File (.cpp) in the Visual Studio installed templates area. Name the file Welcome. Click Add.
2. A blank code window appears. Type the following code in the code window:

#include <iostream>
using namespace std;

int main()
{
    char name[64];
    cout << "please enter your name";
    cin >> name;
    cout << endl << endl;
    cout << name << " says, 'Hello! Welcome to VC++ ' " << endl << endl;
    return(0);
}

To build the application, click Build->Build Solution; the Output window will show the compiled solution.

To debug the application, click Debug->Start Without Debugging; the console application appears.

To run:

1. Type a name, let us say "SARA", and press Enter.
2. The following output is generated:

SARA says, 'Hello! Welcome to VC++ '
http://www.getgyan.com/show/160/Beginning_Visual_CPP_2005_Express_Part_3
The module clnum adds arbitrary precision floating point and rational numbers to Python. Both real and complex types are supported. The module also contains arbitrary precision replacements for the functions in the standard library math and cmath modules. The clnum module uses the Class Library for Numbers (CLN) to do all of the hard work. The module simply provides a proper type interface so that the CLN numbers work with the standard Python arithmetic operators and interact properly with the built-in Python numeric types.

To install the clnum module, you need to have some other components installed on your system:

- The GNU g++ compiler and standard library installed so that you can build C++ programs.
- The Python header files and the distutils package.
- CLN library and development files.

On a Debian system, these are satisfied by installing the following packages: g++, libcln-dev, and python-dev. Other Linux systems will have similar packages. If your system does not have a CLN package, go to and download the source. Then follow the instructions for building CLN.

Once you have the required components installed, installing the clnum module is simple. You need to download rpncalc from and unpack it. Go to the directory that was created and run the following command as root.

python clnum_setup.py install --prefix=/usr/local

Verify the installation by running the unit test (test_clnum.py). If you want information on customizing the install process, see the section titled "Installing Python Modules" in the standard Python documentation.

A binary installer is now available for Windows thanks to the contribution of Frank Palazzolo. So all you need to get this module installed on Windows is to download and run the installer. If you want to build the source on Windows, see the Windows build instructions.

The clnum package adds four numeric types to the standard set available in Python. The mpf type is an arbitrary precision floating point type.
The mpq type is a rational type. The types cmpf and cmpq are complex versions of these types.

There are two major categories of numbers: those that are exact, and those that are approximate. The exact numbers are from the set int, long, mpq, and cmpq. The approximate numbers are from the set float, complex, mpf, and cmpf. Whenever a calculation is performed with high-precision numbers, you want the result to reflect the precision used in all of the intermediate calculations. So if the result is exact, you want all of the intermediate calculations to be exact. This means that there can be no implicit conversions of approximate numbers to exact numbers. As a consequence, whenever the mpq and cmpq types are used in an expression, they can only be coerced to an approximate type, not the other way around. Since the float and complex types do not recognize mpq and cmpq, an exception is raised whenever arithmetic between these types is attempted.

If you perform mixed arithmetic on two approximate types, the precision of the result always decays to the precision of the value with the lowest precision. There is one exception to this rule. Python float and complex types use the C type double to represent the values. While the CLN library has types that encapsulate doubles, they can lead to non-recoverable aborts when mixed with the extended precision type. To avoid this problem, doubles are converted to an extended precision type with the lowest possible precision (approximately 17 decimal digits).

The mpf and cmpf types have the following attribute. The complex types cmpq and cmpf have the following attributes and method. They behave the same as the corresponding attributes and method of the built-in complex type. In addition, the cmpf type has the following attribute. The mpq type has the following attributes.

Note that these functions can operate on both real and complex numbers. If the inputs are real, they work like the functions in the math module.
If the inputs are complex, they work like the functions in the cmath module. Since there is no longer a fixed form for the constants e and pi, they become the functions exp1 and pi in the clnum module. These functions take the number of decimal digits as the parameter and return the constant with at least the requested precision. The parameter is optional and if not provided, the default precision is used.

m and n must be small integers >= 0. This function returns the binomial coefficient (m choose n) = m! / (n! (m-n)!) for 0 <= n <= m, 0 otherwise.

Returns cos(x)+1j*sin(x).

n must be a small integer >= 0. This function returns the double factorial n!! = 1*3*...*n or n!! = 2*4*...*n, respectively.

n must be a small integer >= 0. This function returns the factorial n! = 1*2*...*n.

All of the examples assume the following import.

from clnum import *

>>> x = mpq(1,3); y = mpq(1,5); z = cmpq(mpq(2,3),mpq(3,5))
>>> print x+y
8/15
>>> print x-y
2/15
>>> print x*y
1/15
>>> print x/y
5/3
>>> print x+1 # Mix with integers.
4/3
>>> print x+z
(1+3/5j)
>>> print z**3
(-286/675+73/125j)
>>> print abs(cmpf(z)), degrees(cmpf(z).phase) # Polar form of complex number
0.89690826980491402114 41.987212495816660054
>>> print hypot(z.real, z.imag) # Another way to compute abs(z)
0.89690826980491402114
>>> print get_default_precision()
26
>>> print repr(pi(40))
mpf('3.141592653589793238462643383279502884197169399376',46)
>>> print repr(exp1(40))
mpf('2.7182818284590452353602874713526624977572470937',46)
>>> def f(x): return x*x-2
>>> r = find_root(f, 1, 2, 1e-20)
>>> print r
1.4142135623730950488
>>> print abs(r - sqrt(2))
1.5692147344401202195e-24
>>> print ratapx(pi())
355/113
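For readers without CLN installed, the exact-rational behavior shown in the mpq examples above can be mirrored with the standard library's fractions module (this is an analogy, not part of clnum):

```python
from fractions import Fraction

# Mirror the mpq examples: exact rational arithmetic, no rounding
x = Fraction(1, 3)
y = Fraction(1, 5)

print(x + y)  # 8/15, matching mpq(1,3) + mpq(1,5)
print(x - y)  # 2/15
print(x / y)  # 5/3
print(x + 1)  # 4/3 -- mixing with integers stays exact
```

As with mpq, mixing a Fraction with a float silently decays the result to an approximate (float) value, which is exactly the exact-vs-approximate coercion rule the module documentation describes.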
http://calcrpnpy.sourceforge.net/clnum.html
Storage Classes in C++ Programming

The storage class of a variable defines the lifetime and visibility of that variable. Lifetime means the duration for which the variable remains active, and visibility defines in which modules of the program the variable is accessible. There are five types of storage classes in C++. They are:

- Automatic
- External
- Static
- Register
- Mutable

1. Automatic Storage Class

The automatic storage class assigns a variable to its default storage type. The auto keyword is used to declare automatic variables. However, if a variable is declared without any keyword inside a function, it is automatic by default. The variable is visible only within the function where it is declared, and its lifetime is the same as the lifetime of the function. Once the execution of the function is finished, the variable is destroyed.

Syntax of Automatic Storage Class Declaration

datatype var_name1 [= value];
or
auto datatype var_name1 [= value];

Example of Automatic Storage Class

auto int x;
float y = 5.67;

2. External Storage Class

The external storage class makes a variable a reference to a global variable declared outside the given program unit. The extern keyword is used to declare external variables. They are visible throughout the program, and their lifetime is the same as the lifetime of the program where they are declared. They are visible to all the functions present in the program.

Syntax of External Storage Class Declaration

extern datatype var_name1;

For example,

extern float var1;

Example of External Storage Class

Example 1: C++ program to create and use external storage.

File: sub.cpp

int test=100; // assigning value to test

void multiply(int n)
{
    test=test*n;
}

File: main.cpp

#include<iostream>
#include "sub.cpp" // includes the content of sub.cpp
using namespace std;

extern int test; // declaring test

int main()
{
    cout<<test<<endl;
    multiply(5);
    cout<<test<<endl;
    return 0;
}

A variable test is declared as external in main.cpp. It is a global variable, and it is assigned 100 in sub.cpp.
It can be accessed in both files. The function multiply() multiplies the value of test by the parameter passed to it when invoked. The program performs the multiplication and changes the global variable test to 500. Note: run the main.cpp program.

Output

100
500

3. Static Storage Class

The static storage class ensures a variable has the visibility of a local variable but the lifetime of an external variable. It can be used only within the function where it is declared, but it is destroyed only after the program execution has finished. When a function is called, a variable defined as static inside the function retains its previous value and operates on it. This is mostly used to save values in a recursive function.

Syntax of Static Storage Class Declaration

static datatype var_name1 [= value];

For example,

static int x = 101;
static float sum;

4. Register Storage Class

The register storage class requests that a variable be stored in a CPU register for fast access. The register keyword is used to declare register variables; their visibility and lifetime are the same as those of automatic variables.
https://www.programtopia.net/cplusplus/docs/storage-classes
import com.sleepycat.db.*;

public int get(DbTxn txnid, Dbt key, Dbt data, int flags) throws DbException;
public int pget(DbTxn txnid, Dbt key, Dbt pkey, Dbt data, int flags) throws DbException;

The Db.get method retrieves key/data pairs from the database. The byte array interface will always fail and return EINVAL. If the operation is to be transaction-protected, the txnid parameter is a transaction handle returned from DbEnv.txn_begin; otherwise, null. It is an error to use the Db.DB_GET_BOTH flag with the Db.get version of this interface and a secondary index handle.

The data field of the specified key must be a byte array large enough to hold a logical record number (that is, an int). This record number determines the record to be retrieved.

The Db.DB_MULTIPLE flag may only be used alone, or with the Db.DB_GET_BOTH and Db.DB_SET_RECNO options. The Db.DB_MULTIPLE flag may not be used when accessing databases made into secondary indices using the Db.associate method. See DbMultipleDataIterator for more information.

Because the Db.get interface will not hold locks across Berkeley DB interface calls in non-transactional environments, the Db.DB_RMW flag to the Db.get call is meaningful only in the presence of transactions.

If the database is a Queue or Recno database and the specified key exists, but was never explicitly created by the application or was later deleted, the Db.get method returns Db.DB_KEYEMPTY. Otherwise, if the specified key is not in the database, the Db.get method returns Db.DB_NOTFOUND. Otherwise, the Db.get method throws an exception that encapsulates a non-zero error value on failure.

The Db.get method may fail and throw an exception encapsulating a non-zero error for the following conditions:

- A record number of 0 was specified.
- The Db.DB_THREAD flag was specified to the Db.open method and none of the Db.DB_DBT_MALLOC, Db.DB_DBT_REALLOC or Db.DB_DBT_USERMEM flags were set in the Dbt.
- The Db.pget interface was called with a Db handle that does not refer to a secondary index.
If the operation was selected to resolve a deadlock, the Db.get method will fail and throw a DbDeadlockException exception. If the requested item could not be returned due to insufficient memory, the Db.get method will fail and throw a DbMemoryException exception. The Db.get method may fail and throw an exception for errors specified for other Berkeley DB and C library or system methods. If a catastrophic error has occurred, the Db.get method may fail and throw a DbRunRecoveryException, in which case all subsequent Berkeley DB calls will fail in the same way.
http://doc.gnu-darwin.org/api_java/db_get.html
Project Metamorphosis: Unveiling the next-gen event streaming platform. Learn More When building API-driven web applications, there is one key metric that engineering teams should minimize: the blocked factor. The blocked factor measures how much time developers spend in the following situations: Ever spend a week unable to work on a project because the backend REST endpoints aren’t ready yet? That goes into the blocked factor. What if you only spend a day bootstrapping your local environment by hard-coding responses in a mock server? Well, that counts too. When working with upstream dependencies, it’s not realistic to completely avoid such situations. Multi-team software engineering can get complicated, and there are many factors out of our control. Still, we should do our best to keep the blocked factor to a minimum, because one thing is certain: wasting time is bad. At Confluent, we created Mox to help us mitigate the blocked factor. Mox is a lightweight combination of a proxy server and a mocking framework that you can use to proxy your endpoints that work and mock the ones that don’t. To understand how it works and whether it applies to your engineering workflow, we’ll deep dive into how UI teams typically work with external APIs. In software design, the flexibility-usability tradeoff states that the easier something is to use, the less flexible it is. This means that systems with broad applicability tend to be more difficult or complicated to use than systems that focus on just one function. A Swiss Army knife is flexible since it serves many uses, but is more difficult to use than a simpler tool like a screwdriver. Modern web frameworks like React or AngularJS are great examples of this principle. Compared to using pure JavaScript, these frameworks are opinionated about how developers should interact with them. This makes them less flexible than arbitrary JavaScript, but also far more usable for the right use cases. 
Engineering teams face this trade-off when deciding how to integrate external APIs into their development workflow. For the most part, there are two common approaches, each on one end of the flexibility-usability scale. A flexible API mock approach requires a framework for simulating a backend API, which involves specifying endpoint routes (e.g., GET /api/status) as well as response behaviors (e.g., response payload and status code). Mock frameworks typically come with a server implementation that allows them to easily run in a local environment. Here are some advantages to using API mocks in your development workflow: You may notice that many of the advantages of mocking API dependencies are related to flexibility. Mocking offers valuable agility when it comes to working with external dependencies since it places their behavior in your control. However, with great power comes great responsibility. Here are some disadvantages to consider when mocking your APIs: How do mocks affect our blocked factor? Mock servers give us an outlet to convert passive time spent being blocked into active time spent unblocking ourselves. This is usually a good trade because API mocks generally don’t take much time to write compared to how long we might have to wait to get working backend endpoints. The downside is that API mocks can require additional, ongoing effort to maintain even long after their associated feature is released. In lieu of mocking your external dependencies, you can use them outright. Cloud-based engineering organizations usually have the concept of dev, staging, and production environments. Here, using a live backend usually involves pointing a local frontend application at the hosted dev environment. Using an external backend is pretty straightforward. All you have to do is plug it into your local build. It may require work to integrate your codebase with a real API, but that isn’t exactly a disadvantage. 
You would need to do it regardless of your development workflow. However, there is a massive flexibility downside to this approach. Completely relying on a hosted backend for development can be problematic since your dev environment is now externally dependent.

As far as our time spent blocked is concerned, the costs are pretty simple. We don't have the recurring maintenance costs of the mock server workflow, but we spend time either being blocked or unblocking ourselves whenever a backend service is unavailable.

There are definite trade-offs between the two workflows. Is there a path down the middle where we get both flexibility and maintainability? This was a question that our team needed to answer at Confluent.

When we began work on the Confluent Cloud UI in late 2017, we went with a mock server for feature development. At this time, our engineering organization was figuring out how to improve stability and release speed, which meant we couldn't always rely on having a stable or up-to-date development environment. Because of this, our mock server approach was necessary and effective at first.

Soon though, we shifted from using our API mocks to building against our dev environment. By early 2019, our stability and release cadence had greatly improved. There was little incentive to maintain our mock server on a day-to-day basis, so it gradually became outdated. This was a problem when it came time to work on large features or complicated API version upgrades. Each time, we would roll up our sleeves and perform the tedious task of updating the mock server. Frequently, this involved maintaining mocks for APIs that weren't at all related to the project in question. Once updated, the mock server would again start accumulating dust until the next cleaning was required. As our application expanded from a few endpoints to a few dozen, the cost of maintaining the mocks became prohibitive.
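The mock-server idea described in this post — register a route, return a canned payload — can be sketched with nothing but the standard library. This is an illustrative sketch, not Mox or any framework the post mentions; the routes and payloads are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Canned responses keyed by (method, path): the essence of an API mock server.
MOCKS = {
    ("GET", "/api/status"): (200, {"status": "ok"}),
    ("GET", "/api/feature"): (200, {"foo": "bar"}),
}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the registered mock, or a 404 for unknown routes.
        status, payload = MOCKS.get(("GET", self.path), (404, {"error": "no mock"}))
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port, so the sketch runs anywhere.
server = ThreadingHTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/status"
with urllib.request.urlopen(url) as resp:
    print(json.loads(resp.read()))  # {'status': 'ok'}
```

A real mock framework adds pattern matching, non-GET methods, latency injection, and stateful responses on top of this core lookup, but the shape is the same.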
It was simply too time consuming to keep the mock server up to date, at all times, across API versions. Our team's velocity was being dragged down by the overhead cost of our never-ending game of mock server hot potato.

To address this, we formed an intermediate strategy that uses both mock APIs and our dev backend. We lean on our live API for stable features while leveraging the flexibility of mocks as needed. Our local frontend sends most of its API requests to our cloud but is served mocks as needed: most API requests go to the backend, while others are served mocks.

The concept of extending real behavior with mocked behavior is probably already familiar. It's a common theme in software engineering, and it's the same principle behind hardcoding a return statement somewhere in your code and letting the program run to see what happens. For my team, this strategy means that we don't need to write or update mocks unless they are directly involved in the feature being developed. We are empowered to unblock ourselves against external API dependencies and only spend time doing so when truly necessary.

So, this is our high-level strategy, but it turns out we have a number of logistical questions to answer. How do we actually get this to work in practice? There are a number of ways to support a dual mock and backend workflow, and the right approach for each team depends on its development process.

Software engineers often joke about "not invented here" syndrome. In spite of this, we ended up building our own solution at Confluent. We felt the existing tools either weren't flexible enough to fit our workflow or weren't usable enough out of the box. After some experimentation and iteration, we created Mox. Mox is a lightweight request interception library that wraps around Express.
It acts like a proxy server that can be extended with mocked API endpoints. By default, Mox proxies incoming API requests to your real backend. However, you can specify endpoints to mock or modify.

Mox combines the pattern-based routing of Express with a chainable interface that covers common developer use cases. By sitting between your local frontend and your backend service, Mox allows you to specify what requests pass through, where they are passed to, and how they are modified or mocked: Mox inspects requests and can reroute them or modify their responses.

Here's a look at what it takes to configure the server, perform a simple mock, and get it running:

    import { MoxServer } from '@confluentinc/mox';

    const server = new MoxServer({ targetUrl: '', listenPort: 3005 });
    const router = server.getRouter();

    router.get('/api/route-to-mock').mock({ foo: 'bar' });

    router.start();

In this example, any GET request matching /api/route-to-mock will receive a response with the supplied mock value, and all other requests will get proxied through to the target URL. Mocks can be defined inline as shown above, but if you already have a mock server available, you can tell Mox to redirect requests there:

    router.all('/api/new-feature/*').setBase(''); // url of mock server

Mox's most obvious use case is unblocking new feature development. This was our main goal when we started working with it as well. As time went on though, we discovered that Mox's ability to intercept network requests had a number of other applications.

When investigating a bug, you sometimes have access to an API response that triggers it. In these situations, Mox can consistently reproduce that behavior:

    const response = { foo: 'bar' }; /* or whatever you copy-pasted */
    router.get('/api/feature').mock(response);

Sometimes, you just want to change a part of a response payload without having to mock the entire thing.
This can happen when a queried resource contains fields that might be difficult to manipulate organically. Mox can change the response payload in flight:

    router.get('/api/metrics/:id').mutate(resp => {
      resp.data_throughput = 4.2e10;
      return resp;
    });

This is especially helpful when working with APIs that contain dense or complex payloads but only require a small part of the response behavior to be modified. Instead of recreating the entire payload, just change the part that matters!

Development, staging, and production environments can vary in their timing characteristics. With Mox, you can simulate lag for different endpoints and see how the UI responds:

    router.all('/api/slow-service/*').delay(1000);

It's sometimes useful to load test your UI to see how well it handles large responses. For example, you might have an API that returns a list of resources that you want to render, and you want to see how the UI responds to a large list. It's easy to manipulate that API with Mox:

    router.get('/api/items').mutate(itemArray => {
      return [...itemArray, ...Array(100).fill(itemArray[0])];
    });

For more examples and full documentation, you can visit GitHub.

Let's revisit our key metric from before: the blocked factor, the time spent blocked on external dependencies or working around them. How does setting up our workflow with a hybrid approach help us avoid this? To start with, we write mock APIs to unblock ourselves when we don't have real ones available. Once a service goes live, we don't need the mocks anymore, and thus we don't spend time maintaining them. Of course, backend interfaces can still change, requiring fixes to the frontend codebase. Doesn't this mean we need to maintain our API mocks anyway? The answer is sometimes, but not always. The hybrid mocking strategy doesn't fix everything, but overall, it cuts down a lot of time spent working on or maintaining mock APIs.
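The heart of the hybrid workflow described above is a single decision: check each incoming request against a table of mock patterns, and fall through to the real backend when nothing matches. Here is a behavioral sketch of that dispatch logic in Python — not Mox's actual implementation (Mox is a Node/Express library), and the backend URL and route patterns are invented for the example:

```python
import fnmatch

class HybridRouter:
    """Route each (method, path) to a registered mock, or mark it
    for proxying to the real backend when no pattern matches."""

    def __init__(self, backend_url):
        self.backend_url = backend_url
        self.mocks = []  # list of (method, glob pattern, canned response)

    def mock(self, method, pattern, response):
        # Register a canned response for requests matching this pattern.
        # method="ALL" matches any HTTP method.
        self.mocks.append((method.upper(), pattern, response))

    def dispatch(self, method, path):
        # First matching mock wins; everything else is proxied through.
        for m, pattern, response in self.mocks:
            if m in (method.upper(), "ALL") and fnmatch.fnmatch(path, pattern):
                return ("mock", response)
        return ("proxy", self.backend_url + path)

router = HybridRouter("https://dev.example.internal")
router.mock("GET", "/api/new-feature/*", {"enabled": True})

print(router.dispatch("GET", "/api/new-feature/flags"))
# ('mock', {'enabled': True})
print(router.dispatch("GET", "/api/items"))
# ('proxy', 'https://dev.example.internal/api/items')
```

Only the routes under active development carry mocks; everything else rides the live backend, which is exactly why the mock table stays small and cheap to maintain.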
There are many ways to implement the hybrid approach, but the best option depends on your team's workflow and situation. With the right development strategy, you can have control over where and how you use mock APIs within your workflow. The flexibility-usability tradeoff doesn't imply that you must stick to one extreme or the other; it just means that you don't get both sides for free. With the right tooling though, you get to choose where to spend effort and where not to, for every API that you depend on.

If you think Mox might be of help in your workflow, try it out! It's easy to set up hybrid API proxying, and it comes with other powerful abilities including request interception.

Alex Liu is a full stack software engineer at Confluent. He has spent the last few years building data-focused enterprise products. He has a special interest in developer expressibility and is a firm believer that good implementation follows good design.
https://www.confluent.io/blog/choosing-between-mock-api-and-real-backend/
Farid Hajji <farid.hajji@ob.kamp.net> writes:
> > > > just wanted to tell you, that the temperature of my cpu is raising to
> > > > very high values, while having the hurd system running.
> > > >
> > > > If I am working with linux, the temperature of the cpu is somewhere
> > > > below 60°C (under heavy load), but if I boot the hurd thing and don't do
> > > > anything, the temperature raises to higher 60's and the bios temperature
> > > > warning starts howling.
> > >
> > > This is intended behaviour: GNU/Hurd is a *hot* OS :-)
> > >
> > > > Did anyone else recognize this problem?
> > >
> > > Maybe GNUmach should idle/hlt the CPU when all tasks/threads
> > > are idle?
> >
> > This was sort of brought up a while ago with respect to a laptop.
> > Mach actively sends idle instructions to the cpu, so the cpu is always
> > at full usage, this may be causing the heat up.
>
> Exactly. On x86, sending a hlt instruction will block the CPU
> until an interrupt trickles in (e.g. hw clock, etc...).
>
> I don't have GNUmach sources on the box I'm posting from right now,
> so here's how it is done on FreeBSD -STABLE:
>
> /usr/src/sys/i386/i386/machdep.c:
>
> /*
>  * Shutdown the CPU as much as possible
>  */
> void
> cpu_halt(void)
> {
> 	for (;;)
> 		__asm__ ("hlt");
> }
>
> /*
>  * Hook to idle the CPU when possible. This is disabled by default for
>  * the SMP case as there is a small window of opportunity whereby a ready
>  * process is delayed to the next clock tick. It should be safe to enable
>  * for SMP if power is a concern.
>  *
>  * On -stable, cpu_idle() is called with interrupts disabled and must
>  * return with them enabled.
>  */
> #ifdef SMP
> static int cpu_idle_hlt = 0;
> #else
> static int cpu_idle_hlt = 1;
> #endif
> SYSCTL_INT(_machdep, OID_AUTO, cpu_idle_hlt, CTLFLAG_RW,
> 	&cpu_idle_hlt, 0, "Idle loop HLT enable");
>
> void
> cpu_idle(void)
> {
> 	if (cpu_idle_hlt) {
> 		/*
> 		 * We must guarentee that hlt is exactly the instruction
> 		 * following the sti.
> 		 */
> 		__asm __volatile("sti; hlt");
> 	} else {
> 		__asm __volatile("sti");
> 	}
> }
>
> cpu_halt() and cpu_idle() are called from the scheduler in
> /usr/src/sys/i386/i386/swtch.s, but that's off topic here,
> so no need to reproduce that code.
>
> Anyone with GNUmach sources available: could you please
> post the code of the idle loop? A quick fix for single
> CPU systems should be possible and painless, though there
> _will_ be issues in the SMP case.

In oskit/x86/main.c:

void
machine_idle (int mycpu)
{
	asm volatile ("hlt" : : : "memory");
}

Same for GNUMach 1.3. In the GNUMach 1.3 changelog I see this function was
added a few days before 1.3 was released. Perhaps Jochen is using GNUMach 1.2?

Thanks,
Marco
https://lists.debian.org/debian-hurd/2003/07/msg00153.html
SeleniumCheckSettings class

Platform: Selenium 3. Language: Java SDK.

The methods in this class are used as part of the check Fluent API to configure and execute checkpoints. To use these methods, first create a target object using a method from the Target class, then call one or more of the methods in this class on the returned object, chaining them one after the other using the '.' operator.

Import statement

import com.applitools.eyes.selenium.fluent.SeleniumCheckSettings;

Methods
- accessibility() - Use to define an accessibility region and its type.
- beforeRenderScreenshotHook() - Use this to supply a JavaScript snippet that should be executed on the Ultrafast Grid before the DOM is rendered.
- content() - If called without parameters, sets the match level for this target to CONTENT. Otherwise, defines that a match level of CONTENT should be used for the regions passed as parameters.
- floating() - Add one more floating region to this target.
- fully() - Defines if the screenshot for this target should be extended to contain the entire element or region being checked, even if it extends the borders of the viewport.
- ignoreCaret() - Use this method to tell Eyes that for this target it should detect mismatch artifacts caused by a blinking cursor and not report them as mismatches.
- ignoreDisplacements() - Sets whether Test Manager should initially display mismatches for image features that have only been displaced, as opposed to real mismatches.
- layout() - If called without parameters, sets the match level for this target to LAYOUT. Otherwise, defines that a match level of LAYOUT should be used for the regions passed as parameters.
- layoutBreakpoints() - Configure the SDK to capture multiple DOM images for multiple viewport sizes.
- matchLevel() - Use this method to set the default match level (the type of matching) to use for this target when matching the captured image to the baseline image.
- scrollRootElement() - Normally, Eyes will select the most appropriate element to scroll to execute the fully method. You can use the scrollRootElement method to specify the element to scroll explicitly.
- strict() - If called without parameters, sets the match level for this target to STRICT. Otherwise, defines that a match level of STRICT should be used for the regions passed as parameters.
- variationGroupId() - Set the variation group ID for this checkpoint.
- visualGridOptions() - Use this method to set configuration values for the Ultrafast Grid.
- withName() - Assigns a name to the checkpoint.
https://applitools.com/docs/api/eyes-sdk/index-gen/class-checksettings-selenium-java.html
Earlier I discussed how you could manually get the software keyboard (SIP) to display whenever a TextBox control gained focus. There were potentially a lot of event handlers to write, two for every control on a form. Today I will show you an alternative approach that utilises less code but also has some additional benefits.

A common thing I do while testing new Windows Mobile applications is to tap-and-hold on a text field. Very well behaved applications should pop up a context sensitive menu containing cut/copy/paste style options for the current control. Surprisingly, very few applications actually pass this test, even though it is a feature that has been built into the operating system for a while.

Within Windows CE the SIPPREF control can be used to automatically implement default input panel behavior for a dialog. It provides the following features:
- The Input Panel is automatically shown/hidden as controls gain and lose focus.
- Edit controls have an automatic context menu with Cut, Copy, Paste type options.
- The SIP state is remembered if the user switches to another application and later returns to this form.

In my mind this has three advantages over the process I previously discussed.
- You add the SIPPREF control once and it automatically hooks up event handlers for each of your controls. With the manual event handler approach it's easy to add a new control and forget to hook up the events required to handle the SIP.
- You get free localisation. Although you could create a custom context menu for cut/copy/paste, you would need to localise the text into multiple languages yourself (if you are concerned with true internationalisation, that is) and it's another thing to hook up for each control.
- You get standardised behavior. By using functionality provided by the operating system you are ensuring that your application has a natural and expected behavior to it.
If the platform ever changes the conventions of SIP usage, your application will automatically be updated.

For the rest of this blog entry I will discuss how to go about utilising the SIPPREF control within your application. I have split my discussion into two sections. The first section will be of interest to native developers developing in C or C++, while the second section is intended for .NET Compact Framework developers. Each section contains a small example application which demonstrates the behaviour of the SIPPREF control within the respective environment.

When you run the sample applications you will notice that the SIP does not pop up when you click within a text box, and a tap-and-hold operation yields nothing. This behaviour changes when a SIPPREF control is added to the dialog, which can be achieved by clicking the sole button.

Native Developers [Download sipprefcontrolnativeexample.zip 16KB]

In order to use the SIPPREF control we must first request the operating system to register the SIPPREF window class. We do this by calling the SHInitExtraControls function. This step only needs to be done once, so it is typically done during your application's start-up code. It is very easy to call, as the following example demonstrates:

    #include <aygshell.h>

    SHInitExtraControls();

Since SHInitExtraControls lives within aygshell.dll, we also need to modify our project settings to link with aygshell.lib, otherwise the linker will complain that it can not find the SHInitExtraControls function.

Once we have registered the SIPPREF window class, we simply create a SIPPREF control as a child of our dialog. When the SIPPREF control is created it will enumerate all sibling controls and subclass them in order to provide the default SIP handling behaviour.
The SIPPREF control must be the last control added to your dialog, as any controls added after the SIPPREF control will not be present when the SIPPREF control enumerates its siblings, and hence will not be subclassed to provide the proper SIP handling. If dynamically creating the SIPPREF control, a good place to do this is within the WM_CREATE or WM_INITDIALOG message handler, as the following code sample demonstrates:

    case WM_INITDIALOG:
      // Create a SIPPREF control to handle the SIP. This
      // assumes 'hDlg' is the HWND of the dialog.
      CreateWindow(WC_SIPPREF, L"", WS_CHILD, 0, 0, 0, 0,
                   hDlg, NULL, NULL, NULL);

As an alternative to dynamically creating the SIPPREF control, we can place the control within our dialog resource by adding the following control definition to the end of a dialog within the project's *.rc file:

    CONTROL "",-1,WC_SIPPREF, NOT WS_VISIBLE,-10,-10,5,5

Depending upon your development environment you may even be able to do this entirely from within the Resource Editor GUI. For example, within Visual Studio 2005 you could drag the "State of Input Panel Control" from the Toolbox onto your form to cause a SIPPREF control to be added.

.NET Compact Framework Developers [Download sipprefcontrolexample.zip 16KB]

The process of using the SIPPREF control for a .NET Compact Framework application is fairly similar to that of a native application. Since the .NET Compact Framework does not natively support the use of dialog templates, we must use the CreateWindow approach to create a SIPPREF control dynamically at runtime. The first step is to declare a number of PInvoke method declarations for the various operating system APIs we need to call.
    using System.Runtime.InteropServices;

    [DllImport("aygshell.dll")]
    private static extern int SHInitExtraControls();

    [DllImport("coredll.dll")]
    private static extern IntPtr CreateWindowEx(uint dwExStyle,
      string lpClassName, string lpWindowName, uint dwStyle,
      int x, int y, int nWidth, int nHeight,
      IntPtr hWndParent, IntPtr hMenu, IntPtr hInstance, IntPtr lpParam);

    private static readonly string WC_SIPPREF = "SIPPREF";
    private static readonly uint WS_CHILD = 0x40000000;

One interesting fact is that we PInvoke a function called CreateWindowEx, while the native example above called CreateWindow. If you dig deeper into the Windows header files you will notice that CreateWindow is actually a macro which expands into a call to CreateWindowEx. At the operating system level the CreateWindow function doesn't actually exist.

With this boilerplate code out of the way, the solution is very similar to the native one:

    protected override void OnLoad(EventArgs e)
    {
      base.OnLoad(e);

      // Initialise the extra controls library
      SHInitExtraControls();

      // Create our SIPPREF control which will enumerate all existing
      // controls created by the InitializeComponent() call.
      IntPtr hWnd = CreateWindowEx(0, WC_SIPPREF, "", WS_CHILD,
        0, 0, 0, 0, this.Handle, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
    }

In the above example we simply create the SIPPREF control within the OnLoad method of our form. Within the downloadable sample project I have wrapped up this code into a static method called SIPPref.Enable(Form f) so that it can easily be reused between forms.

Hopefully today I have shown you that having knowledge of the underlying operating system is still a useful skill to have for .NET Compact Framework developers. Knowing the features provided by the operating system can allow you to add some neat functionality to your applications with little additional effort on your behalf.

Hi, unfortunately it doesn't work on TextBoxes with a non-empty PasswordChar (at least on WM6). The TextBox doesn't repaint. Do you have ideas how to fix it?

It basically works from .NET CF via P/Invoke.
But I discovered a few things:
(1) If a Panel is used and the TextBox is inside that Panel, the handle of the panel (and not the form) needs to be passed to hWndParent in the call to CreateWindowEx. If the form handle is passed, no context menu will appear.
(2) The context menu won't appear on TextBoxes which are added to the form programmatically after the call to CreateWindowEx. Attempting to call CreateWindowEx again on the same form handle will hang the application!
(3) If an InputPanel is used to resize the form when the SIP appears, the context menu will disappear after a while. I am not sure exactly what triggers this, but I suspect it stops working either when the SIP is raised, or when the form is resized.
(4) Like Alex Prozor said, this will not work on a password TextBox. When I tried this, my TextBox is repainted properly but no context menu appears. Any idea how to fix this? Thanks

Hello, thank you for the code. But I would like to comment that this code only adds the copy/cut/... context menu to the controls that are added directly to the form, not to any TextBox added to a Panel or other container. So you have to make the same call for every panel that contains controls:

    IntPtr hWnd = CreateWindowEx(0, WC_SIPPREF, "", WS_CHILD,
      0, 0, 0, 0, panel1.Handle, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
    IntPtr hWnd = CreateWindowEx(0, WC_SIPPREF, "", WS_CHILD,
      0, 0, 0, 0, panel2.Handle, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);

... and so on. By the way, congratulations on the blog.

This code works fine for me to add cut/paste functionality to an edit control, but it also starts controlling the SIP keyboard, which is undesired for me. Is there any way I can add cut/paste functionality while control of the SIP keyboard remains with me?

Does the new hWnd handle need to be released after the form is closed?
http://www.christec.co.nz/blog/archives/146
Opened 5 years ago. Last modified 9 months ago.

#3901 enhancement new: deferToProcess

Description

Create an implementation of the threadpool and deferToThread using the multiprocessing library instead of the threading library. There is a backport of the multiprocessing library currently available from

Change History (16)

comment:1 Changed 5 years ago by exarkun
- Cc exarkun added
- Owner changed from glyph to benliles

comment:2 Changed 5 years ago by glyph
Someone interested in this ticket should also be aware of Ampoule. In "normal" python programs, multiprocessing often appears to work, but in Twisted's case, the undocumented serialization algorithm and poor error-reporting behavior are likely to be highly confusing, since un-serializable things like sockets are lying around all over the place. Consider the output of this program:

    from multiprocessing import Process, Queue
    from socket import socket

    def f(q):
        print 'QUEUE PUT'
        skt = socket()
        q.put(skt)
        print 'PUTTED!'

    def go():
        q = Queue()
        proc = Process(target=f, args=(q,))
        proc.start()
        skt = q.get()
        print 'skt', skt

    go()

So, personally I believe that something like Ampoule which has very clear, explicit serialization rules would be a better basis for deferToProcess than multiprocessing, even if it's a bit more work to define the interface between processes. The other thing about Ampoule is that it opens the door to the possibility of spreading the process pool around multiple hosts. But thanks for filing the ticket!

comment:3 Changed 5 years ago by benliles
Ampoule is a bit more powerful and requires more code changes than we were willing to do. The idea was to have something that was as close to being directly replaceable as possible. Along those lines, I've been working on. It provides a deferToProcess function. I haven't written tests for it yet, but I will as soon as I have time and will close the ticket at that point.
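The failure mode glyph describes can be seen without starting a process at all: multiprocessing's queues hand every object to pickle, and pickle refuses live OS resources like sockets. A minimal Python 3 sketch (not Twisted or Ampoule code, added here for illustration):

```python
import pickle
import socket

def can_pickle(obj):
    # multiprocessing.Queue serializes with pickle under the hood, so
    # anything pickle rejects cannot cross a process boundary.
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

plain = {"foo": "bar"}   # ordinary data: serializes fine
skt = socket.socket()    # live OS resource: not serializable

print(can_pickle(plain))  # True
print(can_pickle(skt))    # False
skt.close()
```

This is why a deferToProcess built directly on multiprocessing would surprise Twisted users, whose objects routinely hold references to sockets and other unpicklable state, and why the thread favors explicit serialization rules such as Ampoule's.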
comment:4 Changed 5 years ago by exarkun
(having a branch author when there is no branch doesn't make sense, so removing that field)

"Ampoule is a bit more powerful and requires more code changes than we were willing to do. The idea was to have something that was as close to being directly replaceable as possible."

Can you elaborate on this a bit? What API are you porting from? What differences in Ampoule make using it too much work?

"I haven't written tests for it yet, but I will as soon as I have time and will close the ticket at that point."

If you want to propose this code for inclusion in Twisted, I cannot overemphasize the need for comprehensive test coverage. Ideally, you would be developing all of this code in a test-driven manner so that you never introduce any logic without first writing tests for it. Dealing with processes is not entirely trivial, so this is all the more important. (Also, somewhat less interestingly, but unfortunately just as importantly, it would be best if any code you intend to contribute to Twisted were MIT/X licensed.) If you're not intending to contribute this code for inclusion in Twisted, then renaming it would be a good idea. Also, in that case, the code, even when complete and perfect, probably cannot resolve this ticket.

Having only skimmed the implementation, I can't say I have a complete grasp of the code in question, so my next question may have an obvious answer, which I hope you will feel free to point out to me. If the API of Ampoule is what causes problems for you, then perhaps you can create the API you desire but implement it in terms of Ampoule? Hopefully this will remove most of the non-trivial implementation work from your code while still resolving your issue.

comment:5 Changed 5 years ago by truekonrads
- Cc truekonrads added

I did some work on a similar topic with pb and reactor.spawnProcess a while ago.
The approach I used was to execute a stand-alone Twisted application and communicate with it via (can't remember now exactly) pb or a plain LineReceiver. This brutish approach worked and didn't have serialization problems. The expected usage would be:

    # myapp.py
    from twisted.internet.defer import succeed
    from twisted.someplace.processhandler import otherEnd

    def doSomething():
        a = complexComputation()
        return succeed(a)

    if __name__ == "__main__":
        otherEnd(doSomething)

    # and on the spawning end
    d = reactor.deferToProcess('my.project.myfile.py')

comment:6 Changed 5 years ago by exarkun
This probably shouldn't be exposed as a method of the reactor. Instead, there should be some other object which wraps the reactor and provides the feature. There may be multiple implementations, too. For example, a pool-based implementation which re-uses processes for multiple calls; or an implementation that runs processes on another host via SSH. I think Ampoule is more like this.

comment:7 Changed 5 years ago by truekonrads
Pondering about this, a realistic approach would be to go for limited data passing between processes. Simple serialization and separation:
- The subprocess can't access data/methods on the parent via this interface.
- If it can't be passed via AMP/pb then it can't be passed. No sockets, C objects, etc. Pure Python objects.

I see this as a 100 line thing. Probably not part of reactor.deferToProcess but rather twisted.multiprocessing. If it will use pb, then I think spawning on different hosts shouldn't be much of a problem. I could try to tackle this if we'd have a definitive design.

comment:8 Changed 5 years ago by exarkun
Please look at Ampoule.

comment:9 Changed 5 years ago by darkporter
All processors today are multicore, why have this functionality in a separate project (Ampoule)?
I'm envisioning something simple:

    # In same process
    reactor.listenTCP(1234, MyFactory())

    # Forwarding to a pool of processes
    reactor.listenTCP(1234, MyFactory(), loadBalanceOverAllCores=True)

or:

    reactor.listenTCPAndLoadBalance(1234, MyFactory(), stickyByIP=True, distribution=RoundRobin)

Or this information could be in the factory:

    class MyFactory(protocol.Factory):
        def __init__(self):
            self.protocol = MyProtocol
            self.multiprocess = OneForEachCore

Or maybe there's a subclass of Factory called MultiprocessFactory. I don't really understand the discussion about non-picklable data being a problem. The only data being sent would be arguments to dataReceived, connectionMade, etc. The only things they need are bytes from the socket and integer socket descriptors, right?

comment:10 Changed 5 years ago by exarkun
Hi darkporter, I'm glad you're interested in this. The functionality is currently in a separate project because this was a convenient way to develop it. The primary author has expressed interest in moving the functionality into Twisted, though, and there is no fundamental reason why this shouldn't happen. It's mainly a question of effort. I suggest you get in contact with him (dialtone) to discuss how you can help with the issue. I'm not intimately familiar with the Ampoule code, so I can't say it is ready for inclusion as-is. Identifying what remains to be done to make it suitable for inclusion is probably the first task to knock down (unless someone has done it already - the place to check for that would be the Ampoule issue tracker, though). There are also some known bugs in Ampoule that it would make sense to fix before trying to include the code in Twisted.

As for your listenTCP API suggestions, I don't think these are really related to the feature this ticket is describing. The deferToProcess API serves a distinct use case. Changes to listenTCP serve a different set of use cases and should probably be addressed separately.
Feel free to raise these ideas on the mailing list (I think you'll get some push back, but the discussion should hopefully result in something that satisfies your requirements and ours :). Thanks again for getting involved! :)

comment:11 Changed 5 years ago by darkporter
You're right -- deferToProcess is a general solution for moving work to another process. What I'm thinking of rather is to make the main reactor loop "core aware" and smart enough to spawn subprocesses on each core (which are then load balanced over). I'll raise the issue on the mailing list.

comment:12 Changed 4 years ago by detly
- Cc jason.heeris@… added

comment:13 Changed 3 years ago by drednout
- Cc drednout.by@… added

comment:14 Changed 2 years ago by therve
I'll mention here the new multi-process capability of trial, in twisted.trial._dist, which can be a base for this new mechanism, and/or use this new mechanism once in place.

comment:15 Changed 11 months ago by cengizkrbck
- Cc karabacakcengiz@… added

comment:16 Changed 9 months ago by jmehnle
- Cc julian@… added

This ticket could be improved in several ways: Taking advantage of multiple processors is an oft requested feature, so the spirit of this ticket is not at all misplaced! However, it is not really actionable as-is. Please clarify it with respect to the above mentioned points. Thanks!
http://twistedmatrix.com/trac/ticket/3901
Best Language for .Net?
29 Aug

This guy wants to know which is better: C# or VB.NET. I think we can provide an alternative choice, can't we?

© 2009 Nick Hodges | Entries (RSS) and Comments (RSS)

Done!
August 29th, 2006 at 8:42 am

You mean C++/CLI, right?
August 29th, 2006 at 11:31 am

Sorry it took me so long to comment back, Steve, but I had to re-attach my ass, as I had laughed it off. Nick
August 29th, 2006 at 11:32 am

C#, no doubt. But if we speak about native RAD development…
August 29th, 2006 at 11:47 am

The best language for .Net will come soon to earth after a nice and High Landing.
August 29th, 2006 at 12:00 pm

I found the following response to be interesting: "I started .NET development as a C# programmer and wrote C# code for 4 years. I did a little VB.NET but really hated the language style. Today Delphi is my language of choice for Win32 and .NET development." - Kirby Turner ()

It's not often you hear of people starting out their .NET development in C# or VB.NET and then migrating to and preferring Delphi for .NET. It would be very interesting to hear exactly what Kirby preferred about Delphi for .NET, and I could well imagine his story would sit nicely on marketing material too.
August 29th, 2006 at 1:08 pm

Not Delphi for sure :-(. I recommend against it on big projects after using it for the last two+ years.
August 29th, 2006 at 3:16 pm

Borland (DevCo) needs to raise the bar on the IDE (stability and performance) and on version control integration. 7 was below par, 8 was crap and a waste of money, 2005 I did not use extensively but it was a little better than 8, 2006 we are using and it is on par with 7, so there is a glimmer of hope. But where is .Net 2.0 support, and delegates? I want to use Generics too! Come on DevCo!

C# was the language dotNet was designed around and for.
VB.Net was adapted to it, making it a runner-up - it has all the features, but carries along old problems and baggage. Delphi was also, poorly, adapted to it. Yes, I said poorly. Don't bother attacking me for it; the Delphi adaptation has numerous weaknesses, from the namespace implementation to version support. The version support alone removes Delphi as a contender completely when you evaluate which language is best for dotNet. Sadly, the roadmap as laid out so far does nothing to address these problems. So, if you want to keep using the OLD framework, Delphi.net will certainly get you there. If you want to use the newest framework and the newest features, go get C# or VB.Net from MS, because you can't get them from BDS. Kinda sucks as an advertising statement for DevCo, doesn't it?
August 29th, 2006 at 3:59 pm

C Johnson - Sorry, but it's false that .Net was designed around and for C#. .Net is designed to be language neutral. Here's a quote: "The Common Language Runtime (CLR): a language-neutral development & execution environment that provides services to help 'manage' application execution." If you want to argue with Anders Hejlsberg, Brad Abrams, and the rest of the CLR team on this point, please feel free to do so. Obviously, I don't agree that the Delphi implementation is "poor". It's different than C#, yes, but of course, that hardly qualifies it as "poor". And my dream to hear a post from you that doesn't sound like it came from the Grinch lives on. Nick
August 29th, 2006 at 5:16 pm

For a software engineer, learning new stuff is part of the game. This makes "Best Language" not such a big part of the issue, as learning a new language is not such a big deal in the overall picture.
A more important question might be which is the most productive IDE when writing for .Net - the answer to that question is determined in part by whether the product crashes multiple times every day.
August 29th, 2006 at 5:26 pm

As much as I love Delphi, I have to be realistic: Delphi is not the best language for .NET. If you had to choose one language right now, it would probably have to be C#. Delphi is not a first-class citizen in .NET; it is the king of Win32, we all know that, but .NET is a whole different story (&begin, &end anyone?)
August 29th, 2006 at 5:42 pm

I agree with C# being the first choice. Especially given that Delphi's support for the latest versions of .NET is (and always will be?) lagging behind C#.
August 29th, 2006 at 7:49 pm

I put out a post on the aforementioned blog from Nick. On the subject of languages, .NET is not about languages. I have heard it said that with .NET the language wars are over. Now, I personally have a problem with C#. What it boils down to for me is the fact that if you take .NET away from C#, it falls flat on its face. I don't know if this is a Microsoft rule for anyone that wants to write a C# IDE, but that is a problem to me. You take the Delphi language away from .NET, and you still have the choice of Win16/Win32/Linux. I much, much prefer to choose a language that gives me flexibility of platform while not having the intimidation factor that C++ carries for me.
August 29th, 2006 at 8:10 pm

I agree with Theodore. My company's policy also puts support for .Net at a very low priority. Instead, Un*x platforms are a higher priority, including MacOS. Windows is still very functional without .Net, so why do we need it then?
August 29th, 2006 at 9:01 pm

Theodore -> That is perhaps the most spurious argument I have ever heard. If you take anything's ecosystem away, it's literally a fish out of water. To extend that metaphor, just because Delphi can tread dotNet waters doesn't make it the best swimmer.
In fact, you could EASILY adapt C# to Win32 if you were so inclined. But just as Delphi's deterministic destruction goes away in dotNet's garbage-collected environment, C# would have to adopt deterministic destruction, and indeed, it already has a mechanism well suited for that (just move IDisposable down to the base object like Delphi has). The language would be just as suited to Win32 development at that point. The claim that dotNet is not about languages really only holds for integration barriers. You can easily mix two well implemented, first-class dotNet language products without a problem because of the common library and common type system. Unfortunately, Delphi already has implementation problems (namespaces, depending on extra core libraries) AND it isn't even playing in the same playground anymore - dotNet 2.0 vs dotNet 1.1/1.0. The only thing that simply can't be argued here is that Delphi is falling further and further behind in the dotNet world. By the time dotNet 2.0 finally gets added, DevCo's Xaml designer support will be so far behind MS's support that there literally will be no real reason to ever want to touch it again for a dotNet product. This is one of the reasons I saw Delphi having a better chance being sold to MS than anyone else. Sadly, that hope has long since been extinguished, and we'll just have to hope DevCo realizes it sooner rather than later and diverts all their efforts into creating the best Win32 compiler they can, rather than wasting resources trying to create the second or third best out-of-date dotNet compiler. Honestly, Delphi.Net died the moment Microsoft was explicitly excluded from buying it.
August 29th, 2006 at 9:24 pm

Delphi .NET is marketed as the way to compile the same application to both .NET and Win32. How can anyone expect it to be the best for .NET if it is not designed to be the best for .NET? I wonder why C++ looks funny. In VS 2005, the .NET stuff finally looks like a natural part of the language.
All the Delphi "preserve your investment" bla-bla holds for C++ too. It can mix all .NET with all Win32. Maybe a bit too complex for a beginner, but why funny?
August 30th, 2006 at 12:05 am

C++/CLI is simply awesome if you are coming from a C++ background and need to mix native and .NET for whatever reason. Very easy to use and very powerful.
August 30th, 2006 at 12:17 am

I agree with others who state that Delphi is not the best language for .NET. It definitely isn't. I judge the worth of a language (for .NET) by the kind of code it produces when represented in MSIL. I rate them in the order of C#, VB (with strict option ON) and Delphi. C++/CLI generates horrendous MSIL unless compiled with the /clr:safe option, IMO. Delphi (the language) has awesome features, and the capability of producing code that compiles natively makes it FOR ME the best language for .NET.
August 30th, 2006 at 12:44 am

I agree with Fritz. C++/CLI is great in the latest release. Mixing managed and unmanaged code in this release is friggin' sweet!
August 30th, 2006 at 1:00 am

I'd say C#. Delphi.NET is still V1.1, not V2.0; same with C# Builder.
August 30th, 2006 at 2:11 am

Why not C++Builder for the JVM? It would be a nice choice.
August 30th, 2006 at 4:29 am

The best language is the one which you're best at using - or the one you're most familiar with. Be it VB.NET, C# or Delphi .NET - personally, I love Delphi, since 1996!
August 30th, 2006 at 6:11 am

C Johnson, everywhere I run across you, you do nothing but spout nonsense and spread FUD. If you're so set against Delphi, why do you still post about it everywhere? Why waste your time (and ours) writing about something you so obviously detest? Do yourself (and the rest of us) a favor and just go away. You'll be much happier and have so much more time on your hands to use in kissing MS's butt. Ken
August 30th, 2006 at 8:56 am

What do you mean by 'best'? The language which: 1. allows you to write the least lines of code to accomplish a task? 2.
that a C++ coder can understand the fastest? 3. that results in the fastest executable without profiling and optimisation? 4. that can be easily deployed on the greatest number of platforms (not talking OSes here, I mean web, desktop, mobile etc.)? Or 5. the language that is most accessible and the one you get the feeling you can trust? I think that the answer is that points 1 to 4 SHOULD always be taken into account, but point 5 WILL always be taken into account; with this in mind I think that Borland/DevCo needs to be doing much more to make the language accessible. I went into Waterstones in the U.K. today (the biggest national bookseller in the U.K.) and there were at least 10 books about .NET platforms, all of which included the Express version on CD, some of which were published by Microsoft. True, DevCo is going to have a hard time matching this, but how about links from the IDE welcome page that provide sample applications? How about sending boxes of CDs to universities to hand out at the start of courses, containing Turbo Delphi .Net along with tutorials? What about sponsored articles in magazines ('Learn .NET the easy way with Delphi')? What concerns me about DevCo's current approach, with all the retro stuff on the Turbo web site etc., is that they are largely preaching to the converted and maybe trying to be a little too clever, trying to do the equivalent of creating a cult. I am not sure that lots of effort put into 'clever' advertising and marketing is key with a product that a lot of new developers haven't even heard of. I think it's more about (after fixing the performance and stability issues, actually mainly stability) high-visibility, high-volume advertising and giving people the chance to see for themselves how great Delphi is. I mean, think about why Java was so popular so soon after its release. I remember lots of us at university considered it to be the 'best' language when we started learning programming; why? Platform cross-compatibility?
I doubt it; it was never a requirement for coursework. Performance? Yeah, right! The number of lines of code needed? No, you'd typically need to write more actual code than with Visual C++ or VB. So what made it the best? Basically it was the support from Sun: loads of high-profile advertising that got you to the download site, and some very good introductory documentation when you got there. With Delphi, new developers currently need to go looking for a language they may not have heard of, download it without knowing whether it will fit their purposes, and then go looking for introductory material (there may well not be a single book in their local bookstore). Yes, there are introductory videos on the Turbo site (good idea), but where does the new developer go from there? If nothing else, there should be links to sites that will provide information - not just 'here's a bunch of links', but a real explanation of why they should be going there and which of their objectives they can achieve. Also, maybe DevCo could talk to people like Marco Cantu about reproducing some of their basics material in a cut-down, DevCo-branded format for free distribution, with reciprocal links back to their sites; this would initially make DevCo's support a lot more credible than just web links and ultimately help to promote the real sense of community spirit attached to Delphi.
August 30th, 2006 at 11:14 am

If you are a developer tasked with choosing a language and want a quick answer to the 'which .NET language' question above, here goes: Delphi .NET will usually let you move across from old versions of Delphi without too much trouble (basically it's often just a recompile, unless you are using third-party components, in which case you may have problems). C# seems to be becoming the most popular .NET language and IMO gives you the best option of jumping ship to Visual Studio in future if you want to. Managed C++ is probably not the way to go unless you want to integrate legacy C++ code, and I bet
that Microsoft aren't as committed to it as some other languages either. VB.NET provides a relatively easy upgrade path from Visual Basic, but expect a learning curve as it's definitely not as seamless as the Delphi to Delphi.NET transition. Also, if you know VB but not Delphi, bear in mind that a lot of VB developers think Delphi has quite a familiar feel to it. I guess J# is a possibility if you are a Java developer as well, although J# kind of misses the point in that it isn't cross-platform unless you count Mono, and Mono and .NET compatibility is definitely not something to count on.
August 30th, 2006 at 11:46 am

David: Regarding Microsoft's commitment to C++ - I think that they truly are committed to C++, and managed C++. Just look at some of their blogs and videos on Channel 9. C++ is the foundation of the operating system, Office and a majority of their products. Even though .NET is touted as the great new thing, C++ is still their bread and butter. And of course, when you have all that legacy code, C++/CLI is the way to expose it to the .NET world. Sure, a lot of application developers talk about .NET; some go all ga-ga over it. But were they to write something like, say, Windows Media Player today, I still doubt they would do it in something like C#. C++ is their language of choice for their products.
August 31st, 2006 at 1:35 am

That easy: 1) C# 2) C++/CLI 3) Delphi 4..n-3) every other .NET language n-2) IL n-1) hex editing of .NET assemblies n) VB
August 31st, 2006 at 9:33 am

Diego Barros, I agree that Microsoft are committed to C++ and do think that if you have legacy C++ code, C++/CLI is a good solution. What I said was that I don't think they are 'AS' committed to C++ as some other languages. When you look beyond the technical reasons for .NET, the business reasons are clearly to tie developers into the Windows platform; obviously, C++ can't be used to do this in the same way that C# and VB.NET can.
The whole point of the .NET strategy from Microsoft's point of view is to create an ever-evolving framework that they control, that puts them ahead of the game and leaves everybody else playing catch-up. Yes, they have submitted .NET technologies to standards committees, but the commercial and technical realities make it a Windows technology; despite people talking up Mono, it isn't a technology that is ever likely to provide a reliable, compatible alternative to .NET. Yes, it is possible to produce applications that are cross-compatible between .NET and Mono, but it isn't as clear-cut as, for example, Java. This was the idea from the start. Microsoft is safe from anti-trust action and has a defence against the open-source arguments against proprietary code, but enough doubt will always exist to ensure that its implementation remains dominant. In the short to medium term it has to support C++, to extend its own applications and to support the huge amount of legacy code out there, but it has a very strong interest in trying to move developers away from C++ and onto C#. 1. The Windows lock-in factor. 2. Higher CPU requirements support the sale of new computers (and therefore new OS licensing sales) - so no, they won't try to write Windows Media Player in .NET today, but they will as soon as they can get away with it. 3. Some actual genuine technical advantages of the technology, not least that it should help to improve security in the long term.
August 31st, 2006 at 10:00 am

Interesting that nobody has mentioned Chrome so far. It's basically been my main dev language for quite some time, and I really like what they made out of Pascal. Kind of Pascal on steroids. Whatever C# can do, Chrome can too. And then some. Ever worked with explicitly implemented interfaces in C#? Freakin' horrible! Though GUI stuff would be better done in C#, as there is better tools support.
September 3rd, 2006 at 11:31 am

Fortunately, I don't do GUI stuff. I think it's well understood that F# is the best .NET language.
September 3rd, 2006 at 4:06 pm

Someone said they were surprised that MS did not buy Delphi. They did: they bought Anders Hejlsberg.
September 3rd, 2006 at 9:23 pm

I use Chrome as well for all my development.
September 4th, 2006 at 2:13 am

I also use it for GUI apps… I did give D.net a try 2 years ago, but the many hacks to get it compatible with D32 were too much for me… for example, strings are "1"-based in D.net but all the MS API functions are zero-based… very messy. For .Net dev, Chrome any time… (generics etc…)

Who on earth uses Borland products for developing .NET applications?
September 8th, 2006 at 12:50 pm
http://blogs.embarcadero.com/nickhodges/2006/08/29/27110
First of all, you might be asking, what is SEO? Well, "Search Engine Optimization" is about making small modifications to parts of your website that make it easier for search engines to crawl, index and understand your content. Here, I'm going to show a quick guide on how to implement the SEO basics in your Django project.

Meta tags

Meta tags are a way to provide search engines with information about your site. These tags must be placed within the <head> tag of the HTML document.

Description

<meta name="description" content="A good summary about your page.">

This meta tag is the most important: it provides a short description of each page of your website. Search engines, such as Google, will often use it as the snippet for your pages.

Good Practices
- Write a unique description for each page
- One to two sentences or one short paragraph
- Write a good summary of what your page is about

Bad Practices
- Use generic descriptions
- Fill the description with only keywords
- Use the same description across all pages of your website

Title

<title>Django REST Framework 3.1 -- Classy DRF</title>

When your page appears in a search result, this tag will usually be in the first line.

Good Practices
- Brief and descriptive title
- Unique title tag for each page

Bad Practices
- Using a single title for all pages of your website
- Using generic titles

How to do it with Django?

Well, there are many ways to include meta tags in your Django project.

- The simplest way is to add them directly in your template's pages, but I strongly do not recommend doing this, for obvious reasons.
- Another way is to pass the meta tags of each page to their respective view through the context_data.
- If you have a really big project, with hundreds or thousands of pages, you can also try the Django-SEO framework, but as of the date this post was written, it was not ready for Django 1.7.

Don't waste your time thinking about the keywords meta tag; major search engines don't use it anymore.
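The context_data approach can be sketched framework-free: each page supplies its own title and description through a dict, and a template fills them in. The PAGE_META registry and render_head helper below are invented for illustration; in Django the dict would come from a view's get_context_data() and the tags would live in the template's <head>:

```python
# Hypothetical per-page meta registry; in Django this would come from
# each view's get_context_data() rather than a module-level dict.
PAGE_META = {
    "home": {
        "title": "Acme Shop",
        "description": "Hand-made widgets, shipped worldwide.",
    },
    "about": {
        "title": "About Acme Shop",
        "description": "Who we are and why we make widgets.",
    },
}

# Stand-in for the template's <head> section.
HEAD_TEMPLATE = (
    "<title>{title}</title>\n"
    '<meta name="description" content="{description}">'
)

def render_head(page):
    # Each page gets its own unique title and description,
    # matching the "good practices" listed above.
    return HEAD_TEMPLATE.format(**PAGE_META[page])

home_head = render_head("home")
```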
Sitemap.xml

A sitemap is an XML file where you can list the pages of your website to tell search engines how your website is organized. Search engine web crawlers read this file to crawl your site more intelligently. It looks like this:

<urlset xmlns="">
  <url>
    <loc></loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc></loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>

How to do it with Django?

Django comes with a sitemap framework that makes this task pretty easy.

- Add django.contrib.sitemaps to your installed apps.

INSTALLED_APPS = (
    # ...
    'django.contrib.sitemaps',
)

- Make sure your TEMPLATES setting contains a DjangoTemplates backend and the APP_DIRS option is set to True.

TEMPLATES = [{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'DIRS': [],
    'APP_DIRS': True,
    # ...
}]

- Create a sitemap.py file with your sitemap class:

from django.contrib.sitemaps import Sitemap

class MySiteSitemap(Sitemap):
    changefreq = 'always'

    def items(self):
        return Question.objects.all()

    def lastmod(self, item):
        last_answer = Answer.objects.filter(question=item)
        if last_answer:
            return sorted(last_answer)[-1].date

items is a method that should return the object list of your sitemap. lastmod can be a method or an attribute; when it's a method, it should take an object returned by items() and return that object's last-modified date as a Python datetime.datetime object. location can also be a method or an attribute; when it's a method, it should return the absolute path for an object returned by items(). When it's not specified, the framework will call get_absolute_url() on each object returned by items(). See other attributes and methods in the Django Sitemap Framework documentation.

- Set up your urls.py:

from django.contrib.sitemaps.views import sitemap
from entry.sitemaps import EntrySitemap

urlpatterns = patterns('',
    # ...
    # SEO
    url(r'^sitemap\.xml$', sitemap,
        {'sitemaps': {'entry': EntrySitemap}},
        name='django.contrib.sitemaps.views.sitemap'),
)

- Access your sitemap at /sitemap.xml on your site.

So, these are simple tips that will improve your search engine performance; you can learn other techniques in the links below:
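As a sanity check of what the framework serves at the sitemap URL, here is a dependency-free sketch that builds the same kind of <urlset> document with the standard library (the page data is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical page data standing in for the items()/lastmod() results.
PAGES = [
    {"loc": "https://example.com/", "lastmod": "2005-01-01", "changefreq": "monthly"},
    {"loc": "https://example.com/faq/", "lastmod": "2004-12-23", "changefreq": "weekly"},
]

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(pages):
    # <urlset> root with one <url> child per page, mirroring the
    # structure shown earlier.
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for page in pages:
        url = ET.SubElement(urlset, "url")
        for tag in ("loc", "lastmod", "changefreq"):
            ET.SubElement(url, tag).text = page[tag]
    return ET.tostring(urlset, encoding="unicode")

xml_doc = build_sitemap(PAGES)
```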
https://www.vinta.com.br/blog/2015/basic-seo-django/
Qt Creator fails to create header files for UIs (Dialogs)

Hi,

In my desktop application, I have created a dialog UI through the IDE: DlgLogin, derived from QDialog. I can find the .ui, .cpp and .h added to the project file (below).

SOURCES += main.cpp\
    mainwindow.cpp \
    dlglogin.cpp

HEADERS += mainwindow.h \
    dlglogin.h

FORMS += mainwindow.ui \
    dlglogin.ui

When I compile, there is no issue. But when I refer to the dialog like below, the program doesn't run.

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include <QMessageBox>
#include "dlglogin.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    DlgLogin login;
    login.setModal(true);
    login.show();
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

It gives the errors below:

mainwindow.obj:-1: error: LNK2019: unresolved external symbol "public: __cdecl DlgLogin::DlgLogin(class QWidget *)" (??0DlgLogin@@QEAA@PEAVQWidget@@@Z) referenced in function "public: __cdecl MainWindow::MainWindow(class QWidget *)" (??0MainWindow@@QEAA@PEAVQWidget@@@Z)
mainwindow.obj:-1: error: LNK2019: unresolved external symbol "public: virtual __cdecl DlgLogin::~DlgLogin(void)" (??1DlgLogin@@UEAA@XZ) referenced in function "public: __cdecl MainWindow::MainWindow(class QWidget *)" (??0MainWindow@@QEAA@PEAVQWidget@@@Z)
debug\Akounts.exe:-1: error: LNK1120: 2 unresolved externals

When I add #include "ui_dlglogin.h" in the header, it gives the error message below:

XXX\mainwindow.cpp:3: error: C1083: Cannot open include file: 'ui_dlglogin.h': No such file or directory

My build directory is outside the project folder and I cannot find the ui_dlglogin.h file inside it. I understand that the UI compiler is not able to generate the source code and hence the linker is throwing this error message. How do I resolve this? I have gone through the responses in most of the forums and they didn't work.
I am currently working on a Windows 10 machine with Qt Creator 4.2.1 and the MSVC 2015 32-bit compiler (my machine is 64-bit and I am using 64-bit Qt).

dheerendra (Qt Champions 2017): Can you check in which directory the ui_dlglogin.h file exists? Does it exist in some directory?

mrjj (Lifetime Qt Champion): Can you show your constructor of the dialog? It looks for DlgLogin::DlgLogin(class QWidget *). So does the destructor have default == NULL?

@dheerendra I didn't find ui_dlglogin.h anywhere. The build directory is outside the project directory and I didn't find it there. However, I can find ui_mainwindow.h there.

Hi, sometimes if you make important changes to a project it is a good thing to use 'Clean' + 'qmake' + rebuild. This might help the linker find what it needs. Eddy

@mrjj My dlglogin.h source code:

#ifndef DLGLOGIN_H
#define DLGLOGIN_H

#include <QDialog>

namespace Ui {
class DlgLogin;
}

class DlgLogin : public QDialog
{
    Q_OBJECT

public:
    explicit DlgLogin(QWidget *parent = 0);
    ~DlgLogin();

private:
    Ui::DlgLogin *ui;
};

#endif // DLGLOGIN_H

My dlglogin.cpp source code:

#include "dlglogin.h"
#include "ui_dlglogin.h"

DlgLogin::DlgLogin(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::DlgLogin)
{
    ui->setupUi(this);
}

DlgLogin::~DlgLogin()
{
    delete ui;
}

Glad you can move on! We have all been in the same situation. ;-) Every time I see linker errors like this, I clean my project as described. Could you please mark the topic as solved?

Pradeep Kumar: You have marked it as solved. Cheers. Thanks!
https://forum.qt.io/topic/79605/qt-creator-fails-to-create-header-files-for-ui-s-dialogs/10
Hi folks,

I am struggling to pass the value of a variable, to be used as an argument for a number of sequentially called Python scripts, themselves called from a bash script. Apologies for the pathname chaos.

Phase 1: read the case names from a text file, line by line, saving each name to $case.

#Determine how many cases were run
cd /Users/stuartmuller/Sort/PhD/'Analytical testing'
j=$(wc -l 'cases.txt' | awk '{print $1}')
IFS=$'\n'

#Copy AN output files for each case from VM to HOST and process
for line in $(cat cases.txt)
do
    case=$line
    #Copy file from VM to HOST machine
    /Library/'Application Support'/'VMware Fusion'/vmrun -T fusion -gu 'Stuart Muller' -gp gandalf copyFileFromGuestToHost /Users/stuartmuller/Documents/'Virtual Machines.localized'/'Windows 7.vmwarevm'/'Windows 7.vmx' 'C:\Program Files\UCR\STANMOD\Projects\3DADE\AN'$case'\CXTFIT.OUT' $scripting'/src/Stu/CXTFIT_'$case'.OUT'
    bash StuANoutputPrep.sh $case
done

Phase 2: call the Python script.

#Read in argument and save to case2
echo case2 = $1
cd $scripting/src/stu
python StuDeleteMultipleLines.py $case2

Phase 3: run the Python script using the value of $case2.

import sys
case3 = sys.argv[1]
print case3

The printed output for case3 is "sys.argv[1]" - this is not the value of the variable given as the argument (case3 = case2 = case = 'some text'), nor even the name of the variable as given in the argument (case2 or $case2).

Clearly I am new to this and bumbling my way around in the dark, so any assistance would be awesome :-)
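On the Python side this is just positional argument plumbing: sys.argv[1] holds whatever string the caller actually put after the script name, and an unassigned shell variable expands to nothing, so the child sees no argument at all (in Phase 2, `echo case2 = $1` only prints - it never assigns case2). A self-contained demonstration, where the inline child script is a stand-in for StuDeleteMultipleLines.py:

```python
import subprocess
import sys

# Inline stand-in for StuDeleteMultipleLines.py: print the first argument.
CHILD = "import sys; print(sys.argv[1])"

def run_child(case):
    # Equivalent of `python StuDeleteMultipleLines.py "$case2"` with the
    # shell variable actually assigned before use.
    result = subprocess.run(
        [sys.executable, "-c", CHILD, case],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip()

output = run_child("case_A")
```

In the bash wrapper this corresponds to writing `case2=$1` (no spaces, no echo) before calling `python StuDeleteMultipleLines.py "$case2"`.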
https://www.daniweb.com/programming/software-development/threads/256459/pass-value-of-bash-variable-to-python-script
std.string

String handling functions. Objects of types string, wstring, and dstring are value types and cannot be mutated element-by-element. For mutation while building strings, use char[], wchar[], or dchar[]. The *string types are preferable because they don't exhibit undesired aliasing, thus making code more robust.

License: Boost License 1.0. Authors: Walter Bright, Andrei Alexandrescu, and Jonathan M Davis. Source: std/string.d

- class StringException: object.Exception; - Exception thrown on errors in std.string functions.

- int icmp(alias pred = "a < b", S1, S2)(S1 s1, S2 s2); - Compares two ranges of characters lexicographically. The comparison is case insensitive. Use std.algorithm.cmp for a case sensitive comparison. icmp works like std.algorithm.cmp except that it converts characters to lowercase prior to applying pred. Technically, icmp(r1, r2) is equivalent to cmp!"std.uni.toLower(a) < std.uni.toLower(b)"(r1, r2).

- pure nothrow immutable(char)* toStringz(const(char)[] s); - Returns a C-style zero-terminated string equivalent to s.

- enum CaseSensitive; - Flag indicating whether a search is case-sensitive.

- pure ptrdiff_t indexOf(Char)(in Char[] s, dchar c, CaseSensitive cs = CaseSensitive.yes); - Returns the index of the first occurrence of c in s. If c is not found, then -1 is returned. cs indicates whether the comparisons are case sensitive.

- ptrdiff_t indexOf(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, CaseSensitive cs = CaseSensitive.yes); - Returns the index of the first occurrence of sub in s. If sub is not found, then -1 is returned. cs indicates whether the comparisons are case sensitive.

- ptrdiff_t lastIndexOf(Char)(const(Char)[] s, dchar c, CaseSensitive cs = CaseSensitive.yes); - Returns the index of the last occurrence of c in s. If c is not found, then -1 is returned. cs indicates whether the comparisons are case sensitive.
- ptrdiff_t lastIndexOf(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, CaseSensitive cs = CaseSensitive.yes); - Returns the index of the last occurrence of sub in s. If sub is not found, then -1 is returned. cs indicates whether the comparisons are case sensitive.

- pure nothrow auto representation(Char)(Char[] s); - Returns the representation of a string, which has the same type as the string except the character type is replaced by ubyte, ushort, or uint depending on the character width.

Example:

string s = "hello";
static assert(is(typeof(representation(s)) == immutable(ubyte)[]));
assert(representation(s) is cast(immutable(ubyte)[]) s);
assert(representation(s) == [0x68, 0x65, 0x6c, 0x6c, 0x6f]);

- pure @trusted S toLower(S)(S s); - Returns a string which is identical to s except that all of its characters are lowercase (in unicode, not just ASCII). If s does not have any uppercase characters, then s is returned.

- void toLowerInPlace(C)(ref C[] s); - Converts s to lowercase (in unicode, not just ASCII) in place. If s does not have any uppercase characters, then s is unaltered.

- pure @trusted S toUpper(S)(S s); - Returns a string which is identical to s except that all of its characters are uppercase (in unicode, not just ASCII). If s does not have any lowercase characters, then s is returned.

- void toUpperInPlace(C)(ref C[] s); - Converts s to uppercase (in unicode, not just ASCII) in place. If s does not have any lowercase characters, then s is unaltered.

- pure @trusted S capitalize(S)(S s); - Capitalizes the first character of s and converts the rest of s to lowercase.

- enum KeepTerminator; S[] splitLines(S)(S s, KeepTerminator keepTerm = KeepTerminator.no); - Splits s into an array of lines using '\r', '\n', "\r\n", std.uni.lineSep, and std.uni.paraSep as delimiters. If keepTerm is set to KeepTerminator.yes, then the delimiter is included in the strings returned.

- pure @safe C[] stripLeft(C)(C[] str); - Strips leading whitespace.
Examples:

assert(stripLeft(" hello world ") == "hello world ");
assert(stripLeft("\n\t\v\rhello world\n\t\v\r") == "hello world\n\t\v\r");
assert(stripLeft("hello world") == "hello world");
assert(stripLeft([lineSep] ~ "hello world" ~ lineSep) == "hello world" ~ [lineSep]);
assert(stripLeft([paraSep] ~ "hello world" ~ paraSep) == "hello world" ~ [paraSep]);

- C[] stripRight(C)(C[] str); - Strips trailing whitespace.

Examples:

assert(stripRight(" hello world ") == " hello world");
assert(stripRight("\n\t\v\rhello world\n\t\v\r") == "\n\t\v\rhello world");
assert(stripRight("hello world") == "hello world");
assert(stripRight([lineSep] ~ "hello world" ~ lineSep) == [lineSep] ~ "hello world");
assert(stripRight([paraSep] ~ "hello world" ~ paraSep) == [paraSep] ~ "hello world");

- C[] strip(C)(C[] str); - Strips both leading and trailing whitespace.

Examples:

assert(strip(" hello world ") == "hello world");
assert(strip("\n\t\v\rhello world\n\t\v\r") == "hello world");
assert(strip("hello world") == "hello world");
assert(strip([lineSep] ~ "hello world" ~ [lineSep]) == "hello world");
assert(strip([paraSep] ~ "hello world" ~ [paraSep]) == "hello world");

- C[] chomp(C)(C[] str); C1[] chomp(C1, C2)(C1[] str, const(C2)[] delimiter); - If str ends with delimiter, then str is returned without delimiter on its end. If str does not end with delimiter, then it is returned unchanged. If no delimiter is given, then one trailing '\r', '\n', "\r\n", std.uni.lineSep, or std.uni.paraSep is removed from the end of str. If str does not end with any of those characters, then it is returned unchanged.
  Examples:
    assert(chomp(" hello world \n\r") == " hello world \n");
    assert(chomp(" hello world \r\n") == " hello world ");
    assert(chomp(" hello world \n\n") == " hello world \n");
    assert(chomp(" hello world \n\n ") == " hello world \n\n ");
    assert(chomp(" hello world \n\n" ~ [lineSep]) == " hello world \n\n");
    assert(chomp(" hello world \n\n" ~ [paraSep]) == " hello world \n\n");
    assert(chomp(" hello world") == " hello world");
    assert(chomp("") == "");
    assert(chomp(" hello world", "orld") == " hello w");
    assert(chomp(" hello world", " he") == " hello world");
    assert(chomp("", "hello") == "");
- C1[] chompPrefix(C1, C2)(C1[] str, C2[] delimiter);
  If str starts with delimiter, then the part of str following delimiter is returned. If str does not start with delimiter, then it is returned unchanged.
  Examples:
    assert(chompPrefix("hello world", "he") == "llo world");
    assert(chompPrefix("hello world", "hello w") == "orld");
    assert(chompPrefix("hello world", " world") == "hello world");
    assert(chompPrefix("", "hello") == "");
- S chop(S)(S str);
  Returns str without its last character, if there is one. If str ends with "\r\n", then both are removed. If str is empty, then it is returned unchanged.
  Examples:
    assert(chop("hello world") == "hello worl");
    assert(chop("hello world\n") == "hello world");
    assert(chop("hello world\r") == "hello world");
    assert(chop("hello world\n\r") == "hello world\n");
    assert(chop("hello world\r\n") == "hello world");
    assert(chop("Walter Bright") == "Walter Brigh");
    assert(chop("") == "");
- @trusted S leftJustify(S)(S s, size_t width, dchar fillChar = ' ');
  Left justify s in a field width characters wide. fillChar is the character that will be used to fill up the space in the field that s doesn't fill.
- @trusted S rightJustify(S)(S s, size_t width, dchar fillChar = ' ');
  Right justify s in a field width characters wide. fillChar is the character that will be used to fill up the space in the field that s doesn't fill.
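The no-delimiter chomp behaviour shown in the examples above (exactly one trailing terminator removed, with "\r\n" counting as a single terminator) can be mimicked with a small Python sketch. This is only an illustration of the documented semantics, not the D implementation; the Unicode line/paragraph separators are omitted for brevity.

```python
def chomp(s, delimiter=None):
    """Remove one trailing delimiter from s; with no delimiter, remove one line terminator."""
    if delimiter is not None:
        if delimiter and s.endswith(delimiter):
            return s[:-len(delimiter)]
        return s
    if s.endswith("\r\n"):           # "\r\n" counts as a single terminator
        return s[:-2]
    if s.endswith(("\n", "\r")):     # otherwise strip exactly one '\n' or '\r'
        return s[:-1]
    return s
```

Note that chomp(" hello world \n\r") leaves " hello world \n": only the final '\r' is removed, because '\n' followed by '\r' is not a "\r\n" pair.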
- @trusted S center(S)(S s, size_t width, dchar fillChar = ' ');
  Center s in a field width characters wide. fillChar is the character that will be used to fill up the space in the field that s doesn't fill.
- pure @trusted S detab(S)(S s, size_t tabSize = 8);
  Replace each tab character in s with the number of spaces necessary to align the following character at the next tab stop where tabSize is the distance between tab stops.
- pure @trusted S entab(S)(S s, size_t tabSize = 8);
  Replaces spaces in s with the optimal number of tabs. All spaces and tabs at the end of a line are removed.
- @safe C1[] translate(C1, C2 = immutable(char))(C1[] str, dchar[dchar] transTable, const(C2)[] toRemove = null);
  @safe C1[] translate(C1, S, C2 = immutable(char))(C1[] str, S[dchar] transTable, const(C2)[] toRemove = null);
  Replaces the characters in str which are keys in transTable with their corresponding values in transTable. transTable is an AA where its keys are dchar and its values are either dchar or some type of string. Also, if toRemove is given, the characters in it are removed from str prior to translation. str itself is unaltered. A copy with the changes is returned.
  See Also: tr, std.array.replace
  Examples:
    dchar[dchar] transTable1 = ['e' : '5', 'o' : '7', '5': 'q'];
    assert(translate("hello world", transTable1) == "h5ll7 w7rld");
    assert(translate("hello world", transTable1, "low") == "h5 rd");
    string[dchar] transTable2 = ['e' : "5", 'o' : "orange"];
    assert(translate("hello world", transTable2) == "h5llorange worangerld");
- nothrow @trusted C[] translate(C = immutable(char))(in char[] str, in char[] transTable, in char[] toRemove = null);
  pure nothrow @trusted string makeTrans(in char[] from, in char[] to);
  This is an ASCII-only overload of translate. It will not work with Unicode. It exists as an optimization for the cases where Unicode processing is not necessary. Unlike the other overloads of translate, this one does not take an AA.
Rather, it takes a string generated by makeTrans. The array generated by makeTrans is 256 elements long such that the index is equal to the ASCII character being replaced and the value is equal to the character that it's being replaced with. Note that translate does not decode any of the characters, so you can actually pass it Extended ASCII characters if you want to (ASCII only actually uses 128 characters), but be warned that Extended ASCII characters are not valid Unicode and therefore will result in a UTFException being thrown from most other Phobos functions. Also, because no decoding occurs, it is possible to use this overload to translate ASCII characters within a proper UTF-8 string without altering the other, non-ASCII characters. It is only replacing a code unit greater than 127 with another code unit, or replacing a code unit with a code unit greater than 127, that will cause UTF validation issues.
  See Also: tr, std.array.replace
  Examples:
    auto transTable1 = makeTrans("eo5", "57q");
    assert(translate("hello world", transTable1) == "h5ll7 w7rld");
    assert(translate("hello world", transTable1, "low") == "h5 rd");
- string format(Char, Args...)(in Char[] fmt, Args args);
  Format arguments into a string. format's current implementation was replaced with xformat's implementation in November 2012. This is seamless for most code, but it makes it so that the only argument that can be a format string is the first one, so any code which used multiple format strings has broken. Please change your calls to format accordingly. e.g.:
    format("key = %s", key, ", value = %s", value)
  needs to be rewritten as:
    format("key = %s, value = %s", key, value)
- char[] sformat(Char, Args...)(char[] buf, in Char[] fmt, Args args);
  Format arguments into string buf, which must be large enough to hold the result. Throws RangeError if it is not.
  Returns: buf
  sformat's current implementation was replaced with xsformat's implementation in November 2012.
This is seamless for most code, but it makes it so that the only argument that can be a format string is the first one, so any code which used multiple format strings has broken. Please change your calls to sformat accordingly. e.g.:
  sformat(buf, "key = %s", key, ", value = %s", value)
needs to be rewritten as:
  sformat(buf, "key = %s, value = %s", key, value)
- string xformat(Char, Args...)(in Char[] fmt, Args args);
  Format arguments into a string. format has been changed to use this implementation in November 2012, and xformat has been scheduled for deprecation at the same time. It will be deprecated in May 2013.
- char[] xsformat(Char, Args...)(char[] buf, in Char[] fmt, Args args);
  Format arguments into string buf, which must be large enough to hold the result. Throws RangeError if it is not. sformat has been changed to use this implementation in November 2012, and xsformat has been scheduled for deprecation at the same time. It will be deprecated in May 2013.
  Returns: filled slice of buf
- bool inPattern(S)(dchar c, in S pattern);
  See if character c is in the pattern.
- bool inPattern(S)(dchar c, S[] patterns);
  See if character c is in the intersection of the patterns.
- size_t countchars(S, S1)(S s, in S1 pattern);
  Count characters in s that match pattern.
- S removechars(S)(S s, in S pattern);
  Return string that is s with all characters removed that match pattern.
- S squeeze(S)(S s, in S pattern = null);
  Return string where sequences of a character in s[] from pattern[] are replaced with a single instance of that character. If pattern is null, it defaults to all characters.
- S1 munch(S1, S2)(ref S1 s, S2 pattern);
  Finds the position pos of the first character in s that does not match pattern (in the terminology used by inPattern). Updates s = s[pos..$]. Returns the slice from the beginning of the original (before update) string up to, and excluding, pos.
  Example:
    string s = "123abc";
    string t = munch(s, "0123456789");
    assert(t == "123" && s == "abc");
    t = munch(s, "0123456789");
    assert(t == "" && s == "abc");
  The munch function is mostly convenient for skipping a certain category of characters (e.g. whitespace) when parsing strings. (In such cases, the return value is not used.)
- S succ(S)(S s);
  Return string that is the 'successor' to s[]. If the rightmost character is a-zA-Z0-9, it is incremented within its case or digits. If it generates a carry, the process is repeated with the one to its immediate left.
- C1[] tr(C1, C2, C3, C4 = immutable(char))(C1[] str, const(C2)[] from, const(C3)[] to, const(C4)[] modifiers = null);
  Replaces the characters in str which are in from with the corresponding characters in to and returns the resulting string. tr is based on Posix's tr, though it doesn't do everything that the Posix utility does.
  Modifiers: If the modifier 'd' is present, then the number of characters in to may be only 0 or 1. If the modifier 'd' is not present, and to is empty, then to is taken to be the same as from. If the modifier 'd' is not present, and to is shorter than from, then to is extended by replicating the last character in to. Both from and to may contain ranges using the '-' character (e.g. "a-d" is synonymous with "abcd"). Neither accepts a leading '^' as meaning the complement of the string (use the 'c' modifier for that).
- bool isNumeric(const(char)[] s, in bool bAllowSep = false);
  Checks whether the string s represents a number; bAllowSep indicates whether separator characters within the number are allowed.
- char[] soundex(const(char)[] string, char[] buffer = null);
  Computes the Soundex code of string, a four-character phonetic code.
- string[string] abbrev(string[] values);
  Construct an associative array consisting of all abbreviations that uniquely map to the strings in values.
  Example:
    string[] list = [ "food", "foxy" ];
    auto abbrevs = std.string.abbrev(list);
    foreach (key, value; abbrevs)
    {
        writefln("%s => %s", key, value);
    }
  produces the output:
    fox => foxy
    food => food
    foxy => foxy
    foo => food
- size_t column(S)(S str, size_t tabsize = 8);
  Compute column number after string if string starts in the leftmost column, which is numbered starting from 0.
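The munch example above translates readily into Python. Since Python strings are immutable, this sketch returns the consumed prefix and the remainder as a pair instead of updating its argument in place; the behaviour, not the D signature, is what is being illustrated.

```python
def munch(s, pattern):
    # Scan for the first character not in pattern, mimicking D's munch
    i = 0
    while i < len(s) and s[i] in pattern:
        i += 1
    return s[:i], s[i:]  # (consumed prefix, remaining input)
```

As with the D version, this is handy for skipping over a category of characters while parsing; on a second call with the same pattern it consumes nothing.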
- S wrap(S)(S s, size_t columns = 80, S firstindent = null, S indent = null, size_t tabsize = 8);
  Wraps text into a paragraph.
  Returns: The resulting paragraph.
- S outdent(S)(S str);
  S[] outdent(S)(S[] lines);
  Removes indentation from a multi-line string or an array of single-line strings. This uniformly outdents the text as much as possible. Whitespace-only lines are always converted to blank lines. A StringException will be thrown if inconsistent indentation prevents the input from being outdented. Works at compile-time.
  Example:
    writeln(q{
        import std.stdio;
        void main() {
            writeln("Hello");
        }
    }.outdent());
  Output:
    import std.stdio;
    void main() {
        writeln("Hello");
    }
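The uniform-outdent rule ("as much as possible", with whitespace-only lines becoming blank) can be approximated in Python. This sketch assumes space-only indentation and omits the inconsistent-indentation check that raises StringException in D; it is only a model of the documented behaviour.

```python
def outdent(text):
    lines = text.split("\n")
    nonblank = [l for l in lines if l.strip()]
    # Outdent by the smallest indentation found on any non-blank line
    margin = min(len(l) - len(l.lstrip(" ")) for l in nonblank) if nonblank else 0
    # Whitespace-only lines become blank lines
    return "\n".join("" if not l.strip() else l[margin:] for l in lines)
```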
http://dlang.org/std_string.html
Export Scala.js APIs to JavaScript

By default, Scala.js classes, objects, methods and properties are not available to JavaScript. Entities that have to be accessed from JavaScript must be annotated explicitly as exported, using @JSExportTopLevel and @JSExport.

A simple example

package example

import scala.scalajs.js.annotation._

@JSExportTopLevel("HelloWorld")
object HelloWorld {
  @JSExport
  def sayHello(): Unit = {
    println("Hello world!")
  }
}

This allows calling the sayHello() method of HelloWorld like this in JavaScript:

HelloWorld.sayHello();

The @JSExportTopLevel on HelloWorld exports the object HelloWorld itself in the JavaScript global scope. It is however not sufficient to allow JavaScript to call methods of HelloWorld. This is why we also have to export the method sayHello() with @JSExport.

In general, things that should be exported on the top-level, such as top-level objects and classes, are exported with @JSExportTopLevel, while things that should be exported as properties or methods in JavaScript are exported with @JSExport.

Exporting top-level objects

Put on a top-level object, the @JSExportTopLevel annotation exports that object to the JavaScript global scope. The name under which it is to be exported must be specified as an argument to @JSExportTopLevel.

@JSExportTopLevel("HelloWorld")
object HelloWorld {
  ...
}

exports the HelloWorld object in JavaScript.

Exporting classes

The @JSExportTopLevel annotation can also be used to export Scala.js classes to JavaScript (but not traits), or, to be more precise, their constructors. This allows JavaScript code to create instances of the class.

@JSExportTopLevel("Foo")
class Foo(val x: Int) {
  override def toString(): String = s"Foo($x)"
}

exposes Foo as a constructor function to JavaScript:

var foo = new Foo(3);
console.log(foo.toString());

will log the string "Foo(3)" to the console. This particular example works because it calls toString(), which is always exported to JavaScript.
Other methods must be exported explicitly as shown in the next section.

Exports with modules

When emitting a module for Scala.js code, top-level exports are not sent to the JavaScript global scope. Instead, they are genuinely exported from the module. In that case, an @JSExportTopLevel annotation has the semantics of an ECMAScript 2015 export. For example:

@JSExportTopLevel("Bar")
class Foo(val x: Int)

is semantically equivalent to this JavaScript export:

export { Foo as Bar };

Exporting methods

Similarly to objects, methods of Scala classes, traits and objects can be exported with @JSExport. Unlike for @JSExportTopLevel, the name argument is optional for @JSExport, and defaults to the Scala name of the method.

class Foo(val x: Int) {
  @JSExport
  def square(): Int = x*x // note the (), omitting them has a different behavior
  @JSExport("foobar")
  def add(y: Int): Int = x+y
}

Given this definition, and some variable foo holding an instance of Foo, you can call:

console.log(foo.square());
console.log(foo.foobar(5));
// console.log(foo.add(3)); // TypeError, add is not a member of foo

Overloading

Several methods can be exported with the same JavaScript name (either because they have the same name in Scala, or because they have the same explicit JavaScript name as parameter of @JSExport). In that case, run-time overload resolution will decide which method to call depending on the number and run-time types of arguments passed to the method.
For example, given these definitions:

class Foo(val x: Int) {
  @JSExport
  def foobar(): Int = x
  @JSExport
  def foobar(y: Int): Int = x+y
  @JSExport("foobar")
  def bar(b: Boolean): Int = if (b) 0 else x
}

the following calls will dispatch to each of the three methods:

console.log(foo.foobar());
console.log(foo.foobar(5));
console.log(foo.foobar(false));

If the Scala.js compiler cannot produce a dispatching code capable of reliably disambiguating overloads, it will issue a compile error (with a somewhat cryptic message):

class Foo(val x: Int) {
  @JSExport
  def foobar(): Int = x
  @JSExport
  def foobar(y: Int): Int = x+y
  @JSExport("foobar")
  def bar(i: Int): Int = if (i == 0) 0 else x
}

gives:

[error] HelloWorld.scala:16: double definition:
[error] method $js$exported$meth$foobar:(i: Int)Any and
[error] method $js$exported$meth$foobar:(y: Int)Any at line 14
[error] have same type
[error] @JSExport("foobar")
[error] ^
[error] one error found

Hint to recognize this error: the methods are named $js$exported$meth$ followed by the JavaScript export name.

Exporting top-level methods

While an @JSExported method inside an @JSExportTopLevel object allows JavaScript code to call a “static” method, it does not feel like a top-level function from JavaScript’s point of view. @JSExportTopLevel can also be used directly on a method of a top-level object, which exports the method as a truly top-level function:

object A {
  @JSExportTopLevel("foo")
  def foo(x: Int): Int = x + 1
}

can be called from JavaScript as:

const y = foo(5);

Exporting properties

vals, vars and defs without parentheses, as well as defs whose name ends with _= (which must have a single argument and Unit result type), are exported to JavaScript as properties with getters and/or setters using, again, the @JSExport annotation.
Given this weird definition of a halfway mutable point:

@JSExport
class Point(_x: Double, _y: Double) {
  @JSExport
  val x: Double = _x
  @JSExport
  var y: Double = _y
  @JSExport
  def abs: Double = Math.sqrt(x*x + y*y)
  @JSExport
  def sum: Double = x + y
  @JSExport
  def sum_=(v: Double): Unit = y = v - x
}

JavaScript code can use the properties as follows:

var point = new Point(4, 10)
console.log(point.x);   // 4
console.log(point.y);   // 10
point.y = 20;
console.log(point.y);   // 20
point.x = 1;            // does nothing, thanks JS semantics
console.log(point.x);   // still 4
console.log(point.abs); // 20.396078054371138
console.log(point.sum); // 24
point.sum = 30;
console.log(point.sum); // 30
console.log(point.y);   // 26

As usual, explicit names can be given to @JSExport. For def setters, the JS name must be specified without the trailing _=. def setters must have a result type of Unit and exactly one parameter. Note that several def setters with different types for their argument can be exported under a single, overloaded JavaScript name. In case you overload properties in a way the compiler cannot disambiguate, the methods in the error messages will be prefixed by $js$exported$prop$.
Export fields directly declared in constructors

You can export fields directly declared in constructors by annotating the constructor argument:

class Point(
  @JSExport val x: Double,
  @JSExport val y: Double)

// Also applies to case classes
case class Point(
  @JSExport x: Double,
  @JSExport y: Double)

Export fields to the top level

Similarly to methods, fields (vals and vars) of top-level objects can be exported as top-level variables using @JSExportTopLevel:

object Foo {
  @JSExportTopLevel("bar")
  val bar = 42
  @JSExportTopLevel("foobar")
  var foobar = "hello"
}

exports bar and foobar to the top-level, so that they can be used from JavaScript as

console.log(bar);    // 42
console.log(foobar); // "hello"

Note that for vars, the JavaScript binding is read-only, i.e., JavaScript code cannot assign a new value to an exported var. However, if Scala.js code sets Foo.foobar, the new value will be visible from JavaScript. This is consistent with exporting a let binding in ECMAScript 2015 modules.

Automatically export all members

Instead of writing @JSExport on every member of a class or object, you may use the @JSExportAll annotation. It is equivalent to adding @JSExport on every public (term) member directly declared in the class/object:

class A {
  def mul(x: Int, y: Int): Int = x * y
}

@JSExportAll
class B(val a: Int) extends A {
  def sum(x: Int, y: Int): Int = x + y
}

This is strictly equivalent to writing:

class A {
  def mul(x: Int, y: Int): Int = x * y
}

class B(@(JSExport @field) val a: Int) extends A {
  @JSExport
  def sum(x: Int, y: Int): Int = x + y
}

It is important to note that this does not export inherited members. If you wish to do so, you’ll have to override them explicitly:

class A {
  def mul(x: Int, y: Int): Int = x * y
}

@JSExportAll
class B(val a: Int) extends A {
  override def mul(x: Int, y: Int): Int = super.mul(x,y)
  def sum(x: Int, y: Int): Int = x + y
}
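The run-time overload resolution described in the Overloading section can be pictured with a Python sketch. The names mirror the Foo example from that section, but the dispatch code itself is hypothetical; it just illustrates resolution by arity and run-time type, including the kind of ordering constraint (boolean before integer) that the generated dispatcher has to respect.

```python
def foobar(x, *args):
    # Dispatch on arity and runtime type, like the exported 'foobar' overloads
    if len(args) == 0:
        return x                    # foobar(): Int = x
    (a,) = args
    if isinstance(a, bool):         # check bool first: bool is a subclass of int in Python
        return 0 if a else x        # bar(b: Boolean): Int = if (b) 0 else x
    if isinstance(a, int):
        return x + a                # foobar(y: Int): Int = x + y
    raise TypeError("no matching overload")
```

When two overloads cannot be told apart by arity or type (the two Int overloads in the error example above), no such dispatcher can exist, which is exactly why the compiler rejects that case.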
http://www.scala-js.org/doc/interoperability/export-to-javascript.html
Type: Posts; User: raju2001006 Thanks a lot for your comments. As I hardly have info on "double-hop issue" - let me dig a bit and I surely will keep this post updated. Or, if you post any basic info on this, it would be really... I have a VBScript using which I was trying to connect to a server and try to run an EXE - this executable internally is connecting to a SQL Server. When I run this from my local computer it failed... these are really good examples... thanks... Technically speaking, s[3][2] is not illegal - it's just not good practice to use such invalid array counters. You can actually assign something in s[3][2]; - but the Stack around s may get... Yes, exactly. The code will not get compiled. Now, if you define 'r' and 'c' outside the first for loop, at this point it shouldn't show any specific values - cout << s[r][c] ; coz, at this... If you still don't find any answers, I think, it's the time you can start debugging your code - debug it line by line and check every data against every variable to make sure this is what exactly you... That's really a good way to do - "How can I set up my class so it won't be inherited from?" Well, I also never tried in this way, but what I suspect is that - - because of the virtual private... I think the issue here is with the - fin.open(donefile.c_str(), ifstream::in); line. According to your code, you have assigned the first argument as - tfile.assign(argv[1]); So, tfile contains... Yes, you are right - gcc/g++ -fdump-class-hierarchy source_file.cpp As you just want to delete the last row, you can just point your last Row to the Second Last Row - as you have done in your code - // (2) remove the last row StringGrid1->RowCount =... Yes, we can take data alignment as our source of issue here. But, if we look at the points Rajesh1978 has mentioned here - - it's a linux g++ compiler - it's 32bit compiler linux g++ compiler... 
As I was looking into your code, I found this line a bit strange -

circles = (int)( *angle / ( 2 * PI ) );

As I was debugging, the data comes as -

radians = temp * ( 45.0 * PI / 180.0 );...

It was just an example, you could have modified it according to your need; anyway, I have deleted some part of the code to deal with the exact requirement you have here - "I took only the first...

This is your homework - so why don't you do it yourself? Anyway, just some hints for you to carry on (Q 2) -

Use -

#include <iostream>
#include <fstream>

sample code to read files -

Yes you can do that using similar code to this -

int k = 0;
for (int rowint = 0; rowint < StringGrid1->RowCount; rowint++)
{
    for (int colint = 0; colint < StringGrid1->ColCount; colint++)
    {...
http://forums.codeguru.com/search.php?s=6bffeebfde8e937a9c9e14f297864c4f&searchid=5799311
RIF PRD: Presentation syntax issues

Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammar) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard.

The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows:

· Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (‘the If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parenthesis, this leads to orphanage of UnknownStatusRule from the entire Document.

· Invalid use of parenthesis in ‘Forall’ constructs. Parenthesis should not be used to enclose formulae. Removal of the invalid parenthesis gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parenthesis is not actually inconsistent in these two contexts, but in an If construct it ‘feels’ as if you are enclosing formulae in parenthesis in a LISP-like fashion. In reality, the parenthesis is simply being used to group subordinate syntax elements.
The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency.

· Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions:

CURIE         ::= PNAME_LN | PNAME_NS
PNAME_LN      ::= PNAME_NS PN_LOCAL
PNAME_NS      ::= PN_PREFIX? ':'
PN_LOCAL      ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)?
PN_CHARS      ::= PN_CHARS_U | '-' | [0-9] | #x00B7 | [#x0300-#x036F] | [#x203F-#x2040]
PN_CHARS_U    ::= PN_CHARS_BASE | '_'
PN_CHARS_BASE ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6] | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x0]
PN_PREFIX     ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)?

The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! However, on (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch! This is so completely ambiguous that it surely calls into question the whole CURIE specification. In any case, RIF does not allow local names without a prefix.

· Missing ‘External’ specifiers for built-in functions and predicates. The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once on the predicate in UnknownStatusRule. External() is required in several other places.

· The List used on the LHS of UnknownStatusRule is comma-delimited.
This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited.

· Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.

All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kiefer stated in his paper “Rule Interchange Format: The Framework” that:

“The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.”

· The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.

· The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’). One way to fix this is to amend the PN_LOCAL production as follows:

PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))?

However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable.

· I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive.

· The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespace issues.
The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE. Of course, I am being a little nit-picking about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars. I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc. A general observation, which repeats a point made above, is that the use of parenthesis in the presentation syntax can feel inconsistent and un-intuitive. It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parenthesis, to delimit subordinate syntax elements in a similar way to so many programming languages. The familiarity of braces would communicate the structure of the syntax more clearly to people like me. If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. 
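The ‘status->’ tokenisation ambiguity noted earlier is easy to reproduce with a toy maximal-munch tokeniser: a name token that admits ‘-’ (as PN_LOCAL does) greedily swallows the hyphen of ‘->’. This is purely an illustration with a made-up regex, not the RIF grammar itself.

```python
import re

# Toy PN_LOCAL-like token: a leading letter/underscore, then letters,
# digits, '_', '.', and (crucially) '-' allowed in the tail
NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_\-.]*")

def first_token(src):
    # Longest-match (maximal munch) name token, else a single character
    m = NAME.match(src)
    return m.group() if m else src[:1]
```

Here first_token("status->") yields "status-", leaving only ">" for the arrow; excluding ‘-’ from the final tail character, as the amended production above does, restores "status".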
Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax:

Document( Prefix( ex1 <> ) (*]) (?voucher ?customer[ex1:voucher->?voucher]) Retract(?customer[ex1:voucher->?voucher]) Retract("])))) ) )

I hope that helps someone out there :-)

posted on Wednesday, February 9, 2011 9:26 PM

Feedback

# re: RIF PRD: Presentation syntax issues
4/19/2012 4:56 PM, Charles Young
Revisiting this many months later during development of LINQ to Rules, I can say it helped... me!
http://geekswithblogs.net/cyoung/archive/2011/02/09/rif-prd-presentation-syntax-issues.aspx
Hi. I am trying to make a caesar cypher. For those that don't know what it is: It takes a string and an integer and moves each letter in the string along by that integer. So say the string was ABCD and the shift was 1, then the new message would be BCDE. Ya dig? Anyway, I've almost got it complete, but there are a few bugs which I can't seem to track down:

Caesar.c:

Code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

char shift(char *str, char c, int shiftnum, int length);

int main(void)
{
    char stringtoshift[100];
    char shiftvalue[10];
    char *s = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    int len = strlen(s); /* = 62 */

    printf("Enter string: ");
    fgets(stringtoshift, 100, stdin);
    int length = strlen(stringtoshift);

    printf("Now enter the shift value: ");
    fgets(shiftvalue, 10, stdin);
    int sh = atoi(shiftvalue) % len;
    if (sh < 0)
        sh += strlen(s); /* if shift is -1 for eg, it's the same as shift being 61 */

    int j;
    for (j = 0; j < length; j++) {
        if (stringtoshift[j] == ' ')
            ; /* skip spaces */
        else
            stringtoshift[j] = shift(s, stringtoshift[j], sh, len);
        printf("%c", stringtoshift[j]);
    }
    stringtoshift[j] = '\0';
    printf("\n");
    return 0;
}

char shift(char *str, char c, int shiftnum, int length)
{
    int i;
    for (i = 0; *str != c; str++)
        i++; /* i is the position we are currently in the string str */
    if (i + shiftnum >= length) {
        /* if you are on Y (position 51) for example and shift is 23, then it will
           spill over (a-z = 26, A-Z = 26, 0-9 = 10, total = 62)
           (62-51=11, 23-11=12) so we have to move 12 places from 'a' */
        shiftnum += i - length; /* 23 - (62-51) = 23 + 51 - 62 */
        for (; *str != 'a'; str--)
            ; /* reset (go back to beginning of s) */
    }
    for (; --shiftnum >= 0;) /* same as for(k=0;k<shiftnum;k++) */
        str++;
    return *str;
}

Running:

Code:
$ gcc -o caesar caesar.c
$ ./caesar
Enter string: HELLO THERE
Now enter the shift value: -1
GDKKN SGDQD�
$ ./caesar
Enter string: hello
Now enter the shift value: -8
96ddgo

These are just a few examples of unexpected output. If you could, try playing about with it yourself to see what is wrong, I would be very grateful. Thanks
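For what it's worth, the stray trailing character in both runs is consistent with the '\n' that fgets() leaves in stringtoshift: it is neither a space nor present in s, so shift()'s search loop walks past the end of the alphabet. The intended wrap-around arithmetic can be sketched in Python using the same 62-character alphabet, with Python's modulo handling negative shifts directly:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def caesar(text, shift):
    n = len(ALPHABET)  # 62
    out = []
    for c in text:
        i = ALPHABET.find(c)
        # leave spaces (and anything else not in the alphabet, e.g. a trailing
        # newline) alone instead of searching past the end of the alphabet
        out.append(c if i < 0 else ALPHABET[(i + shift) % n])
    return "".join(out)
```

With this, "HELLO THERE" at shift -1 gives "GDKKN SGDQD" and "hello" at -8 gives "96ddg", i.e. the outputs above minus the garbage character.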
http://cboard.cprogramming.com/c-programming/91273-caesar-cypher-printable-thread.html
Hi all,

objects of the type GdalRasterLayer currently offer only 'TransparentColor' to get a transparent raster image. If assigning 'Color.Transparent' the image will be set to 100% transparency (except the image's background - that remains black). What I'm missing is the possibility to assign a reduced rate of transparency. Is that an outstanding issue or did I oversee something?

Regards, Martin

Martin,

Transparency can be easily added. In GdalRasterLayer.cs do the following:

Add this to the global variables:

    protected float transparency = 1.0f;

Add this to the accessors:

    public override float Transparency
    {
        get { return (int)((1 - transparency) * 100); }
        set { transparency = (float)(1.0 - ((double)value / 100.0)); }
    }

And replace the following in GetPreview() and GetNonRotatedPreview():

    g.DrawImage(bitmap, new Point(bitmapTL.X, bitmapTL.Y));

with

    // apply transparency
    float[][] pnt = new float[][] {
        new float[] {1, 0, 0, 0, 0},
        new float[] {0, 1, 0, 0, 0},
        new float[] {0, 0, 1, 0, 0},
        new float[] {0, 0, 0, transparency, 0},
        new float[] {0, 0, 0, 0, 1}};
    ColorMatrix clm = new ColorMatrix(pnt);
    ImageAttributes att = new ImageAttributes();
    att.SetColorMatrix(clm, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);

    // for GetNonRotatedPreview()
    g.DrawImage(bitmap,
        new Rectangle((int)dblLocX, (int)dblLocY, bitmap.Size.Width, bitmap.Size.Height),
        0, 0, bitmap.Width, bitmap.Height, GraphicsUnit.Pixel, att);

    // for GetPreview()
    g.DrawImage(bitmap,
        new Rectangle(bitmapTL.X, bitmapTL.Y, bitmap.Size.Width, bitmap.Size.Height),
        0, 0, bitmap.Width, bitmap.Height, GraphicsUnit.Pixel, att);

Then you can set GdalRasterLayer.Transparency to a value between 0 and 100 which defines the transparency amount. Hopefully you can make sense of this.

Dan

Hi Dan,

it's running well - thank you very much. The value has to be set as a per cent value (1 - 100).

Regards, Martin
http://sharpmap.codeplex.com/discussions/56900
My travels with WDF
The continuing story of a boy, his dog and their discovery of the world outside... of WDM.
Evolution Platform Developer Build (Build: 5.6.50428.7875) — 2009-04-15T17:30:00Z

Where have you been?!

Well, […] :)

[…]

Not that the blog has been really […]!

Thanks everybody! And keep writing those drivers!

-patrick

Now that you've had the time, let's hear the feedback.

I hope those of you who are using the new Visual Studio 2011 with the WDK integration are having some fun, even if it's just been wandering around and playing. What, if anything, are you not liking about it? Anything that's unintuitive or confusing? What do you like about it, but would like to maybe see tweaked?

Now that I finally have my VS 2010 plug-ins fixed to work with VS 2011 (real subtle changes, as these two plug-ins are wholly internal tools), I have been using it exclusively for the last few weeks. In particular, how have the KMDF / UMDF templates been working out?

So, either use the "send feedback" option or leave a comment – I'll post them so long as you're not screaming at me *g*. I have got a direct line of access to the team(s) involved in these projects, so your feedback won't go off in to the vapor. :)

Now listening to: Caravan by Rush

[…], you can kind of see why we've been quiet around here…

If you're watching what's going on at //BUILD (and why wouldn't you be?), you may have seen that UMDF now supports hardware access (via ports and registers) and interrupt handling! If you haven't seen it, you can go read more here.

And the other big one. Something I've been begging to get done for years, and believe me, in all my years at Microsoft it's been one of, if not the most, commonly requested things I've heard at all the WinHECs, Developer Conferences, Trade Association meetings, etc. from you guys – Visual Studio integration of the WDK. Even down to templates for UMDF and KMDF drivers! You know what that means: native Intellisense (not that we haven't figured out how to do that already), but more importantly, the one thing I couldn't give you (partly because I couldn't get the plug-in to work across all VS editions) – integrated WDK build support. :D

There are still a couple of other very cool things in the UMDF / KMDF space we'll get to start talking about pretty soon, and for sure I'll start digging more in to these two as we move closer to the Windows 8 / WDF 1.11 release timeline.

So the blog will be spooling back up with some fun posts in the coming months.

Let me know if you have any questions or comments!

-p

The sounds of silence.

Greetings one and all (who may be the only one left reading at this point *g*). Some cool things are happening here, which is why I've been heads down working. And I can't tell you about them just yet – pretty soon though. But one of the cool things I can tell you about, if you haven't seen it already – the Kinect SDK has been released.
And I'm very proud to say – it's using a KMDF driver.

[…]

We'll be churning up the blog posts again, so stay tuned for more tidbits and tales from the driver side.

Thanks again everybody – and keep firing questions my way.

The WDF book is back in print…

[…]

We now return you to your normally scheduled programming. :)

Hope you all had a great holiday season and safe New Years!

How active are you?

Was pondering a few questions the last few days.

1. How many of you UMDF writers use ATL in your drivers?
2. If you do, do you have more of a kernel driver background or a user mode background?

Since UMDF works on an object model very similar to COM, the ATL does provide a lot of useful object lifetime management functions and interface implementation (BEGIN_COM_MAP / COM_INTERFACE_ENTRY). […]

Currently playing: nothing

[…] inquiring minds want to know

A follow up question was posed from my last post about the WDF book no longer being in print, and I thought it would be better to do a post so it will show up in RSS feeds;

Q: Any idea what triggered this? I realize that our technology area doesn't sell as much as others, but seems strange to pull the only book on the topic.

Inquiring minds...

-scott

A: We're digging around to find an answer for you all. The digital versions are still available though, so all hope is not lost.

[…] up!

Sharing a cross post that is of interest to you WDF users; […]

Time keeps on slippin'

Greetings one and all! Been heads down on some work around here and haven't been able to post an update in a while.

[…]

One of the things we are always using in office and test labs are the Object Tracking and Reference History tracking features in UMDF. […] ;)

One easy way to get that extra coverage for negative path testing is to use Application Verifier with its fault injection engine running. […]

I know there are some good questions out there, so please feel free to speak up! I love hearing from you all with good case studies or blog fodder.

Now Playing – Charlie Parker Easy to Love

A funny thing happened on the way to the keyboard

So as I was crafting some UMDF HID driver code for your consumption, and I was working with one of you (and you know who you are *g*) during some of that time frame, we sort of discovered everything already exists between my earlier blog posts and some of the WDK samples.
So rather than reinvent the wheel and the internal combustion engine, let's just dump all that information in a blog post.

First up, I touched upon secure HID stacks and how to get your UMDF HID Collection Filter driver working in them, and a little on how to use impersonation. And in retrospect, the 30 second version and the little technical extension in that post are really all that was / is required for getting a basic driver up and running. Make a note that not all HID stacks require a secure connection, so you kind of have to do some debugging or digging around to determine those requirements for your own driver. Sorry. :)

The second part was to alleviate the issues with Windows session spaces in order to access some Win32 desktop APIs. As one of the driving forces to craft a HID Collection filter in UMDF is to have access to those APIs, that session work around is vital.

The last part, required for a secure HID stack, is a proper implementation of the IImpersonateCallback::OnImpersonate method and its invocation from the IWDFIoRequest::Impersonate method. Turns out one of the existing samples available in the WDK has a really good implementation: src\usb\osrusbfx2\umdf\fx2_driver\impersonation\

Put all that together and you can easily craft a HID collection filter driver for the majority of HID devices. Remember though, UMDF cannot be used to filter keyboard and mouse collections. But you can go crazy writing drivers for a load of other HID devices.

As always, feel free to ask me questions if you run in to any road blocks.

Now Playing – Rush Show Don't Tell

0 and 1 are not just numbers, they're spaces!

Again, thanks to our intrepid explorer Ilia S. for helping uncover some more traps in the UMDF HID Collection Filter journey.

Two things to keep in mind as you're creating your driver:

1. UMDF drivers are hosted in a Session 0 based executable on Vista and above.

2. A lot of Win32 APIs for controlling desktop and user interactions on the desktop are exclusive to user sessions (anything 1 and above), because Session 0 technically doesn't have a desktop and is intended to be a protected space.

So those of you who are wanting / trying to get HID Collection Filter drivers running in UMDF so you can do some magic on the desktop will need to do some of the following:

A Session 0 overview for you with some tips, hints and tricks on how to "exit out" to user desktop sessions: […]

But focus more on this portion – leverage Windows 7, Windows Vista and Windows Server 2008 capability:

- Use client or server mechanisms such as remote procedure call (RPC) or named pipes to communicate between services and applications.
- Use the WTSSendMessage function to create a simple message box on the user's desktop. This allows the service to give the user a notification and request a simple response.
- For more complex UI, use the CreateProcessAsUser function to create a process in the user's session.
- Explicitly choose either the Local\ or Global\ namespace for any named objects, such as events or mapped memory, that the service makes available.

And a good tutorial on how to make an application using those patterns (this does require the Platform SDK): […]

And finally for today, I am still working on getting you some sample code for a UMDF HID Collection Filter driver, but life decided it needed some attention last week so I'm a tad behind on that.
:)

And a thank you to one of our developers, Kumar, for pointing me to those Session 0 links and tips!

Now playing – Transatlantic The Whirlwind (yes, I do love my prog rock)

How to avoid getting a HID to the head (a guide to making a UMDF HID collection filter)

First of all, HUGE thanks to Ilia S. for helping to track down this little trap. I'm glad we finally got your driver up and running! Nothing like having an 8 hour time difference to slow things down. :)

For those of you who like the 30 second version:

If you need to use Impersonation in a UMDF driver, regardless of being a filter or a function driver, you cannot have AutoForwardCreateCleanupClose set to WdfTrue. For filter drivers this means you must invoke AutoForwardCreateCleanupClose with WdfFalse AFTER you call SetFilter and BEFORE you call CreateDevice, and then follow all the rules for balancing Create and Close and handling Impersonation (same link as above for impersonation). Easy enough? :)

For those of you who like long winded technical posts, I am going to do a write up on how to get a UMDF driver to sit on top of a HID collection.

And for those of you who like to work ahead and start digging around on your own: you first need to match your INF section for HWID to HID\VID_<nnnn>&PID_<nnnn>&<XX>_<xx>&&Col<NN>. Second, install WUDFRd as an upper filter to that HWID. Now, if your collection has enforced secure read, you're going to need that little tip above.

I'll work on getting the first part of the write up done this week and posted by early next week. I need to do a little code work on a sample for you faithful readers. *thumbs up*

As always, fire off any questions you have!

Now Playing – Stone Temple Pilots Trippin' on a hole in a paper heart

It's time to party!

Fresh from the oven, Windows Driver Kit (WDK) 7.1.0 is ready for your consumption!

You can get it from the Download Center: […]

Or off Microsoft Connect (remember, you have to be signed in to your connect account): […]

Now, go forth and code!

It's a bird, it's a plane..

Just wanted to make sure you all saw this great guest post on Doron's blog from Jake Oshins. […]

That's all for today.

[…]

There have been a couple of asks recently, in various forums, on how to build drivers using Visual Studio. I thought, since I had shown you how to make better use of Visual Studio as an IDE for driver writing, I'd better share the last yard of how people are integrating the WDK build environment(s) in to Visual Studio.

For those of you who don't already know about or already use them, there are two commonly used tools to integrate the WDK build environment in to Visual Studio. Please note that these are not Microsoft supported solutions by any means, and any support questions should be directed towards the tool owners, not me. I don't use these tools internally. I have some other voodoo that I use. :)

[…]

Speaking of Visual Studio, I've been using VS 2010 for the last few weeks and there is one SWEET addition I have to share with the group (again, some of you probably know about it already) –

You can undock source tabs and float them across your desktop(s)!
That makes for a happy Patrick when I can span code modules across my two 24" monitors all from one IDE. *g* All of that awesomeness goes along with some nice performance increases in the IDE and the pretty new blue UI. I'm sure there's some other big ticket stuff the Visual Studio Team is just as jazzed about with the 2010 edition. ;)

But don't just take my word for it, go give it a spin for yourself: […]

Now Playing – Transatlantic My New World

It's okay to assert yourself, just be careful how forcefully you do it. (Op. Ed.)

We […]

[…]

Both sides of the debate sound reasonable no matter what your preference for using ASSERTs. And I'd agree with you 50% of the time on either side. :)

Now, there are two extra data points that may or may not sway you to either side of this discussion, and these lend themselves to reaffirming that there is something to break every rule:

1. […] :)
2. […]

[…]

See, there is no such thing as a "failed experiment" in science. ;)

Now Playing – Jonathan Coulton Skullcrusher Mountain

[…] here you thought I had a cloaking device

Got a good one to share with the group;

Q: Can I make a kernel mode driver that opens a handle and talks to a UMDF based driver in another device stack?

A: Why yes you can! Quite simply done, provided you follow all the rules. This is a variant of the UMDF initiated cross stack communication we talked about a while ago, but KMDF, being a DF (driver framework), supports essentially the same IoTarget model that UMDF does.

Now Playing – Rush Far Cry

You're gonna need a bigger stick!

So, some of you may recognize Eliyas' name from WinHECs and various other driver dev presentations, but guess what he's done now?! He's become a blogger.

Go on, feel free to hound him about your USB driver problems. :)

[…] Queues and You

I got an interesting question via mail recently: "Do I really need to create a queue object in my UMDF driver?" Well, this is another one of those "only if" type questions.

For instance – only if your driver is not handling any I/O from a top edge method which results in the I/O manager being involved in talking to the UMDF driver for those operations. At that point you don't need to create a queue object.

Example – […] you can simply create and format a request, and then send that request to the target.

Should you want to send requests asynchronously and need a completion callback, you can still do so without requiring a queue. The driver will have to implement an IRequestCallbackRequestCompletion::OnCompletion method. And during the packing of the request to submit, invoke the request's SetCompletionCallback prior to invoking the FormatRequestxxx method.

Once the lower driver has completed the request, the OnCompletion method will be invoked.
Voilà.

There's one little caveat around UMDF completion routines that I talked about a while ago and I should bring it up here: you can only define one OnCompletion method per device object.

Cross stack communications

The subject of how to talk to another device stack has come up again, and since I only briefly touched on it a year ago, […].

So the basic building blocks are as follows;

    //
    // Add these to your base class
    //
    IWDFIoTarget * m_ExternalTarget;
    HANDLE         m_ExternalHandle;

[…]

[…]

    IWDFFileHandleTargetFactory * pFileHandleTargetFactory = NULL;

    // .......
    // abstracted device init code for brevity
    // ........

    if (SUCCEEDED (hr))
    {
        m_FxDevice = fxDevice;
        //
        // We can release the reference as the lifespan is tied to the
        // framework object.
        //
        fxDevice->Release();
    }

    // Open the device and get the handle.
    m_ExternalHandle = CreateFile (
        DeviceStack,                         // path of device stack to open
        GENERIC_READ | GENERIC_WRITE,        // these flags are driven more by the target stack.
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_OVERLAPPED,                // You must open the handle with this flag
        NULL);

    if (INVALID_HANDLE_VALUE == m_ExternalHandle)
    {
        DWORD err = GetLastError();
        TraceEvents(
            TRACE_LEVEL_ERROR,
            TEST_TRACE_DEVICE,
            "%!FUNC! Cannot open handle to device %!winerr!",
            err);
        hr = HRESULT_FROM_WIN32(err);
    }

    if (SUCCEEDED(hr))
    {
        hr = m_FxDevice->QueryInterface(IID_PPV_ARGS(&pFileHandleTargetFactory));
        if (FAILED(hr))
        {
            TraceEvents(
                TRACE_LEVEL_ERROR,
                TEST_TRACE_DEVICE,
                L"ERROR: Unable to obtain target factory for creating FileHandle based I/O target %!hresult!",
                hr);
        }
    }

    if (SUCCEEDED(hr))
    {
        hr = pFileHandleTargetFactory->CreateFileHandleTarget(m_ExternalHandle, &m_ExternalTarget);
    }

You'll need to release the reference to the pFileHandleTargetFactory object, either by a SAFE_RELEASE macro, a direct call to the Release() method, or by using a CComPtr / CComQIPtr class wrapper on the object on declaration. You'll also need to clean up the Windows file handle when it is no longer needed. And IF the IWDFIoTarget object's lifetime scope is narrower than the device object's lifetime, you will need to call the DeleteWdfObject method on m_ExternalTarget.

At this point, you have your remote target object to use in sending I/O to that secondary device stack.

What I'll cover next is going to be some hefty theory on when you should and don't need to use queue objects, and how to send I/O to an external stack based on whether you have a queue or not. Fun! ;)

[…] cleaning

The fourth part of Abhishek's online UMDF debugging tutorials is up now. :) […]

Next up, those of you playing with Windows 7 may have noticed that the "Found New Hardware" wizard is gone! This in itself is not that big a deal, but it does mean that some of the WDK sample installation instructions are no longer valid.
Simple fix at the end of the day: the device will come up as an "Unknown Device" within Device Manager; choose "Update Driver" from the right click context menu for that device and then follow the directions to point it to the driver(s) you need!

That's all for today. Those of you in the U.S. enjoy your 4th of July weekend. And just remember, light fuse then run away.

*Currently Playing - Dream Theater Light Fuse and Get Away

[…], the Musical part 3

Like all good stories, sometimes the 2nd act is the hardest to follow. In this case it was simply a matter of other things coming up rather than writer's block. :)

What kicked me in the rump to get this next part up was a few more questions about driver writing, WDK integration into Visual Studio and Intellisense. One of the very cool features of Visual Studio 2008 is the ability to make a project from existing code.

The end result can look something like this (warning, this is a 1920x1200 resolution PNG *g*)

[screenshot: teaser]

I'll walk you through this using the "a picture is worth a thousand words" mentality;

[screenshot: Part1]

[screenshot: part2]

Select "Next"

Then point it to the root of the WDK installation. Pointing it to the root rather than right at the src folder will save you some Intellisense headaches later on; sure, you'll get inclusion of all the WDK folders, but it's actually not a bad thing in the end.

[screenshot: part3]

Select "Next"

Then;

[screenshot: part4]

Select "Finish" from here. We're still not going to use Visual Studio to do any building for now, partly because I refuse to give up all command windows in my life and also because I'm still tinkering with all the settings required to use the WDK build environments under Visual Studio.

Now your Solution Explorer will probably look like a mess of files listed alphabetically. Simply hover over the "Show All Files" button at the top and toggle it until you get an Explorer like layout of the samples sources structure.

[screenshot: part6]

Now you have a pointer to the first picture in the blog post. :)

As a side note to any Microsoft Employees who may actually read my blog – Yes, you can do the same thing with our Source Control Enlistments (I have a VS project for each branch I work out of). And if you have one of the Source Control plug-ins for Visual Studio, you can also have all the check in / check out / history that our source control provides.

[…] Debugging talks online

Our wondrous Abhishek did a series of Debugging UMDF driver talks and I'm happy to say we now have the first three live on line!

In these he covers some of the basics: where to find and how to use WDFVerifier, how to use some of the UMDF Debugger extensions, and some basic debugging UMDF scenarios. […]

As always, any feedback or questions you guys may have, feel free to ping me!

[…] filtered for added UMDF flavor.

UPDATED: 9-March-2010 — Astute readers noted that I had the incorrect driver load order when talking about the UmdfServiceOrder directive. :) It IS left to right reading and the LEFT most driver is the lowest driver.

In my previous post about Filter Drivers, I mentioned that this time I would focus on a more UMDF centric stack. This one is pretty simple.

For UMDF only based driver stacks, the single most important directive is the UmdfServiceOrder INF directive. As I mentioned last post, that directive is a left to right list determining load order for the drivers contained in the device stack.
With the left most element in that list being the the lowest driver loaded.</P> <P>Here are two examples – First a UMDF two driver stack with an upper filter;</P><PRE class=csharpcode>[<mydriver>_Install.NT.Wdf] UmdfService=UMDFFunction,WUDFFuncDriver_Install UmdfService=UMDFFilter,UMDFFilter_Install UmdfServiceOrder=UMDFFilter, UMDFFunction</PRE> <P>Now to install a lower filter, simply flip the order in that directive;</P><PRE class=csharpcode>[<mydriver>_Install.NT.Wdf] UmdfService=UMDFFunction,WUDFFuncDriver_Install UmdfService=UMDFFilter,UMDFFilter_Install UmdfServiceOrder=UMDFFunction, UMDFFilter</PRE> <P>The last tidbits here are; for a device stack that only contains UMDF drivers, there is <EM><STRONG>no need</STRONG></EM> to add any of these settings in the INF;</P><PRE class=csharpcode><STRIKE>[></STRIKE> </PRE> <P>And also as before, the filter driver needs to be a good citizen on the stack, pass on requests it does not own to the next driver and don’t touch anything unless you must. The previous code samples I’ve posted all still apply, but the basics are; <A href="" target=_blankGetDefaultIoTarget</A>, <A href="" target=_blankFormatUsingCurrentType</A> and <A href="" target=_blankSend</A>.</P> <P>There’s no need to change any code for a basic filter driver based on its load order (upper or lower filter). As you get in to more advanced driver functionality you may find a need to change default behaviors. For those cases, you can always send us mail. :)</P> <P>There was also some questions about why would you want or need to be an upper or lower filter driver. That one gets in to a bit more of a “it’s up to you” but the core logic that applies to <A href="" target=_blankWDM / KMDF drivers</A> applies to UMDF drivers (save the parts about bus drivers *g*). 
</P><img src="" width="1" height="1">pat.man UMDF filtered drivers<P>Filter drivers have come up in conversations recently (both internal and external), so I wanted to take some time here to address some of the issues that were brought up with regards to UMDF filter drivers and how to make them. Note: I’m not going to cover all the configuration dynamics available for UMDF filter drivers in this one post simply to avoid the risk information overload and clutter, so expect a couple of posts on this subject.</P> <P>Let’s start with some basics, we’ll simply demonstrate how to configure and load a UMDF Filter as an upper filter driver to a kernel mode driver (KMDF or WDM) in the same INF-</P> <P>As some of you may remember from <A href="" target=_blankmy earlier posts</A>, a UMDF filter driver should use the <A class="" href="" target=_blankSetFilter();</A> property in the DeviceInitialization routine. This tells the Framework to do a couple of things for us automatically:</P> <OL> <LI>Send I/O for which the Filter has not registered a callback for to the next logical driver in the stack. Example: Your filter driver registers a DeviceIoctl callback method, but not read or write. As a result your filter driver will only see IOCTLs</LI> <LI>Automatically forward file create cleanup and close so there is no need to invoke the AutoForwardCreateCleanupClose method.</LI></OL> <P>That covers the really basic code required at driver / device initialization time. Now we need to add the pertinent sections to the INF file. As we’re going to focus on being an upper filter for this example, you need to make the following modifications to your INF (in addition to the normal UMDF specific sections);</P><PRE class=csharpcode>[<mydriver>.NT.Wdf] UmdfService = <SPAN class=str>"<mydriver>"</SPAN>, <mydriver>_Install UmdfServiceOrder = <mydriver> [> </PRE> <P>So a couple of things to note here -</P> <P>One is the UmdfServiceOrder directive, this is load order specific. 
I’ll address a more UMDF-centric stack in the next post, but for those of you who like to work ahead, this is a left-to-right reading list to determine stack order.</P> <P>Second thing is the registry key addition section. As WUDFRd is the kernel-mode transport service for UMDF drivers, WUDFRd is the upper-level filter driver to a kernel-mode driver.</P> <P>Now that your driver is loaded as an upper filter, what should your driver do in order to be a good citizen in the stack? Short answer is: as little as possible. You perform the same steps for handling PnP and Power as you would for any basic UMDF driver (one that lets the framework handle the heavy lifting for those two). You register your I/O callback routines and build your queues just like any normal UMDF driver; the only additional requirement here is to forward I/O requests as required.</P> <P>For this example, let’s just demonstrate using a fairly simple pass-through driver. How you actually implement this is up to you, but the basic requirements here are: as your driver receives the request on the queue, it needs to pass it on to the next driver in the stack. So first you need to get the default I/O target (next driver in the stack). Next, format the request using the same type and then, finally, send the request. A condensed version would be something along these lines:</P><PRE class=csharpcode> IWDFIoTarget * kmdfIoTarget = NULL; <SPAN class=kwrd>this</SPAN>->GetFxDevice()->GetDefaultIoTarget (&kmdfIoTarget); Request->FormatUsingCurrentType(); hr = Request->Send ( kmdfIoTarget, 0, <SPAN class=rem>// 0 Submits Asynchronous else use WDF_REQUEST_SEND_OPTION_SYNCHRONOUS</SPAN> 0);</PRE> <P>I also have some more basic code <A href="" target=_blank>here for you to peruse</A>.</P> <P>So now we have a basic UMDF upper filter driver on top of a kernel mode driver.</P> <P><FONT size=1>Now Playing <EM>Capitals v. Rangers</EM></FONT></P>pat.man
http://blogs.msdn.com/b/888_umdf_4_you/atom.aspx?Redirected=true
5, 2008 at 12:59 PM, Dan Thomas<geoobject@...> wrote: > > Crossposting question as it has elements both for SMW and SMF. This > seems like a core need, but I'm having difficulties. > > I'm importing into SMW the Dublin Core metadata vocabularies: [1] > Dublin Core Metadata Elemental Set (DCES) and [2] DCMI Type Vocabulary > (DCMI-Type). DCES has a property called Type for which I want to > enumerate from the set of controlled DCMI-Type Vocabulary values > (e.g., Data set, Text, Image, Service, Software, etc.). > > I've set up the page: MediaWiki:Smw_import_dc for DCES properties. Cool! Can you point me at it or e-mail it to me and I'll add it to sandbox-semanticmediawiki or semanticweb ? I think it's a generally useful common vocabulary for SMW, much like > Can somebody explain how: > > a) import the DCMI-Type enumerated set? Mention it in MediaWiki:Smw_import_dc or maybe MediaWiki:Smw_import_dcmi-type. I'm not sure what the best type to use is, probably start with string. import doesn't go out and query the remote RDF to find permitted values. So you have to manually put [[allows value::Collection]], [[Allows value::Dataset]], [[Allows value::Event]] on the properties using it (use a template). To annotate the association, you could make pages like [[Collection]], [[Dataset]], [[Event]], and on each add [[Equivalent URI::]] , etc. > b) allow system users to use Semantic Forms to select a DCMI-Type > value that will produce correct RDF output? The allows value should mean something to Semantic Forms. But I see that you might run into problems with the values turning into rdf:resource="&wiki;Collection. Do you have a URL for a sample Dublin Core RDF with DCMI types in it? While tinkering with this, I think I found a bug in SMW: if the imported type is a string, SMW still creates a link to a page for the value. See , that's just imported as mbox_sha1sum|Type:String yet it's a link. -- =S Page > [1] > [2] I'm working on a local history-genealogy project. 
Some of the people have disputed, yet unresolved, birth dates. (Such as Francis Hopkinson, a signer of the U.S. Declaration of Independence.) Some of these are supported by sources that are considered credible, such as a contemporaneous diary of a midwife vs. a contemporaneous family Bible entry; there are also numerous instances of repetition of the disputed birth dates in otherwise reliable secondary sources such as grave markers and newspapers. This is a pattern that applies to many other situations (such as date of moving from one location to another, attendance at schools, titles or appointments, etc.). These dates and other attributes are highly significant and are a likely basis for future queries. Any suggestions as to how to handle this? (If this request should be directed to another group or service, please feel free to forward it.) Thanks in advance for any help. Michael Skelly Bordentown Heritage
https://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=200906&viewday=20
Hi! I was testing typecasting of objects to different types. I discovered that if you try to cast an object to any type other than string, you get a PHP notice that it isn't allowed. I'm using PHP 5.3, and it would be nice if PHP would call my __toString function BEFORE trying to typecast. This way one can, e.g., have one's own string class and use its variables in the same way as you would use an ordinary string variable. Some code: <?php class CastTest { private $value; public function __construct($value) { $this->value = (string)$value; } public function __toString() { return (string)$this->value; } } $test1 = new CastTest('10'); $test2 = new CastTest(11); echo "Test 1 [String: $test1, Int: " . (int)$test1 . ", Float: " . (float)$test1 . "]\n"; echo "Test 2 [String: $test2, Int: " . (int)$test2 . ", Float: " . (float)$test2 . "]\n"; ?> Result (excluding the PHP notices): Test 1 [String: 10, Int: 1, Float: 1] Test 2 [String: 11, Int: 1, Float: 1] Nice-to-have result (no notices): Test 1 [String: 10, Int: 10, Float: 10] Test 2 [String: 11, Int: 11, Float: 11] Can this be fixed, or can a more generic magic function be added that provides this functionality? Regards, Thomas Vanhaniemi
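For contrast, the behaviour the poster is asking for does exist in Python, where numeric casts have their own hooks (`__int__`, `__float__`) alongside the string hook. A rough sketch of the same class translated to Python (the class name mirrors the PHP example; everything else is standard Python):

```python
class CastTest:
    """Python analogue of the PHP class above: explicit conversion
    hooks let int()/float() recover the numeric value."""
    def __init__(self, value):
        self.value = str(value)

    def __str__(self):
        return self.value

    def __int__(self):        # called by int(obj)
        return int(self.value)

    def __float__(self):      # called by float(obj)
        return float(self.value)

t1 = CastTest('10')
print(str(t1), int(t1), float(t1))  # 10 10 10.0
```

Here the cast never has to go through the string hook first; each target type asks the object for its own conversion.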
https://grokbase.com/t/php/php-internals/098vwwnhnz/typecasting-objects-with-tostring-function
Hi all, I am starting to work on my own pet language and I joined in to make it public here, as I like very much to follow the very informative discussions hosted at Lambda the Ultimate. I hope to contribute more in the future. Though I have to confess that proofs and lambdas are not my forte, I did not really look into these since my student days. So what I can do at the moment is to answer the questions you may have on dodo as it gets better defined and evolves over time. This is the URL of the dodo home page on Sourceforge. Can I ask my own questions? I would like to know if there is an easy way to make a parser for my language so that I don't get nasty surprises. I spotted a couple of ambiguities in the syntax already, hopefully that's fixed now. But automating the process would be nice. Try googling for "parser generator" - there are zillions of them. Which one you choose can depend a lot on what language you want to use to host the parser. One recent LtU thread which contains relevant links is Parsers that allow syntax extensions. The classic parser generator is YACC, and the GNU version of that is Bison. Two well known Java ones are Antlr and JavaCC. There's also a Toy Parser Generator for Python. This list is ludicrously incomplete. Thank you for the links! Looking into it. the Gold Parser. I've been using it a lot and it has got some nice graphical tools...plus engines for lots of programming languages. Unfortunately I won't be able to use Gold as it is available only on Windows. Never mind, I can look at the other options here. One of the parser frameworks that may be worth a look is the Spirit parser framework - The Spirit home page Spirit is a C++-based framework which is (imo) very easy to use. 
I say that and I'm really not a C++ coder at all - I'm just a copy-and-paster in C++. But, that's all you need to be in order to use Spirit. It's *really* fun to use. You can write your grammar (in what is *very close* to BNF syntax) directly in C++. Have a look at a few examples in the apps repository, and that should give you a good feel for things. When you say "I spotted a couple of ambiguities in the syntax already", I wonder if you are looking for a tool that will take a context-free grammar and tell you if it is ambiguous. Unfortunately, it is undecidable whether a context-free grammar is ambiguous, but it is often possible to prove that some particular grammar is unambiguous; some parser generators will do this for you as part of generating the parser, but others will not. For YACC and Bison, if you don't use any precedence declarations in your input grammar and the tool runs successfully without printing out any messages about "shift/reduce conflicts" or "reduce/reduce conflicts", then the input grammar was unambiguous; if they do print such messages, then the grammar may or may not be unambiguous. It is a matter of opinion whether support for ambiguous grammars in a parser generator (with no compile-time ambiguity test) is a bug or a feature; presumably, the authors of the Spirit framework think it's a feature, or they wouldn't put "capable of parsing rather ambiguous grammars" in the first paragraph of their website. My own view is that a programming language should have a formal grammar definition, and that an unambiguous LALR(1) context-free grammar is the form of formal grammar definition which will give implementors the most flexibility in correctly implementing the grammar. (LALR(1) is the name for the set of grammars that YACC and Bison can prove to be unambiguous.) 
Other possibilities for formal grammar definitions would include ambiguous grammars augmented with precedence information, intended to be fed into YACC or Bison (but be careful here; YACC precedence declarations don't always mean what you might expect), or Parsing Expression Grammars (which have disambiguation information built into the grammar, in a formally specified way; but again, be careful, the disambiguation isn't always what you might expect). Well I take it for granted that the syntax of the language should not be ambiguous. My reasoning is that if it is difficult for the computer to parse it, how will it be for the person who reads the code? I think I will simply try writing a BNF grammar for the language. That should help weed difficulties out. Should I use left recursion for LALR(1)? For example: arguments ::= arguments "," argument | argument It seems several of the tools (Spirit, Gold) aim at being BNF compatible so it is a good starting point. I'd use left or right recursion as appropriate for the subproblem at hand. For example, in parsing 1:2:3:4:5:[] I'd use right recursion. YACC can handle left as well as right recursion, although there may be a fixed limit with right recursion due to parse stack size. A common source of ambiguity is having both IF predicate THEN consequent and IF predicate THEN consequent ELSE alternative. There are two distinct concepts. IF..THEN.. makes some code optional. When the predicate evaluates to false the code is skipped. An IF..THEN.. will always be a statement, even if the consequent is an expression; what would its value be when the predicate is false? IF..THEN..ELSE.. chooses between alternatives. If the consequent and alternatives are both expressions, then the IF..THEN..ELSE.. could also be an expression. If you use distinct initial keywords, perhaps OPT and ALT or WHEN and IF, you will not need a precedence rule to disambiguate elses because they will no longer dangle. 
For example IF p1 THEN IF p2 THEN a1 ELSE c is either OPT p1 THEN ALT p2 THEN c2 ELSE a2 or it is ALT p1 THEN OPT p2 THEN c2 ELSE a1 There is then neither a conflict in the grammar, nor a paragraph in the language manual describing dangling else, nor a defensive coding convention to reduce errors. Thank you for your suggestion. I will go through the syntax to check again, but I don't think dodo suffers from that problem. In dodo "if" is used as a ternary operator if (x > 0) sqrt(x) else exp(x/2) The conditional branching uses "test", which is similar to "switch" in C test { x > 0 {return sqrt(x)} default {return exp(x/2)} } There can be other constructs that are not safe but I have none on the top of my head. I am using Aurochs as parser at the moment, you can find a preview of the grammar on the dodo web site. The reason that parser works for me is that you can see immediately the structure of your AST (it outputs a XML tree). The reason I do not recommend anybody using it is that when you output a XML tree, the only error you will see is "Parse Error (0)" when you do something wrong. Nice heh? Anyway check out the current grammar, I was able to parse one of the examples with it. What I described as "qualifiers" correspond to my understanding of the word "mixin". I realise so far I did not define them in great detail, but can you determine if they are the same yet? This is my description of dodo qualifiers, is that different from mixins? ------ A qualifier can be specified to add properties to the class that are the same for all classes that respond to that qualifier. For example the qualifier Motorised could add the attributes engine, wheels and speed to the class. A qualifier is defined like a class with an attribute parent that is an instance of the class it is applied to. 
For example, if I want to define an Inhabited qualifier for the house I could write: qualifier Inhabited { link People[] inhabitants Drawing draw(Canvas support): Drawing result parent.draw(support) loop foreach(inhabitant in inhabitants) { .result = inhabitant.draw(result) #Drawing can act as Canvas } return result . } This qualifier replaces the draw function in House with its own, which draws the inhabitants as well as the house. That supposes Inhabited is applied to a class that has a draw function, of course. I just realised, for the example I pasted earlier I am using a loop for something that should not really necessitate one. It goes like this: loop foreach(inhabitant in inhabitants) { .result = inhabitant.draw(result) } Which means, if there are three inhabitants: .result = inhabitants[3].draw(inhabitants[2].draw(inhabitants[1].draw(result))) Which is conceptually the same as: .result = draw(inhabitants[3], draw(inhabitants[2], draw(inhabitants[1], result))) How do other languages denote that? I can imagine a function that takes 3 arguments: the list, the function and the initial value. But I don't know how to call it so that it makes sense. ...the big three are Map, Filter, Fold. I gather the one you are looking for is fold (which can either be left-to-right or right-to-left). In Standard ML, fold left could be defined recursively as: fun foldl f u nil = u | foldl f u (x::xs) = foldl f (f(u,x)) xs A simple use is to sum the contents of a list: val mysum = foldl (fn (x,y) => x+y) 0 [1,2,3,4] Or to put it in your terms: f - function to apply u - initial value nil or x::xs - the list I guess I should also add that you can use currying to define functions in terms of fold. 
So for example, a sumlist function could be defined as: val sumlist = foldl (fn (x,y) => x+y) 0 val mysum = sumlist [1,2,3,4] OK so map and filter seem to be like SELECT and WHERE in SQL: SELECT f(p.*) FROM Person p, Toothpaste t WHERE p.job = "Salesman" AND t.taste = "strawberry" AND p.toothpaste_id = t.id We already talked about fold, except it is sometimes called reduce and the initial value can be part of the list. I will keep fold and a separate initial value. I also read about unfold, which looks like a C for loop; I think I can ignore it. What may be missing is some kind of join to combine two lists. Concatenation's actually just another fold if you've got a typical list structure: you use cons and then the second list as the initial value. You might want to look at the zipWith family though - map is effectively zipWith1. I am not looking at concatenation, I am looking at combining the two lists (like a vector product):

[ a, b, c ] * [ 1, 2 ] =
| a*1, b*1, c*1 |
| a*2, b*2, c*2 |

If I flatten the matrix it is effectively a concatenation of [ a, b, c ] * 1 and [ a, b, c ] * 2. So I guess your remark was relevant to my problem after all. The combination operator would probably be pair(a, b). For example I could generate all the possible pairs (salesperson, strawberry toothpaste). Then I could filter again, like the join filter in the SQL statement above: p.toothpaste_id = t.id In Haskell, you can easily write all this using list comprehensions. E.g. your example (assuming properly typed persons and toothpaste lists): [(p,t) | p<-persons, t<-toothpastes, job p==Salesman, taste t==Strawberry] Not sure if it helps, but here's an ML version to define the two: fun map f nil = nil | map f (x::xs) = (f x)::(map f xs) fun filter f nil = nil | filter f (x::xs) = if (f x) then x::(filter f xs) else (filter f xs) As Chris said, you're looking for fold. See the paper Tutorial on the universality and expressiveness of fold. 
Note that this paper is also mentioned in the Getting Started thread, which is full of references worth reading if you're designing a language. The fold version of your example would look something like this in Scheme: (fold (lambda (result inhabitant) (draw inhabitant result)) result inhabitants) or in Haskell: foldl (\result inhabitant -> draw inhabitant result) result inhabitants In both cases, the first argument to fold is a function which is applied in turn to each element of 'inhabitants'. The second argument is the starting value for 'result'; I'm assuming result is defined in the enclosing scope. I am not sure why you use a lambda expression here, is it just so that you can swap the arguments of draw? Is there no "foldr" or such that works with arguments in reverse order? Anyway, if I do not take into account the arguments order what you wrote is like: (fold draw result inhabitants) Which is close to my notation in dodo: fold inhabitants.draw(result) Thank you for the links to read. I hope to ingest some of this before I have an interpreter out! :-) There's a "foldr" in Haskell, but it doesn't do what you're asking. Foldr is a "right-associative fold" - it traverses the list from right to left, building up a value by applying the binary operation between each pair of elements. Anton's example could've been written (in Haskell) like this: foldl (flip draw) result inhabitants Flip is a Haskell built-in that swaps the order of two arguments, eliminating the need for a lambda. How are continuations used, and why should I care about them? Are they different from passing a message containing a callback function to a dynamic agent? Sorry for asking again questions that were asked before, the answer is not clear for me yet. In fact, I came with a style for continuation-passing style lambda functions that I quite like and I don't really see the use for standard lambda functions any more. Are there drawbacks in keeping only CPS lambdas in dodo? 
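For readers working outside Haskell and Scheme: the same left fold is `functools.reduce` in Python (my translation, not from the thread), and the argument swap that `flip` performs is just a lambda:

```python
from functools import reduce

def draw(inhabitant, canvas):
    # Toy stand-in for the thread's draw: record who was drawn
    return canvas + [inhabitant]

inhabitants = ["alice", "bob", "carol"]

# foldl (\result inhabitant -> draw inhabitant result) [] inhabitants
result = reduce(lambda acc, inhabitant: draw(inhabitant, acc), inhabitants, [])
print(result)  # ['alice', 'bob', 'carol']

# The sumlist example: foldl (+) 0 [1,2,3,4]
print(reduce(lambda x, y: x + y, [1, 2, 3, 4], 0))  # 10
```

The first argument plays the role of the folded function, the third is the seed, exactly as in the Scheme and Haskell versions above.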
I would make sure that the continuation invokation is tailing so I imagine it can be optimised easily. If desired, normal named functions are still there to use. According to Anton: "For the most part, first-class continuations should be seen as a tool-building feature, not something you'd regularly use directly in an ordinary program." Does the same apply to CPS? Would you recommend continuations to build a control flow such as a parser algorithm or web site navigation (applications that make also use of abstract state and possibly versionning)? As James McCartney pointed out in a followup to my comment, the same doesn't necessarily apply to CPS. As my reply to James indicated, I was really thinking more of call/cc when I wrote that. Whether to recommend using continuations (or CPS) depends on the language, the application, the requirements, the available alternatives. A number of web frameworks, for example, use call/cc under the hood so that the user's code doesn't have to be in CPS. As James McCartney pointed out in a followup to my comment, the same doesn't necessarily apply to CPS. As my reply to James indicated, I was really thinking more of call/cc when I wrote that. That seems to imply that the use of call/cc is not required to practice Continuation Passing Style. I was wondering about it, and I will need to make sure of it for my documentation work. Back to the textbook it seems... That seems to imply that the use of call/cc is not required to practice Continuation Passing Style I am wondering if prototyping would give me algebraic data types for free. Consider the dodo code: struct linkedlist linkedlist emptylist def linkedlist new struct: DodoValue head linkedlist tail emptylist . def emptylist new linkedlist: DodoValue head { throw UndefinedValue } #explicit error . def foo new linkedlist(head "foo") def bar new linkedlist(head "bar", tail foo) Of course I need some type matching to do anything useful with it. 
Does it make sense to put capabilities and I/O in the same bag? My reasoning is that if a function requires capabilities, then it is likely to perform I/O or to interface with the host system. The converse looks reasonable as well: all I/O requires capabilities. The compiler can consider any function that requires capabilities as impure and optimise it differently from pure functions. I hope that would not incur any serious loss of performance. I would like to use a special signature for associative functions, since the associative property can be leveraged for automatic parallelisation. Associative functions verify the equality: f(x, f(y, z)) = f(f(x, y), z) If I define the signature as (using haskelly syntax) assoc: `a `a -> `a Then I can use the property to allow passing any number of arguments N to an associative function, with N >= 2. E.g. max(12, 5, 0, 1001, -3) The compiler or VM can decide to group the arguments by pair as it sees fit; the result will be the same since the function is associative. So far so good. Now what happens if I pass a single argument to my associative function? Logically the compiler would error out, but looking at the associative functions I can think of, it may actually make sense to return the single argument itself. Examples: (+ x) = x + 0 = x (* x) = x * 1 = x max(x) = max(x, -infinite) = x min(x) = min(x, +infinite) = x Is there any theorem that associative functions have a neutral element such that f(x, neutral) = x? I am especially interested in the single argument question in relation to fold; if I am correct, that means that using fold with an associative function does not require a seed. 
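The regrouping idea can be checked directly in Python (my translation): because the operation is associative, any parenthesization of the pairwise applications agrees, and the neutral elements guessed above for max and min do check out:

```python
from functools import reduce

xs = [12, 5, 0, 1001, -3]

# Left-to-right pairing ...
assert reduce(max, xs) == 1001
# ... agrees with an arbitrary other grouping, because max is associative
assert max(max(12, 5), max(0, max(1001, -3))) == 1001

# The guessed neutral elements for max and min
assert max(7, float("-inf")) == 7
assert min(7, float("inf")) == 7
print("all groupings agree")
```

This freedom to regroup is exactly what a compiler or VM would exploit to parallelise the pairwise applications.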
I am specially interested in the single argument question in relation to fold, if I am correct that means that using fold with an associative function does not require a seed. Lists are the free monoids (the binary operation is ++ and the unit is [], Haskell syntax). foldr is the unique monoid homomorphism from the free monoid (lists) to any other monoid (this is what freeness means). (f : A -> B is a monoid homomorphism (A,+,0) -> (B,x,1) if f(a+b) = f(a)xf(b) and f(0) = 1 where +, x, 0, 1 are the binary operations and units respectively and are meaningless despite their name (though if f(x) = ex then that is a monoid homomorphism with the usual interpretations of the operations and constants)). foldr :: (a -> b -> b) -> b -> [a] -> b doesn't look like a monoid homomorphism. This is because we use the associativity of (++) to fix the lists into a right associative form, i.e. [1,2,3] = [1]++([2]++[3]) = 1:2:3:[]. foldr :: (a -> b -> b) -> b -> [a] -> b fold :: (m -> m -> m) -> m -> (a -> m) -> [a] -> m fold mappend mempty f = foldr (\x xs -> f x `mappend` xs) mempty foldr = flip . fold (.) id -- or using the old instance for functions for Monoid fold :: Monoid m => (a -> m) -> [a] -> m fold f = foldr (mappend . f) mempty foldr = flip . fold -- (edit: not quite so slick) fold can use whatever parenthesization it feels like though assuming mappend is associative and mempty is its unit. That settles it then. Thank you for monoids. In my case associative is all I want, so I will introduce a special fold that returns the seed if the list is empty instead. How would you name such a beast? If you drop the identity element you get what is called a semigroup. Non-empty lists should be the free semigroup and foldr without the nil argument should be their fold. Sure. But since I am only looking at the associativity property, and the only notable property of semigroups seems to be associativity, I don't see what I can leverage from knowing that it is a semigroup. 
Maybe you were trying to tell me something about the free semigroup? Crikes. I just read Fold - HaskellWiki which introduces foldr as foldr f z [] = z -- if the list is empty, the result is the initial value z foldr f z (x:xs) = f x (foldr f z xs) -- if not, apply f to the first element and the result of folding the rest Am I wrong in suggesting that it should be instead foldr f z (x:[]) = f z x -- if the list has only one element, apply f to the first -- element and the initial value z foldr f z (x:xs) = f x (foldr f z xs) -- if not, apply f to the first element and the result of folding the rest since the "initial element" introduces a singularity, not being necessarily part of the set of values returned by f? I don't like that foldr can return z even if f has no neutral element. In short, yes - it loses far too many useful properties. Consider foldr (:) [] as a trivial example that ought to work. In short, yes - it loses far too many useful properties. Consider foldr (:) [] as a trivial example that ought to work. Sure that is convenient. But what about that other simple example? def sign a b = if a < 0 then if b < 0 then +1 else -1 else if b < 0 then -1 else +1 foldr sign 0 [-3 5 -2] = +1 foldr sign 0 [-10] = -1 foldr sign 0 [] = 0 If the input list is empty, I get a value that is neither +1 nor -1! Sometimes that may be useful, but it does not look correct as the default. 
Unless your language supports currying (like haskell and ML) I think it makes more sense to have the function argument positioned last, as its often a lambda, and considerably larger than the other arguments. Something like: foldr [] someList (acc x => acc + x) someList is often a rather large expression itself often. Further if your language supports higher order functions then it supports currying. I would have no problem with foldr :: (a->b->b,b) -> [a] -> b in a language that doesn't favor currying. Often function definitions are of the form f = foldr c n and it would still be nice to compose them. foldr :: (a->b->b,b) -> [a] -> b Its my experience that the function argument often is the largest, but thats obviosly dependent on programing style. With support for currying i meant part synctax suport, and more importantly if the fold function is defined as a curried function. foldr :: (a->b->b,b) -> [a] -> b is a curryed function, so I think a language that includes that in its prelude is by definition a language that favors currying, at least in some regard. My language does not support currying. The syntax is strictly to my taste, which is a bit backwards at times. For example fold is a special construct, to use it just do a normal function call with fold in front of the list and seed in front of the initial element: fold seed fold someList + seed 0 or int.add(fold someList, seed 0) Lambda expressions are a pain to write though, due to static typing ala Java. In that case wouldnt be better even to have some more looplike syntax for folds? perhaps like: fold (list edit: hm that doesnt look right edit: hm that doesnt look right Haskell is statically typed, and lambda expressions are not a pain to write there. Do you mean lack of type inference so you have to add a type signature, perhaps? Yes. It is more like Java than Haskell. I do want type inference, but am a bit scared of it. 
(Wow, is every reply of mine in this thread going to start with a structure from abstract algebra?) Well the issue is that every monoid homomorphism is a foldr, but not every foldr is a monoid homomorphism. I goofed slightly earlier in my definition of foldr (I editted it earlier today). The way my foldr definition in terms of fold works is by being a monoid homomorphism to the endofunction monoid, we then just apply 'z' to the composed list of actions. If foldr :: (a -> b -> b) -> b -> [a] -> b then foldr f is actually an action of [a] on b. I.e. b is a [a]-Set. Every monoid induces an action on itself. The upshot is that b does not have to be a monoid, 'z' is not the identity of anything. There is a feature in dodo which I call a conversion function, denoted ->(SomeType) label in a class definition. That is a function that is called implicitely by a runtime mechanism when the variable is used at a point that requires SomeType. ->(SomeType) label An application of this is futures (message), in which the conversion function does the work of fetching the value when it is used. Eg message FutureInt { Cap channel ->(int) fetch = service!channel.FetchInt() } ... FutureInt x = someFunction(y) return x + 5 #use as int In some ways that is similar to subclass polymorphism, which says that I can use int at a point that requires Number (int is a subclass of Number). The difference is that it is destructive, ie. for all intents and purpose the type of x is int in the expression x + 5. Also conversion is bidirectional, two types can convert to each other if it takes the programmer fancy. The last point is interesting when looking at covariance and contravariance. I was reading "Is a cow an animal?" linked in that other thread, and (after letting the fact that class type parameters are abstract sink in) I realised that using conversion, there is a possible extension to the idea. 
Imagine that I have a special type of cow, WetCow, which is identical to a Cow except that it only eats wet grass. As per the document a cow eats Grass, that is definitive and cannot be changed. Enters conversion. It is easy enough to convert Grass to WetGrass, you just need to sprinkle some water on it. So if I add the conversion function to Grass: class Grass is PlantFood { ... ->(WetGrass) wet = self.sprinkledWithWater } class WetGrass is Grass Now WetCow can be a subclass of Cow and still type-safe: class WetCow is Cow<type FoodType = WetGrass> When a WetCow is fed grass, the conversion function magically enters in action to sprinkle it with water and the wet cow eats it. Do you see drawbacks to this approach? Is it really safe? A system like this could be type-safe, or it could fail to be type-safe. It could also make it very easy to accidentally introduce non-termination. So it's a fine direction, but I think you need to be a little bit careful. If you haven't already, I would take a close look at Scala's implicit conversions, which do something very similar to this. I added some code samples on the home page. HuffmanTree is converted from Q, warmer is converted from Python and PingPong was written from scratch. I found it quite straightforward to convert from other languages, which probably shows that dodo is already quite versatile in its current form. I do not have a compiler or interpreter yet, I even did syntax colouring by hand! So do not put too many hopes in using dodo for development.
http://lambda-the-ultimate.org/node/1824
The Development of Village-Based Sheep Production in West Africa: a success story involving women's groups. Training manual for extension workers.

Creator: Food and Agriculture Organization of the United Nations
Place of publication: Rome
Publisher: Food and Agriculture Organization of the United Nations
Publication date: 1988
Physical description: vi, 90 p. : ill. ; 21 cm.
Subjects / Keywords: Sheep -- Breeding -- Handbooks, manuals, etc. -- Africa, West (lcsh); Women -- Training of -- Handbooks, manuals, etc. -- Africa, West (lcsh); Women -- Employment -- Handbooks, manuals, etc. -- Africa, West (lcsh)
Notes: Includes bibliographical references. OCLC: 25527777

Training manual for extension workers

THE DEVELOPMENT OF VILLAGE-BASED SHEEP PRODUCTION IN WEST AFRICA
A success story involving women's groups

FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS
Rome, 1988
Preface

The peoples of West Africa have traditionally depended on small-scale, village-based systems of animal production as a source of income generation and money reserve. Additionally, animal protein plays a critical role in the nutrition of the peoples of West Africa. Lack of resources, coupled with the trypanosomiasis challenge so prevalent in the region, has dictated small-scale livestock production based largely on small ruminants and poultry.

The traditional production system is a nil-input, uncontrolled management system in which, in most cases, the animals scavenge on whatever by-product or refuse feeds are available in the villages. Herded or tethered grazing is largely in the control of the village women and/or children. Production efficiency is normally very low and characterized by very high levels of lamb and kid mortality, very poor growth rates and fluctuating reproductive rates.

Against this background, FAO/UNDP in conjunction with the Government of Togo initiated in 1980 a pilot village-based livestock development project in the Kara region of North Togo. The project has been very successful in tackling the major aspects that constrain village-based livestock production in North Togo. On the one hand, it has clearly demonstrated that the introduction of simple animal husbandry technologies can have a very marked effect on animal production. Ewe productivity, for example, has been increased from 7 kg to over 30 kg of lamb per ewe per year, and the project now embraces more than 15,000 ewes and 350 farmers. A key element in the success of the project has been the development/extension strategy followed, which not only emphasised simple technologies and easy-to-understand training methods but also focused on specific target groups. Women's groups have played a big part in the focus and success of the project.
The purpose of this publication is to summarize and highlight how the women's groups were formed, how they operated and the direct benefits which accrued to them, in the improvement of their livelihoods, as a consequence of their participation in this development programme. Clearly the manual must embrace technical aspects of livestock production as well as the range of issues pertaining to the role and development of women's groups. To facilitate easier reading, these separate themes are presented in the text in different colour-marked sections.

This manual has been produced jointly by the Human Resources, Institutions and Agrarian Reform Division (ESH) and the Animal Production and Health Division (AGA) in FAO. It is hoped that it will serve as an effective example of how women's groups can be focused and guided in the development of village-based livestock production.

R. Moreno Rojas, Director, ESH
H. A. Jasiorowski, Director, AGA

Contents

Introduction 1
1. Forming the group 3
2. Husbandry 9
3. Feeding 29
4. Veterinary care and breeding 39
5. How the group operates 45
6. Training and records 51
7. Productivity and economics 53
8. Problems and solutions 60
Appendix 1: Contracts 65
Appendix 2: Recording forms 81
Bibliography 89

Key Words

Sheep, goats, breeding, nutrition, management, humid tropics, West Africa, women's groups, formation, training, extension.

ABSTRACT

This publication on the role of women's groups in the development of village-based sheep production in West Africa summarises the development approach, methods used and achievements realized in an FAO/UNDP project in North Togo. On the one hand, it clearly identifies the manner in which women's groups were formed, operated and succeeded in developing worthwhile sheep production systems. The manual also highlights the extension strategies and simple animal husbandry techniques on which the project was successfully developed.
Introduction

In many traditional societies it is unacceptable for individual women to own sheep. Yet sheep can provide women with a substantial and much-needed income. The answer may be for a group of women to jointly own and look after a flock of sheep. Women's groups for sheep production are successfully operating in northern Togo, West Africa.(1)

This booklet is written for groups of women and extension workers concerned with the development of sheep production in tropical countries. It contains both technical and socio-economic information about the establishment and operation of women's groups for sheep production. Each section is written for two audiences. On coloured paper, the main points are presented in a very simple form for village women who have little reading ability. On white paper, more detail is given for extension workers and other interested persons.

(1) Organised through the Project of North Togo (PNT), financed jointly by the Togolese Government and the United Nations Development Programme (UNDP-BIT TOG/78/009 and UNDP-FAO TOG/81/001). PNT initiated improved sheep husbandry by individual men and groups of men in 1981. The first group of women started keeping sheep in 1983. At the time of the visit by the author in December 1986, there were a total of 274 sheep flocks in the project, of which seven belonged to groups of women. The principal people responsible for the success of PNT are Mr G. Van Vlaenderen, Mr Yodoufai Noukoum and Mr Luc Vandeweerd.

1. Forming the group

WOMEN NEED MONEY
To buy vegetables
To pay for medicines
To pay school expenses

IT IS HARD TO MAKE MONEY NOWADAYS BY
Brewing beer
Weaving

SHEEP MAKE MONEY

One woman cannot have sheep because
She has no money to buy sheep
Her husband disapproves

A group of women can have sheep because
The project will lend them money and tell them what to do
Their husbands are happy to see them earn money

STARTING A GROUP

To make a group, between 5 and 20 women get together.
They must all live near each other (see Fig. 1, p. 5). A field officer from the project talks to the women about keeping sheep.

WHY WOMEN NEED SHEEP PRODUCTION GROUPS

In traditional societies in northern Togo, women have certain obligations to their families. They are expected to provide the vegetables for the family diet, while the men are expected to provide the meat or main dish. If insufficient vegetables are grown by the family, the women must earn money to buy vegetables from the market. Women also need money to pay for medicines for the family and school equipment for the children.

Besides household duties of caring for children, carrying water, preparing grain and cooking, the women are expected to weed and harvest their husbands' fields and to do many of the tasks associated with animal production. Women sell yams and other surplus agricultural products, but this money must be given to their husbands. They earn money to fulfill their own monetary obligations by collecting wood, brewing millet beer and weaving cotton cloth. The income they get in this way is equivalent to a wage rate of less than US$ 0.03 per hour.(2)

One of the main reasons why sheep production groups for women were started in northern Togo was to enable the women to more easily earn money for themselves. It would be socially unacceptable for individual women to own sheep, but group ownership seems generally acceptable. The technology for semi-intensive sheep production in the area had already been tried and tested with many individual men and groups of men.

HOW GROUPS ARE FORMED

Some groups of women who start keeping sheep have previously been a cooperative group with another function, such as vegetable production, sewing or learning to read. New groups may be formed by women who live in neighbouring houses, and many of them are related by birth or marriage.

(2) Financial information is given in US dollars. West African Francs were converted using a rate of 300 CFA = US$ 1.00.

FIG. 1

The village chief and other important people come to the meetings. They decide where to build the shelter and night enclosure. The women visit other sheep production groups. The women sign contracts with the project. The field officer brings equipment and tells the women how to build the shelter and night enclosure (see Fig. 2, p. 7). Husbands and sons help the women to build the fences. The sheep arrive. The women take it in turns to look after the sheep (see Fig. 3, p. 9). The field officer continues to visit to check that everything is O.K.

The number of women in a group depends on the number of women who fall into the same kinship and geographical group. The larger the group, the less frequently the women have to work with the sheep. On the other hand, for a given flock size, the fewer women there are, the higher the income per woman. The recommended maximum number of members in a sheep production group is about twenty. Above this number the group is too big for a feeling of togetherness. The average size of the women's sheep production groups in northern Togo was ten members.

Women's groups for sheep production may be formed because the women have seen another sheep production unit and want to do likewise. Other groups start because sheep production is recommended to them by the staff of a development project or other extension worker. Whatever the reason why the women start sheep production, it is essential that they are well-motivated and prepared to work. They must have a reasonable knowledge of both the problems they are likely to face and the potential benefits; otherwise they are likely to become discouraged during the first two years when the financial rewards are small.
The establishment of a sheep production group takes several months and includes:

- Meeting to discuss with the women the possibility of forming a group to keep sheep;
- Meetings with influential people who will be involved (e.g. village chief, local officer of national extension service) to obtain approval;
- Meeting to give the women more information about the work involved in keeping sheep and the benefits they would get;
- Visits of the women to other groups;
- Formal meeting to create the group: signing of contracts;
- Distribution of equipment, building of shelter, distribution of food supplements;
- Visits of the women to production units to exchange experiences and see development;
- Seminars for individuals with responsibility within the group, e.g. president, secretary.

FIG. 2

REACTION OF SOCIETY TO GROUP

Women have little public influence in society in many tropical countries. If a women's group is to succeed it needs the support of men. For instance, if a group wants to establish a forage plot to grow good-quality food for their sheep, they need a suitable area of land. Land is allocated by men, so that unless there is a good relationship with these men, the women will not be able to have a forage plot. Each individual woman is subordinate to her husband or father, and needs his approval before starting a new venture.

This has not been a problem in northern Togo. Husbands have been enthusiastic about their wives looking after sheep because of the benefits this gives to the family. In many cases husbands and sons have helped the women construct the shelter and night enclosure for the sheep. If the flock is formed by each woman bringing one or more ewes, the women need money from their husbands to buy these animals.

The formation of a women's group will probably arouse suspicion in the village leaders and more wealthy farmers.
It is important that the approval of these influential individuals is obtained, and that the aims of the women and the benefits of communal sheep production are explained to them. A symbol of membership such as a T-shirt gives public recognition of the group and helps social acceptance.

As women's groups become established and successful, they increase in status and strength, and the women become a powerful, cohesive team. The group of women find it easier to liaise with outside organisations than they would as individuals. It may be socially unacceptable for a male extension worker to talk alone with one woman, but if he meets with a group of women there is little problem.

CONTRACTS

It is essential that the members of the groups know exactly what the project will provide and what their own obligations are. This information is all specified and written down in the contracts. Contracts for the initial equipment, annual services, selected rams and breeding ewes can be found on pages 65, 69, 73 and 77, respectively. Even though many of the members of the groups may not be able to read, they will be able to find someone who can read the contract for them. Having a contract helps to prevent fraud. For instance, a group might be tempted to kill the ram from the project for a feast, and then tell the project that the ram had died.

FIG. 3

2. Husbandry

IN THE OLD SYSTEM OF KEEPING SHEEP

Sheep wandered about in the dry season (see Fig. 4, p. 11), so
Dogs chased them
People stole them
They were run over by cars and lorries
They ate other people's vegetables

In the rainy season the sheep were tethered, so they ate little food, they got wet and cold. The sheep house was dark and dirty (see Fig. 5, p. 11).
This is prevented by clauses (Articles 9, 11 and 3) in the Ram contract, which state that "The Group must not sell, give away, lend or eat the selected rams without the prior approval of the Project", "If a selected ram dies, the Group must call in the animal production field assistant who will conduct a post-mortem to identify the cause of death", and "The Group must pay back to the Project the total sum of the present contract in the case of theft, loss or death of the rams due to their negligence".

BRIEF DESCRIPTION OF FARMING SYSTEM

The natural vegetation of the land around Kara is savanna with forests along the water courses. The average rainfall is about 1,100 mm per year. The rainy season is from April to October and the dry season from November to March. The average area of land per family is 15 to 30 ha, but the area cultivated is much less than this, only about 2 ha. The uncultivated land is available for grazing by sheep and other animals. Land is cultivated in rotation: a family may cultivate a field for one or several years before allowing it to become fallow. Then a new area is cleared and cultivated. Before the start of each rainy season, the village elders gather to allocate areas of uncultivated land for the coming season. The traditional crops are sorghum, millet, yams and beans. Cultivated fields are usually not fenced.

TRADITIONAL METHODS OF SHEEP AND GOAT HUSBANDRY

Before the project began, some individual families kept sheep and goats. For sheep, the average flock size (including lambs) was 10.

FIG. 4  FIG. 5

Therefore the animals were not healthy and did not produce many good lambs.

IN THE NEW SYSTEM OF KEEPING SHEEP

Every day the sheep graze and a person looks after them (see Fig. 6, p. 13).

The flocks were kept in small, dark, mud-walled houses at night. These houses had little ventilation and the door was too small to allow a person to enter, so that the interior was damp and never cleaned.
In the dry season the sheep and goats were allowed to graze freely over both the uncultivated land and the cultivated fields. In the rainy season the animals were tethered to prevent them damaging the growing crops. They were usually tethered under a thatched shelter close to the house, and fed with grass and other material cut from the savanna. The performance of the animals in the rainy season was very poor, as their food was of low quality and they were subject to numerous diseases. Typically the weight of a ewe fell from about 22 kg at the beginning of the rainy season to only 17 kg at the end. Lamb mortality was more than 50%.

IMPROVED HUSBANDRY

In the semi-intensive system of sheep and goat production advocated by the project, the traditional methods of husbandry are greatly improved:

- The animals graze with supervision during the day. At night they are put in an enclosure and have free access to a well-ventilated shelter;
- The floor of the night enclosure and shelter is swept clean every day;
- Essential minerals are provided every day in the form of a salt block;
- Food supplements are given in the critical period at the end of the wet season;
- Clean drinking water is provided every morning and evening;
- The whole flock is routinely vaccinated, sprayed against external parasites and drenched against internal parasites;
- Ram lambs not required for breeding are castrated. In some flocks mating is confined to certain seasons of the year, and rams selected for high growth rate are used;
- In some groups forage is grown for supplementary feeding to the flock.

Sheep are preferred to goats for this improved system because they are much easier to shepherd. Also, economic analyses show that the output of semi-intensive goat herds in West Africa is more variable than the output of semi-intensive sheep flocks in the same area, so that sheep are preferable because they are less risky. The breed of sheep is the West African Dwarf, otherwise known as Fouta Djalon.

FIG. 6

Their house is clean and airy
They eat minerals
They eat concentrate food when there is not much grass
They drink clean water
The extension officer provides medicines to make sure the sheep are healthy
Breeding is controlled
Some groups grow food for their sheep (see Fig. 7, p. 15)

Therefore the sheep are healthy and produce many good lambs.

HOW TO LOOK AFTER THE SHEEP EACH DAY

In the morning, look at all the sheep to see if any lambs have been born during the night, and to check that no sheep are sick (see Fig. 8, p. 15).

DAILY MANAGEMENT

In the morning soon after sunrise, the women look at the sheep in the night enclosure. Births are noted, and any sick animals are isolated for inspection by the extension officer. The flock is taken out for grazing, and meanwhile the enclosure is swept and the dung put into the manure pit. The food and water troughs are cleaned, and the mineral block is replaced if necessary. Any sheep in isolation are given clean water and food. Before leaving the enclosure, the gate is closed to prevent animals entering it.

The sheep are allowed to graze on uncultivated land and on fields after crops have been harvested. Care should be taken to make sure that the sheep do not mingle with other flocks, that they do not go on main roads and that they are not stolen or attacked by dogs or other animals. When a new flock is established the animals are difficult to shepherd because they have come from different sources and do not flock together. But after a few days they recognise each other as members of the same flock and stay together. At midday the sheep are brought back into the night enclosure for about two hours.
After this, they again go out to graze until late afternoon. It is important that the sheep graze for at least 8 hours every day. Before the sheep come back into the enclosure for the night, supplementary food is put into the food troughs and the water troughs are filled.

YEARLY MANAGEMENT

In many parts of the tropics and sub-tropics, there is a dry season and a rainy season. This may result in the ewes conceiving at only one time of year, so that the reproductive cycles of most of the ewes are synchronised. On the other hand, the ewes may be able to conceive and lamb in any month of the year, although some months are more favourable than others for growth, survival and conception.

In northern Togo, lambs are born in all seasons of the year. The least favourable time is the later part of the rainy season and the beginning of the dry season, i.e. from August to November. The reasons for this are several. The arable fields which produce succulent regrowth in the dry season are not available for grazing during the rainy season, so that the flock must graze only on the uncultivated savanna. The nutritive value of grass in the savanna falls as it matures and is very low by the middle of the rainy season. It is not possible at this time of year to burn the grass to stimulate regrowth. In the rainy season the high humidity helps diseases to spread easily, so this is the worst time of year for the health of the animals. It is possible to overcome many of the problems which occur between August and November by giving the sheep supplementary food during this period.

There are two possible methods by which the breeding of the sheep may be arranged: year-round mating and seasonal mating. These are discussed more fully in the section entitled Veterinary care and breeding (page 39). If seasonal mating is practised, the rams are allowed access to the ewes only for certain periods, and this means that the lambs are all born within a period of a few weeks.
Seasonal mating can make flock management much easier because the lambs can be treated in batches for operations such as vaccination and castration.

CONSTRUCTION

The night enclosure is located on well-drained, firm ground. Stony ground is suitable provided that the wooden posts for the fence and the shelter can be put into the ground. For reasons of security as well as convenience, the enclosure is near the houses of the women. The enclosure for a flock of 75 ewes should be 18 metres square. It needs a gate which is about 2 metres wide, and a small pen with an area of between 16 and 25 m² (i.e. about 4 or 5 metres square) in which the animals can easily be handled and given treatments such as spraying. The shelter is built in the middle of the enclosure. It must be big enough to accommodate the whole flock when sheltering from the rain. For 75 ewes, the shelter should be 9 m long and 5 m wide.

FIG. 9  FIG. 10

If there are any sick animals, put them in an enclosure on their own and tell the field officer (Fig. 9). One person takes the flock out to graze (Fig. 10).

The shelter has a stone or mud wall on three sides. This wall is about 0.5 m high and protects the animals from rain and wind. Above the wall there is an open space to ensure good ventilation. The roof is made of grass thatch, which is cheap and waterproof provided that it is repaired when necessary. This type of structure was satisfactory in the sub-humid climate of northern Togo. In wetter climates it may be necessary for the floor of the shelter to be raised and slatted.

Fencing is used for the night enclosure and for the forage plot. The women can make a fence out of local materials, but for speed, efficiency and longevity the Project of North Togo encourages the use of a wire fence. Either barbed wire or wire netting is suitable for the forage plot. For the night enclosure only wire netting is used, because the sheep might hurt themselves on barbed wire.
Both barbed wire and wire netting are imported from Europe. The price of the double-strand barbed wire is US$ 0.05 per metre, giving a fence cost of about US$ 0.50 per metre, and the wire netting is US$ 26.00 per 50 m roll. In Togo, locally-grown teak fence posts are used. These are each 1.2 m long. The posts are set in the ground at 2 m intervals. The women themselves or their families cut the posts and dig the holes.

All flocks need food troughs and water troughs, and if there is a forage plot a forage rack is also required. Both the food troughs and water troughs are under the shelter. It is not necessary for all the sheep to drink at one time, but it is important that the food troughs are long enough so that all the sheep can get their heads in. If not, the biggest or greediest animals get more than their fair share of food, while the smaller, thinner or more timid animals do not get enough. Ewes need about 30 cm of trough space each. Because they stand at both sides of the trough, this means that the allowance is 15 cm of trough per ewe. Rams with horns require more space. Calculations should also include lambs which are old enough to eat solid food.

Groups are encouraged to construct a manure pit into which the women put the dung from the night enclosure. This pit has an area of about 2 m x 2 m, and is about 1 m deep. It is covered by a thatched roof to keep out rain, which would remove minerals from the manure by leaching. Manure is valuable as a fertiliser. Depending on what the sheep have been eating, their manure contains about 2% nitrogen, 0.4% phosphorus and 1.7% potassium, as well as trace elements needed by plants.

FIG. 11  FIG. 12

Another person cleans the water troughs and the food troughs, and turns them on their sides (Fig. 11). The dung on the floor of the night enclosure is swept up and put into the manure pit. Later it will be used as fertiliser (Fig. 12).

The women can put the manure either on their forage plot, on their communal vegetable plot or on their husbands' crops. In all cases, the best time to spread manure is early in the rainy season.

FEEDING

Inadequate nutrition is one of the most serious factors limiting the productivity of sheep in tropical and sub-tropical areas. This means that there is a large increase in production if the feeding is improved. In the Project of North Togo four aspects of feeding are given attention:

1. The animals graze for at least 8 hours every day. This is necessary for the animals to have a sufficiently high food intake, and costs nothing except a little extra work. The women encourage the sheep to eat the better areas of savanna or succulent aftermaths of crops.

2. The sheep are given mineral supplementation in the form of a salt block. If the mineral intake of sheep is insufficient, their metabolism is reduced and their production is lower. Many areas of the tropics are deficient in one or more minerals, and the easiest method of rectifying this is to provide a block containing all the essential minerals. The sheep are allowed to lick this block every day. Because it contains a large proportion of salt, the sheep consume only a small amount. To keep the mineral block clean and to prevent waste, a string with a piece of wood tied to one end is put through the block and hung from the roof, so that the block is suspended about 50 cm above the ground.

3. Energy and protein supplements are often called "concentrates". Concentrates are expensive and therefore are given to the flock only at times of special need. The cheapest concentrates are those produced locally as crop by-products. Cottonseed is fed from August to November at a rate of about 300 g/d per ewe in northern Togo. In other areas different supplements may be available depending on the local agriculture. Possibilities include rice bran, palm kernel, brewer's grains, soybean curd, pineapple waste, etc.
Concentrates which could be consumed by humans or non-ruminant animals (pigs and poultry) are almost never cheap enough to be fed to sheep. The nutritive value of some useful concentrates is given in Table 3.1. Concentrates are fed in troughs under the shelter when the sheep return in the evening.

FIG. 13  FIG. 14

The gate of the night enclosure is closed to keep out animals (Fig. 13). The sheep graze until midday, then spend two hours in the night enclosure. Then they graze again until the evening (Fig. 14).

4. Cultivated forages usually have a nutritive value much higher than the plants in the uncultivated savanna. In particular, legumes have a high protein content. The Project of North Togo advocates the integrated cultivation of leucaena (Leucaena leucocephala), pigeon peas (Cajanus cajan) and maize (Zea mays). Cultivated forages are fed to the sheep in forage racks and otherwise used in a similar way to concentrates. They are especially useful in areas where concentrates are not available or are too expensive.

TABLE 3.1. Nutritive value of some potential supplements

Supplement                      ME (MJ/kg DM)   CP (%)   Digestibility of DM (%)
Brewers' grains                       11          23              65
Cassava leaves                        10          23              60
Cottonseed (undecorticated)           13          20              57
Groundnut cake (decorticated)         15          50              70
Groundnut haulm                        8          15               -
Molasses                              12           5              78
Palm kernel meal                      18           -               -
Pigeon pea (pre-bloom)                 9          16              55
Pineapple waste                        9           3              70
Poultry manure                        10          25               -
Rice bran                             11          12               -
Soyabean meal                         12          50              79
Maize*                                14          10              87

* Given for comparison.

FORAGE PLOT

Groups which have successfully established a sheep production unit are encouraged to establish a forage plot. In this plot they grow high-quality food which is given to the sheep in the evening, instead of concentrates.

FIG. 15  FIG. 16

In the evening before the sheep return, fill up the water troughs and put the concentrate in the food troughs (Fig. 15). At dusk shut the sheep in the night enclosure (Fig. 16).
The plot is located near the night enclosure on an area of land with a well-drained, fertile soil (not too sandy or stony). The recommended size of the plot is a rectangle 12 m x 13 m. A plot this size has a perimeter of 50 m, which is a convenient length for the fence, as wire netting comes in 50 m rolls. The area of this plot is 156 m². As the sheep production proceeds and flock size increases, the forage plot can be enlarged by adding plots equal in area to the original plot.

In the first year a plot is sown with a mixture of leucaena, pigeon peas and maize. The pigeon pea plants grow rapidly and provide forage in the first year. Also, once the seeds are ripe they can be used for human food. The maize cobs are eaten by humans and the leaves by sheep. Leucaena makes an excellent fodder at the end of the rainy season. It has a high productivity, a good nutritive value, and it continues to grow as a bush. In the second year of production, the forage plot contains leucaena and pigeon peas. In the third and subsequent years, the plot contains only leucaena.

One of the first tasks in establishing a forage plot is to spread manure. For the 156 m² plot, it is recommended that 75 kg of farmyard manure is spread and dug into the soil as the rains begin. If farmyard manure is not available it can be replaced by 3.5 kg of 15:15:15 artificial fertiliser. Maize should be sown in rows 1 m apart, and each seed within the row should be 40 cm apart and 3 cm deep. The leucaena is sown in the same rows as the maize: two leucaena seeds are sown together half-way between each maize seed. It is important to break the dormancy of the leucaena seeds before they are sown by pouring boiling water on the seeds and leaving them in the water for a whole day. Rows of pigeon peas are sown between the rows of maize and leucaena. Two peas are sown every 40 cm at a depth of 3 cm. The forage plot must be fenced to prevent the young plants being eaten by sheep and other animals.
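The plot dimensions above fit together neatly: a 12 m x 13 m rectangle gives exactly the stated 156 m² of area and a 50 m perimeter, i.e. one roll of wire netting. A small sketch (illustrative only; the per-square-metre manure figure is derived, not stated in the manual):

```python
WIDTH_M, LENGTH_M = 12, 13   # recommended forage plot rectangle

area_m2 = WIDTH_M * LENGTH_M            # 156 m2, as stated
perimeter_m = 2 * (WIDTH_M + LENGTH_M)  # 50 m: exactly one 50 m roll of netting
manure_kg_per_m2 = 75 / area_m2         # ~0.48 kg/m2 of farmyard manure
```

The same arithmetic scales directly when the plot is enlarged by adding plots equal in area to the original.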
A temporary fence is constructed around the plot, and this fence is moved each year to the new plot. After one year the maize has been harvested and the pigeon peas and leucaena are sufficiently tall for the growing points to be out of reach of the sheep.

..

FIG. 17 / FIG. 18. Key: 1. Fence post 2. Straining post 3. Stays 4. Shelter 5. Small pen 6. Gate

Look at the sheep to make sure they are all present, that they are all eating the concentrate, and that none are sick (Fig. 17).

CONSTRUCTION

Build the night enclosure in a well-drained place near your houses (Fig. 18). Next to the gate in the enclosure, build a small pen about 4 or 5 metres square.

..

The forage plot must be weeded 15 days after it is planted. Once the weeding is done, the leucaena and pigeon peas are thinned to leave only one plant in each place.

The maize cobs are harvested a little before they are mature, while the grains are still pasty. At this stage the cobs are ideal for grilling, and while the leaves of the maize plants are still green the whole plant (both leaves and stem) makes an excellent forage for animals.

When the pigeon pea plants reach a height of 1 m they are cut back to a height of 30 cm, and the branches fed to the sheep. Then after they have grown again to 1.1 m they are cut again, this time to 50 cm. The plants are allowed to grow again and to flower and produce seeds. When the seeds are mature they are harvested for human consumption, and the plants are cut to a height of 70 cm.

Provided that the leucaena has reached a height of 1.5 m, it can be cut a little in the first year. In the following years the leucaena is cut in rotation every two months in the rainy season, but not in the dry season. It should be cut to a height of 1.5 m so that the new succulent branches are out of range of the sheep.

VETERINARY CARE

Veterinary care for the sheep is organised by the project and is included in the contract for services.
The routine measures include drenching against internal parasites, spraying against external parasites, vaccination against pasteurellosis and PPR, castration of male lambs and trimming overgrown hooves. The groups pay at a subsidised rate for the medicines needed for these routine measures. Unforeseen curative treatments, such as the treatment of wounds or sporadic diseases, are also provided.

Internal parasites include roundworms, tapeworms, lungworms, liverfluke and coccidia. If not treated, these parasites cause poor performance and mortality, particularly in young animals. All these internal parasites except coccidia are killed by treating the sheep with anthelmintic. Lambs aged 8 months or less are treated four times a year, in the rainy season and the beginning of the dry season

..

FIG. 19. Build the shelter in the middle of the enclosure. It should be about 9 m long and 5 m wide.

The fence around the night enclosure is made of wire netting. The fence around the forage plot is either wire netting or barbed wire. Fences need posts at 2 m intervals. Each post is 2 m long.

Your sheep need water troughs, food troughs and, if you cut forage for them, they also need a forage rack.

Dig a manure pit not far from the gate to the night enclosure. The pit is about 2 m square and 1 m deep. Build a thatch roof over the pit to keep it dry. Spread the manure on your crops at the beginning of the rainy season (Fig. 20, p. 29).

..

(in May, July, September and November). Older sheep aged more than 8 months are treated only once, at the beginning of the dry season (in November). The anthelmintic used in northern Togo is either albendazole (sold as "Valbazen") or fenbendazole (sold as "Panacur"). Both these anthelmintics are applied orally as a drench and control gastrointestinal roundworms, tapeworms and lungworms. In addition albendazole controls liver fluke. There are many other anthelmintics commercially available which control internal parasites.
Some are applied by drenching, some in a capsule which must be swallowed, some by injection, and some by pouring a liquid onto the animal's back.

Coccidiosis causes deaths in young lambs but seldom affects older sheep. Lambs aged up to 8 months are treated in May, July, September and November with sulphamezathine or other drugs which kill coccidia.

The most serious external parasites are ticks, mites and certain flies. Adult ticks feed on the blood of sheep and transmit diseases as well as causing trauma with their bites. Mites are very small parasites which feed on the skin surface and in some cases burrow beneath the skin. Mites cause a disease called mange, which is usually more serious in goats than sheep, although head mange is a common problem in sheep. Some flies lay their eggs in sheep, particularly in wounds. These eggs hatch into maggots which eat the flesh, causing considerable pain and irritation.

The seriousness of external parasites depends on whether or not the flock is isolated from other sheep, goats and cattle. If the flock never mixes with other animals, the sheep are sprayed with insecticide once each month in the rainy season (April to November), but spraying is not necessary in the dry season. If the flock is not isolated, the sheep are sprayed with insecticide twice each month in the rainy season (April to October) and once each month in the dry season (November to March). If a sheep already has mange, spraying is not sufficient, and the animal must be scrubbed with insecticide.

A knapsack sprayer is convenient for spraying a small flock. The sheep are enclosed in a small pen. The operator stands in the middle of the sheep and turns slowly round, spraying all the sheep from above,

..

FIG. 20

3. Feeding

For sheep to be healthy and produce good lambs, they must be well-fed. Sheep must graze for at least 8 hours each day (Fig. 21, p. 31). There must always be a mineral block in the night enclosure for the sheep to lick (Fig. 22, p. 31).
Sheep need concentrates when there is not much grass. Some groups grow forage for their sheep.

FORAGE PLOT

The forage plot is near the night enclosure, and has good soil. There is a fence round the forage plot to keep out animals (Fig. 23, p. 33).

..

and making sure that their heads are wetted as well as their bodies. Then the operator lowers his spray nozzle and wets the underneath of each animal.

The sheep are routinely vaccinated against PPR, otherwise known as peste des petits ruminants or kata. This is a disease similar to rinderpest but is found only in sheep and goats. All animals aged one month or more are vaccinated every six months (in June and December). New flocks are vaccinated on arrival. Later additions to the flock are vaccinated before they are allowed to mix with the other sheep.

Many diseases can be prevented by good management. Cleanliness of the night enclosure is important. Every morning the ground is swept and the food and water troughs are cleaned. When the sheep are grazing they are not allowed to mix with other animals. Neither other animals nor people who work with other flocks are allowed into the night enclosure. All sick sheep are isolated, conveniently in an old sheep house.

Any new animals joining the flock must be quarantined for one month. The new sheep are shut in an enclosure or animal house not close to the night enclosure, and are given food and water by a person who has no contact with the main flock. The new animals are deticked, drenched and vaccinated. If after one month the new animals are healthy they are allowed to join the flock.

If the hoofs of sheep become overgrown, foot-rot may develop, particularly in the rainy season. Sheep which are lame with foot-rot are unwilling to walk far while grazing, so they lose body condition and are unproductive. To prevent these problems, overgrown hoofs should be trimmed before the start of the rainy season using either foot shears or a sharp knife.
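The spraying schedule described above (once a month in the rainy season for an isolated flock; twice a month in the rainy season and once a month in the dry season for a flock that mixes with other animals) can be sketched as a small lookup. This is an illustration, not part of the manual; the month ranges follow the text as written, including its slightly different rainy-season bounds for the two cases.

```python
# Sprayings per calendar month under the two regimes described in the
# text. Months are numbered 1-12; "isolated" means the flock never
# mixes with other sheep, goats or cattle.

def sprays_per_month(month, isolated):
    """Number of insecticide sprayings in the given month."""
    if isolated:
        # Rainy season April-November: once a month; none in the dry season.
        return 1 if 4 <= month <= 11 else 0
    # Not isolated: twice a month April-October, once a month November-March.
    return 2 if 4 <= month <= 10 else 1

# Annual totals under each regime:
print(sum(sprays_per_month(m, True) for m in range(1, 13)))   # 8
print(sum(sprays_per_month(m, False) for m in range(1, 13)))  # 19
```

The comparison makes the management point concrete: a non-isolated flock needs more than twice as many sprayings per year.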
By the veterinary measures discussed above the productivity of the flock is greatly increased. For instance, lamb mortality is about 50% in traditionally managed flocks with no veterinary care, but only about 12% in the flocks within the Project of North Togo.

BREEDING

The number of rams needed by a flock depends on the size of the flock. The ewe:ram ratio should not exceed 30:1. This means that for flocks

..

FIG. 21 / FIG. 22

..

of up to 30 ewes only one ram is needed. If there are 31-60 ewes there should be two rams, and if there are 61-90 ewes there should be three rams.

The duration of the oestrous cycle in the ewe is about 17 days, which means that non-pregnant cycling ewes come into heat every 17 days. The duration of pregnancy is about 150 days. After a ewe has lambed she does not return to heat for about one month, so that the shortest possible interval between lambings is 6 months. In practice the lambing interval in the sub-humid tropics is usually between 7 and 10 months.

In traditional sheep production systems there is often unintentional selection for low growth rate, because the fastest growing male lambs are the first to be killed or sold. The rams which remain in the flock and sire the future generations are small and have poor growth rates.

Selection for improved growth rate is not difficult. The staff of the Project of North Togo buy the best male lambs from the flocks in the project. In order to identify the fastest growth rates it is necessary for each lamb to be identified (usually with an ear tag) and for its date of birth and weight at different ages to be recorded. The groups are encouraged to sell their best three-month-old lambs to the project by a favourable selling price and by including this sale as an obligatory part of the ram contract (page 73). These selected male lambs are kept by the project until they are distributed to the flocks at an age of almost 18 months.
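The ram-to-ewe rule above reduces to simple arithmetic, sketched here for illustration (the function name is ours, not the manual's): the number of rams is the flock size divided by 30, rounded up.

```python
import math

# One ram per 30 ewes: flocks of up to 30 ewes need 1 ram,
# 31-60 ewes need 2 rams, 61-90 ewes need 3 rams (as in the text).

def rams_needed(n_ewes, ewes_per_ram=30):
    """Smallest number of rams keeping the ewe:ram ratio at or below 30:1."""
    return max(1, math.ceil(n_ewes / ewes_per_ram))

for ewes in (30, 31, 60, 61, 90):
    print(ewes, "ewes ->", rams_needed(ewes), "ram(s)")
```

For the project's suggested maximum flock of 75 ewes, this gives three rams.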
The ram contract states that for a flock to receive selected rams, all other males in the flock shall be castrated. The rams are always distributed to flocks distant from their flock of origin so that inbreeding is avoided.

Male lambs which will not be required for breeding are castrated. In the Project of North Togo the lambs are castrated at an age of 3 months using a bloodless castrator (known as a "Burdizzo"). This method requires two people: one to hold the lamb and one to operate the castrator. The operator feels the spermatic cords in the scrotum. He places the castrator to nip one cord and holds it for 10 to 15 seconds. Then he does the same with the other cord. To make absolutely sure the lamb will not be fertile, each cord is nipped again.

An alternative method of castration is to use rubber rings. This

..

FIG. 23. 1. Night enclosure 2. Fence 3. Forage plot 4. Future forage plots

..

method must be used within two days of birth. The operator uses a special applicator to put a ring over the testes, making sure that both testicles are in the scrotum. The rubber ring cuts off the blood supply to the scrotum, which eventually falls off. Unfortunately this method can cause skin wounds which easily become infected with tetanus or colonised by maggots. Another disadvantage is that because rubber rings must be used soon after birth, ram lambs cannot be selected for breeding on the basis of their own growth rate.

Where the groups practise uncontrolled mating they buy rams from the project and pay back over two years. In order to prevent father-daughter matings in the flocks, the groups are allowed to keep the selected rams for only two years. After that period these rams are sold or killed and the group buys new rams from the project.

In flocks with seasonal breeding the duration of the mating period is six weeks. The appropriate number of rams is put with the flock at the beginning of the mating period.
After three weeks these rams are withdrawn and replaced by the same number of different rams. This change is particularly important if there is only one ram with the ewes, because if this ram is infertile none of the ewes would become pregnant. In these seasonally breeding flocks the rams are borrowed, not bought, from the project. In return for the loan of selected rams, the groups agree to sell their best lambs to the project.

OFFICERS

Each group elects officers who carry out duties on behalf of the group. These officers are a president, a secretary and possibly a treasurer. The group can hold a semi-formal ballot to decide the officers. Alternatively, the subject can be discussed until a consensus is reached. Discussion seems a more natural method than a ballot for most groups.

The president is responsible for the general organisation of the group, and she represents the group in their dealings with the project and others. The woman chosen as president will probably be a respected maternal figure. It does not matter if she is illiterate.

..

FIG. 24. In the first year, leucaena, pigeon peas and maize are sown in the forage plot.

The maize cobs are harvested early, while the grains are still pasty, so that the leaves and stems are good food for sheep.

The pigeon pea plants are cut back twice before they produce seeds, and the branches are fed to the sheep. Pigeon pea plants grow for two years, so the branches are cut for two years.

The leucaena is cut only a little in the first year. Then it is cut at a height of 1.5 m for many years.

Each year a new forage plot is made. Only one fence is needed. This is put around the new forage plot. The older plots do not need a fence because the pigeon peas and leucaena are too tall for sheep and goats to reach. (If there were a lot of cattle, buffaloes or large game animals, a fence would be needed.) (Fig. 25, p. 37)

..

The secretary must be able to read and write the national language.
If none of the women in the group are literate another person must be brought into the group to act as secretary. There were two groups in northern Togo where the secretary was not a full member of the group: one secretary was a teenage son of one of the members and the other was a young woman from a neighbouring village. The treasurer keeps a written record of all the monetary transactions of the group, and looks after any money belonging to the group. Some groups operate without a treasurer because the secretary looks after the finances. WORK ROTA The women have a rota for the routine work with the flock. This rota is usually not written down, but has been discussed by the whole group so they all know what their duties are. A typical rota would be for a different two women to work each day. In the morning they both go to the night enclosure and look at the sheep. One woman takes out the flock for grazing. The other empties and cleans the water and food troughs, sweeps the ground, cuts browse and does any other necessary tasks such as replacing the mineral block. She then joins her companion shepherding the flock. Both women stay with the sheep until midday when they bring them back to the night enclosure. After a period with their families, the women take the flock out again until late afternoon. Women often prefer to work in pairs rather than alone. If the total number of women in the group is an odd number, the rota includes one trio as well as the pairs. They say working in pairs prevents them being lonely, and it is necessary where it is not socially acceptable for women to go out alone. The women can take a baby or young child with them while they are shepherding the flock. The other household duties of the village women include fetching water and cooking. On the days when the women shepherd the flock they can do only a limited amount of this work, and some of their duties must be done by other female members of the family. 
In some groups, two women work with the sheep for a whole week, then another two take over. In other groups, the whole group meets every morning to clean the night enclosure.

..

FIG. 25. THIRD YEAR. 1. Night enclosure 2. First plot (only leucaena) 3. Second plot (leucaena and pigeon peas) 4. New plot (leucaena, pigeon peas and maize) 5. Fence

..

In one group the shepherding was always done by four children. These were between eight and twelve years old, and each was the son of one of the women. There is a potential conflict between education and shepherding. In the village where the children looked after the sheep there was said to be no conflict, because even if there was no project the children would not have gone to school: before the sheep production group started they did not go to school.

OWNERSHIP OF ANIMALS

There are two possible ways in which the animals are owned. Either the animals belong to the group as a whole, or each individual woman owns individual sheep within the flock. If the sheep initially come from the project or some other external source, they will be owned jointly by all the members of the group. In this case the women jointly make decisions about management and selling. If, when the group is formed, the women each contribute sheep, it is likely that they will retain individual ownership of these sheep and their offspring. In this case, the women jointly make decisions about management, but selling animals or removal of animals from the flock for other purposes may be an individual decision. Individual ownership is beneficial in that it stimulates the interest of the women in the flock.

Problems can arise from individual ownership of sheep within a group. The group collectively pays the project for services (supplementary feeding and veterinary costs). If an individual withdraws some or all of her sheep from the flock for whatever reason (e.g.
she marries and leaves the area), then she must pay the group for the services her animals have received. Unless this is pointed out at the formation of the group it can cause problems later. The group must decide how many animals they need to sell each year to repay their contracts (initial, services and ram contracts) and whose animals they will sell for this purpose. Because so many problems can arise from individual ownership of sheep, it is strongly recommended that new groups have joint ownership of all the sheep in their flocks.

..

4. Veterinary care and breeding

Sheep are drenched to kill worms and other internal parasites (Fig. 26). Sheep are sprayed to kill ticks, mites and flies (Fig. 27).

FIG. 26 / FIG. 27

..

SELLING ANIMALS

Whether the animals are jointly owned by the group or are owned by individual women, the members of the group must collectively decide on the selling policy, i.e. when, where and to whom to sell sheep, and which ones. Sheep have to be sold to repay the contracts with the project. They may also be sold to fulfil the financial needs of the members of the group, and slaughtered for home consumption, usually at festival times. When a group is newly formed, all the male lambs are sold to repay the project according to the contract. But as the flock becomes larger, there is more opportunity for home consumption and selling for financial gain.

Throughout the tropics there is a high demand for sheep meat. In northern Togo the price is about $1.25 per kg liveweight for most of the year, except for the month before the Islamic festival of Tabasky when it is even higher. The women sell their lambs when they are aged between six months and one year. The sales either take place in the local market or traders come to the village. These traders may resell the lambs locally or transport them to the coast.
Another possibility is to sell the lambs to a farmer who operates a finishing unit, where the lambs are given a good diet so that they gain weight rapidly and develop a superior carcass quality.

ADVICE FOR NEW GROUPS

New groups of sheep producers need considerable technical advice and financial support. In most tropical countries, government extension services aimed at sheep production are minimal, and it is probably easier for new projects to provide the support services through their own field officers rather than rely on the state services.

Good communication between the project and the groups of women is essential. The field officers are the people who most frequently visit the groups, but it is important that the other more senior and specialist members of the project also take an active interest in the groups and are seen to do so. The field officers travel to the villages on motorcycles. When a new group is formed, a field officer visits frequently to help with the

..

Sheep are vaccinated to prevent PPR (Fig. 28). Every morning the floor of the night enclosure is swept, and the food and water troughs cleaned (Fig. 29). Hoofs are trimmed at the beginning of the rainy season (Fig. 30).

FIG. 28 / FIG. 29 / FIG. 30

..

construction of the enclosure and shelter. When the animals arrive the field officer will probably visit the group daily to make sure they are looking after the animals properly. Once the flock is established, the field officer visits the group regularly every two weeks. Besides giving advice and support, he or she completes a "visit form" (page 87). On this form there are 10 columns, each corresponding to one aspect of management. The officer inspects each of these aspects of management. If it is satisfactory, he or she records a 1 in the appropriate box. If not he or she puts a 0. In the example on page 87, the group has done everything correctly except that they have not provided a mineral block.
The total score for the visit (9 in this example) is recorded in the right-hand column. Then, after ten visits, the grand total is put in the bottom right-hand box. This total can be compared with the totals for other groups. In this way the women are given an indication of their success. In the early stages of production, before there is much financial reward, this method of assessment gives a feeling of achievement.

If an animal is sick when the field officer makes his regular visit, or if the women notice a sick animal at another time and contact the project, the veterinary officer is called in.

TRAINING

Most of the women in sheep production groups have not previously kept sheep, so they need to learn about basic sheep management. Even those who are familiar with traditional sheep production systems need to be told about the improved, semi-intensive system. There are several ways in which this is done:

1. Informal discussion between the project staff and the women. Typically the women might suggest to the field officer that it doesn't matter if the sheep are grazed for only a few hours each day instead of the recommended 8 hours. The field officer should then explain again the importance of good feeding, and in particular an adequate grazing period.

2. Meetings between the staff of the project and the groups of women. This is a method widely used for new groups to tell them how to look after their sheep.

..

FIG. 31. One ram is needed for every 30 ewes. The project staff select the fastest-growing ram lambs and the farmers use these for breeding. Male lambs not needed for breeding are castrated.

..

3. Visits of one group to another, i.e. learning by example. This method gives the women the clearest picture of how they should look after sheep. It is strongly recommended that all new groups visit an established group. If this is not possible, they should visit an established individual producer or an experimental farm.

4.
Field days when several groups of women meet together. In northern Togo highly successful field days are held, and prizes are given out to those groups which achieve the highest scores in the management assessment ("visit forms"). The particularly good production groups can then pass on helpful hints to the others.

5. Courses of one or more days for certain members of groups, e.g. presidents or secretaries, to enable them to do their jobs more effectively. These courses (and also field days) should be arranged during the dry season when the women do not have to work in the fields.

6. In northern Togo each group has a production notebook which contains visit forms and some simple written information and diagrams about sheep. This book remains with the group so that it is available for people to look at.

7. Diagrams drawn large and clearly, for use in meetings with the women.

8. A videotape showing aspects of production is a novelty to most villagers and will certainly stimulate interest. Probably a videotape should be used only once groups are well-established. If used too early the women may expect magical outputs from their flock.

RECORDS

The written records are the responsibility of the secretary and treasurer. The flock diary and the production notebook are kept in the sheep shelter so they are available for anyone to read even if the secretary is not present. They are usually wrapped in a plastic bag and placed in the roof. The inventory of concentrate food and mineral

..

5. How the group operates

Each group chooses a president, who organises the group; a secretary, who keeps the flock records; and a treasurer, who looks after the money and keeps accounts (Fig. 32).

FIG. 32

..

blocks and the financial accounts are kept by the secretary and treasurer in their houses.

1. Flock diary. In this book all animals which are born, purchased, die, are sold, eaten or lost are recorded, as shown on page 86. It is filled in each time animals enter or leave the flock.
By looking at the flock diary the members of the group or the project staff can see how many animals there are in the flock and how many animals have been sold, and can calculate the lambing rate or mortality rate.

2. In the inventory of supplementary food and mineral blocks the secretary records the delivery of the concentrate food (cottonseed) and the mineral blocks to the project store, and their removal for consumption by the flock (page 82). This inventory shows the group when they must request more deliveries and shows the total used each year.

3. Financial accounts record every monetary transaction of the group (page 84).

4. Production notebook. This book contains diverse information about the flock and is filled in by the staff of the project. The first twelve pages are visit forms (page 87). The field officer checks the standard of management and fills in one line of the visit form on each of his fortnightly visits. There are five pages of livestock inventory (page 81). This is where the basic details of individual animals (identification number, sex, breed, owner, presence each year, etc.) are recorded. When the field officer gives a new animal an identification number, he records the details of the animal here. There is a calendar of production for seasonally breeding flocks (page 85). The top line is the same for all years and shows when the flock must be sprayed against ticks. Beneath this there are seven empty rows. Each year the field officer glues in the schedule for breeding, showing the times the rams will be with the ewes, the lambing season, and the times for castration, vaccination and other tasks. The production notebook also contains several pages of simple information relating to the management of the flock, veterinary care etc., such as the coloured pages of this booklet and page 26. These

..

Each day two women look after the sheep. All the sheep jointly belong to the group of women, not to individual women.
The whole group of women decide when to sell or slaughter lambs and what to do with the money (Fig. 33).

FIG. 33

..

pages should be modified according to the local conditions and management system.

PRODUCTIVITY LEVELS

Before a new technology is put forward to groups of women or other farmers, it must be tested over a period of years. A successful system must be productive. The productivity or output of the system is measured by the lambing rate of ewes, growth rate of lambs, and mortality rates of lambs and adults. The figures for the productivity of semi-intensive flocks in northern Togo given below show how productivity is calculated. These values can be used as a standard against which the productivity of other flocks is compared.

Lambing rate = number of lambs born per ewe, per year = 1.58. In northern Togo most ewes have single lambs (not twins), and the lambing interval is about eight months.

Mortality rate of lambs = number of lambs which die, divided by total number of lambs born = 0.12.

Weaning rate = number of lambs surviving to weaning per ewe, per year = lambing rate x (1 - mortality rate) = 1.58 x (1 - 0.12) = 1.40.

Mortality rate of adults = number of ewes and rams which die, divided by total number of ewes and rams in flock = 0.08.

Culling rate of ewes = number of old ewes removed from the flock, divided by total number of ewes in flock = 0.12.

Weight of lambs at weaning (3 months) = 12 kg. Weight of ewes = 22 kg.

Offtake = number of animals removed from the flock for productive purposes per year, divided by the number of adult animals in the

..

The field officer from the project regularly visits the group:
- To check that the women are looking after the sheep correctly;
- To look at the flock records; and
- To sort out any problems (Fig. 34).

FIG. 34

..

flock = [1.58 x (1 - 0.12)] - 0.08 - 0.12 + 0.12 = 1.3 (the lambs weaned, less the female lambs retained to replace dead and culled ewes, plus the culled ewes sold). It is assumed for the purposes of this calculation that flock size is constant.

Productivity index = weight of lamb weaned divided by ewe weight, per year = 1.58 x (1 - 0.12) x 12/22 = 0.75.

It is common for many of the above measures of productivity which include the word "rate", and also offtake, to be expressed as percentages. These percentages are easy to calculate: simply multiply the appropriate rate by 100. For example, if the lambing rate is 1.58, the lambing percentage is 158%.

The values given above were obtained in semi-intensive systems in the sub-humid zone of West Africa. The productivity of flocks with unimproved management or those in less favourable areas is bound to be lower. For traditional systems in West Africa the productivity index is usually between 0.4 and 0.5.

ECONOMICS

The initial analysis is based on the assumption that the flock has reached its final size of 75 ewes.

Output

The total value of animals sold per year is about $2 535.09, calculated as follows:

Animals sold    Number   Weight kg   Price $/kg   Value $
Male lambs        52        22         1.25       1430.00
Female lambs      37        20         1.25        925.00
Culled ewes        9        23         0.87        180.09

Running costs (calculated per ewe in the flock)

..

6. Training and records

The field officer from the project shows the women how to look after their sheep. The women visit other groups of sheep producers. The president and secretary go on a training course.

FIG. 35

..

Item                                                  Cost ($)
Veterinary products (75 ewes x $1.25/ewe)               93.75
Mineral blocks (75 ewes x 2 kg/ewe x $0.435/kg)         65.25
Food supplements (75 ewes x 36 kg/ewe x $0.033/kg)      90.00
Maintenance of equipment                               100.00
Total                                                  349.00

Net return (i.e. animals sold - running costs) = $2 535.09 - $349.00 = $2 186.09, or $29 per ewe.

Capital invested

Item                       Cost ($)
Ewes (75 x $30)            2250.00
Equipment:
  fence                      97.00
  six food troughs           78.00
  three water troughs        66.00
  forage rack                30.00
Total equipment             271.00
TOTAL                      2521.00

Rate of return on capital invested = $2 186.09/$2 521.00 = 87%. This is a very favourable rate of return.

The above analysis has not considered the cost of labour. If labour is included in the analysis, then labour cost = 2 x 365 days per year at $1.00 per day = $730.00. Profit = $2 186.09 - $730.00 = $1 456.09, and rate of return = $1 456.09/$2 521.00 = 58%.

..

The secretary writes in the Flock Diary every time an animal is born, bought, dies, is eaten or lost (Fig. 35, p. 51). The secretary keeps a record of supplementary food and mineral blocks. The treasurer records every monetary transaction of the group.

The field officer visits the group every two weeks. He checks that the management of the flock is good, and records it on the visit form.

7. Productivity and economics

A flock of 75 ewes usually produces each year:
52 male lambs worth $1430
37 female lambs worth $925
9 cull ewes worth $180
Total sales = $2 535

The costs for a flock of 75 ewes are:
Veterinary products $94
Mineral blocks $65
Food supplements $90
Maintenance of equipment $100
Total costs = $349

Profit = sales - costs

..

INCREASE IN SIZE OF FLOCK

The women's groups each start with a small number of ewes and build up to a larger flock. The suggested maximum is 75 ewes. This increase is possible provided that the number of female lambs weaned is greater than the number of ewes which die or are culled. The most reliable way to find out how rapidly flock size will increase is to write down when the ewes are expected to lamb and when their female offspring will give birth. Table 8.1 shows this method used with the productivity data given on page 56, assuming that all female lambs are kept in the flock. In this case the number of ewes is 30 in the first year, 37 in the second year, 40 in the third year, 56 in the fourth year, 74 in the 5th year, and only in the 6th year of production is the target of 75 ewes reached. If the lambing rate is lower or the mortality rate higher than in this example, the flock will grow more slowly. The same will be true if not all the female lambs are kept in the flock. Also if a group starts with fewer than 30 ewes, it will take longer for them to reach a flock size of 75.
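The productivity and economics figures above can be checked numerically. The sketch below simply re-derives them; every rate, weight and price is taken from the text, while the variable names and the decomposition of offtake into "lambs sold plus culls sold" are our own illustration.

```python
# Check of the productivity measures and the economics of a
# full-sized (75-ewe) semi-intensive flock, using the values in the text.

lambing_rate    = 1.58   # lambs born per ewe per year
lamb_mortality  = 0.12
adult_mortality = 0.08
culling_rate    = 0.12
lamb_weaning_kg = 12.0
ewe_kg          = 22.0

weaning_rate = lambing_rate * (1 - lamb_mortality)            # ~1.39
productivity_index = weaning_rate * lamb_weaning_kg / ewe_kg  # ~0.76

# Offtake: lambs sold after keeping female replacements for dead and
# culled ewes, plus the culled ewes themselves, per adult per year.
replacements = adult_mortality + culling_rate
offtake = (weaning_rate - replacements) + culling_rate        # ~1.31

# Economics: sales (number x weight x price) and running costs.
sales = 52 * 22 * 1.25 + 37 * 20 * 1.25 + 9 * 23 * 0.87  # $2 535.09
running_costs = 93.75 + 65.25 + 90.00 + 100.00           # $349.00
net_return = sales - running_costs                        # $2 186.09
capital = 2250.00 + 271.00                                # ewes + equipment
print(round(net_return, 2), round(net_return / capital, 2))
```

Note that the text rounds the weaning rate to 1.40 and the productivity index to 0.75; the unrounded values are about 1.39 and 0.758.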
ECONOMICS OF PRODUCTION IN THE FIRST YEARS

When a new group starts sheep production, they find that in the first few years there is little economic return from the flock, as they have only a small number of ewes, are keeping all the female lambs to build up the flock, and have to use the money from the sale of male lambs to repay their contracts.

First year

Repayment of initial contract (one-quarter of total contract value of $363, less 25% subsidy) = $68.00. Payment of service contract for first year ($37.50 for veterinary care, $26.10 for salt blocks, $36.00 for cottonseed) = $99.60. Maintenance of equipment = $100.00. Total payments ($68.00 + $99.60 + $100.00) = $267.60. Net return = -$267.60.

Profit = sales - costs = $2 535 - $349 = $2 186.

But in the beginning, a group does not have 75 ewes. If there are 30 ewes in the 1st year, there will be: 37 ewes in the 2nd year, 40 ewes in the 3rd year, 56 ewes in the 4th year, 74 ewes in the 5th year, and 75 ewes in the 6th year. The 1st year is difficult for a new group because:
- there are not yet any lambs ready for sale,
- but the group has to pay for food, veterinary products and maintenance of equipment.
The project must give a loan for the 1st year. In the 2nd year the profit is $367. The profit gets bigger each year until the 6th year, when it is $2 186.

TABLE 8.1. Calculation of increase in number of ewes in flock

[The month-by-month body of this table is illegible in the scanned source. For each two-month period over six years it records the number of "new" and "old" ewes lambing, the surviving female lambs produced and the month in which they will themselves lamb, and the running total number of ewes, which rises from 30 to reach 75 by the sixth year.]

* Surviving female lambs per lambing calculated as weaning rate/2 x lambing interval x adult survival rate = 1.4/2 x 8/12 x (1 - 0.08) = 0.43.

Second year

The 30 ewes produce (in the first year) 21 male lambs for sale, sold at a weight of 22 kg at a price of $1.25 per kg, which give an income of $577.50, and 4 cull ewes for sale, sold at a weight of 23 kg at a price of $0.87 per kg, which give an income of $80.04. Total income from flock = $577.50 + $80.04 = $657.54.

Repayment of initial contract (one-quarter of total contract value of $363.00, less 25% subsidy) = $68.00. Payment of service contract for second year ($46.25 for veterinary care, $32.19 for salt blocks, $44.40 for cottonseed) = $122.84. Maintenance of equipment = $100.00. Total payments ($68.00 + $122.84 + $100.00) = $290.84. Net return ($657.54 - $290.84) = $366.70.

Third and subsequent years

The net return in the third and subsequent years is calculated in a similar way to that for the second year. The number of ewes in the flock increases each year until the target of 75 is reached in the sixth year, so the service contract increases until the sixth year, and the total income of the flock increases until the seventh year. The repayment of the initial contract is a constant value for the first four years, and then is zero. The maintenance of equipment is constant for each year. These values are summarised in Table 8.2. The 25% subsidy has been taken into account only in the repayment of the initial contract. All other values are gross.

TABLE 8.2.
Economic performance of the flock during the first years

Year                          1     2     3     4     5     6    7 and subsequent
Number of ewes               30    37    40    56    74    75    75
Animals sold:
  male lambs                  0    21    26    28    39    52    52
  female lambs                0     0     0     0     0    36    37
  cull ewes                   0     4     4     5     7     9     9
Total income ($)              0   658   795   870  1213  2510  2535
Repayment of initial
  contract ($)               68    68    68    68     0     0     0
Payment of service
  contract ($)              100   123   136   186   246   249   249
Maintenance of
  equipment ($)             100   100   100   100   100   100   100
Total payments ($)          268   291   304   354   346   349   349
Net return ($)             -268   367   491   516   867  2161  2186

Table 8.2 shows that the first year is financially very hard for the groups. They are unlikely to have any income from their flock, yet they are expected to pay a total of $268. The project has to adopt a sympathetic approach to the repayment of loans, perhaps even to schedule the repayment installments so that no money is paid in the first year. The timing of the sale of the first lambs is critical, and the distribution to the group of pregnant ewes, or of ewes with lambs at foot, may be worth considering. An alternative solution is to encourage the group to grow vegetables or undertake another activity which will give an income in the first year but which will require very little capital input.

LOANS AND SUBSIDIES

Loans and subsidies are given to make semi-intensive sheep production more financially attractive, and so to encourage groups of women to start keeping sheep. In particular, sheep production needs investment of capital for animals and fixed equipment. Peasant women possess little or no money, so that unless they are given a loan they are unable to start production. The Project of North Togo gives a subsidy of 25% for all equipment and services provided. This means that if a group receives cottonseed worth $10, they pay only $7.50. A loan is given to each new group to help them buy essential equipment. Only 75% of this loan is paid back.
As specified in the initial contract (page 65), one-quarter of the amount due is paid immediately. Three further installments of the same amount are paid back after one, two and three years. No interest is paid on these loans. Sheep may also be provided as a loan. For each ewe given to the group, they must pay back one young healthy ewe within five years. The money due to the project for services (supplementary food, veterinary care, etc.) must be paid at the end of each year.

8. Problems and solutions

The potential problems of sheep production by groups of women are innumerable but, as the author saw in the Project of North Togo, with care and common sense most of these problems can be avoided. The following list is based on what the women in the groups and the staff of the Project of North Togo consider would be problems if the appropriate action were not taken, as well as problems that have arisen in similar circumstances elsewhere. Almost all these problems would arise also with men's groups for sheep production. Only item 15 (support of husbands) is exclusive to women. Two further items, the support of the village leaders (item 14) and literacy (item 20), are likely to be more serious problems for women than for men.

1. The women have no capital, so they are unable to start a group unless they are given a loan to buy equipment, supplementary food and veterinary care for the first year. Some women are able to provide their own ewes; others are not.

2. Many village women do not know how to look after sheep. The groups must receive advice and training from the field staff.

3. It is difficult to herd a new flock. For the first few days it will be necessary for more than two women to herd the flock until the sheep recognise each other and know where their night enclosure is.

4. There is a potential conflict of labour between the needs of the group and the needs of the family (carrying water, looking after children, working in the fields, etc.).
In the Project of North Togo the water supplies to the villages were improved, so that the women no longer have to carry water from up to 15 km away. They are able to use for sheep production some of the time they previously spent carrying water.

5. The work with the flock may be inadequately performed because the group as a whole do not realise the value of cleaning the water troughs, shepherding the flock for at least 8 hours, etc. The field officer must give adequate training to the women and check that the basic tasks are being performed adequately.

6. Some individual women are lazy and do not do their share of the work. It is necessary for these women to know why their work is important to the success of the group, and to be put under pressure to work harder by the rest of the group and particularly by the president.

7. Some members live too far away to participate fully in the activities of the group. All members must live near one another.

8. All the members of the group must feel involved with the group and must care about its success.

9. In the early stages the women feel that they are doing a lot of work and getting little reward. The loans must be arranged so that the group gets some financial benefit in the first two years.

10. In groups where the women each provide ewes when the group is formed, there can be problems in later years if the women have initially provided different numbers of ewes. These problems relate to how the profits of the group are distributed and whose lambs are sold to repay the contract. The solution is either to insist that all the women provide the same number of ewes, or that the use of the lambs is discussed and understood by all the members when the group is formed.

11. Occasionally a woman leaves the group because she marries, leaves her husband or her family moves away. How many sheep is she entitled to take with her? How much should she pay the group for the services her sheep have received?
How much should she receive for the work she has done with the communal flock? These questions must be resolved by discussion within the group.

12. A woman who has initially provided ewes wants to remove one or more for her own purposes. This action must have the approval of the whole group, and the woman must pay the group for the services her sheep have received.

13. If some or all of the ewes, or the land for the night enclosure, have been provided by an outside body (e.g. church, philanthropic neighbour) but there is no written contract, misunderstandings can arise regarding how much the group should pay back. It is advisable to have a written contract between the group of women and any outside body with interests in their sheep flock.

14. The village leaders control the activities in the village, and unless they support the women's group, the group will find it difficult to operate. For instance, land is traditionally allocated to men, who are regarded as the heads of families, and the women have no right to demand land for their sheep. The women must secure the support of the village leaders.

15. The women must have the support of their husbands, who should understand the duties of the women and the benefits of sheep production.

16. The production system must be a technology which has been tried and tested and shown to work satisfactorily in the area.

17. There must be technical solutions to potential disease problems.

18. There must be a reliable supply of the necessary inputs, such as mineral blocks, supplementary food and veterinary drugs.

19. The field officers employed in the project must be well motivated and capable of dealing with potential problems.

20. There may be no woman in the group who can read and write sufficiently well to keep the records for the group. If this is so, it is necessary for an outsider to keep the records, and to train one of the women so that she is able to do so.

21.
The record books may not be filled in because the secretary does not realise their importance. The secretary, in particular, needs repeated training.

22. No donor agency can be expected to support a development project indefinitely. Therefore the long-term aim must be to make the groups able to survive on their own when the project finishes. Once they are able to operate without a financial subsidy, they will still need advice and veterinary care. With government backing, sheep production groups can be serviced by staff of the national extension service.

Assumptions (for Table 8.1): culling plus death rate of ewes is 0.2, i.e. survival from one year to the next is 0.8. Lambing interval is 8 months. At each lambing, a ewe produces on average 0.43 female lambs that will survive to their first lambing (see footnote to table). First lambing is at age 18 months. Initial ewes lamb in the 6th month of the first year.

APPENDIX 1: CONTRACTS

Initial contract for equipment

Contract number .........   Group number .........

Between
1. The Group of ........., District of ........., hereafter called the "Group", and
2. The Project of North Togo, hereafter called the "Project", represented by .........

I. COMMITMENTS OF THE PROJECT

Article 1. The Project will deliver at a fixed price all the equipment which is the subject of the present contract and appears in section IV.
Article 2. The Project will supervise the installation of the equipment.
Article 3. The Project will replace materials which are faulty.
Article 4. The Project will give advice on animal production.

II. COMMITMENTS OF THE GROUP

Article 5. The Group will follow the recommendations of the Project for the establishment of a night enclosure and/or cultivated forage.
Article 6. The Group will pay back the total of the present contract in case of theft, loss or deterioration of materials due to their negligence.
Article 7. The Group will not sell or give away the materials without the prior agreement of the Project.
Article 8.
The Group will follow the improvement schemes proposed and respect the technical advice of the Project.

Article 9. The Group will sign an annual contract with the Project for services rendered.
Article 10. In the case of the establishment of a forage plot, the Group will provide protection to the forage plot against other ruminants and fire.

III. SPECIFIC AGREEMENTS

Article 11. The equipment which is the subject of the present contract remains the property of the Project until the last annual payment has been made. In case of the improper application of the advice given, or if an annual payment is not made, or if the contract for services rendered is not accepted, the Project reserve the right to take back the aforesaid equipment.
Article 12. In case of dispute, the two parties will abide by the decision of a competent authority.

IV. SUBJECT OF CONTRACT

Prices are in US dollars and are current for the year 19...

1. Night enclosure (Note: the price does not include the construction of the shelter, which is the sole responsibility of the Group).

                      Size of enclosure (number of ewes)
Component            30 ewes   40 ewes   50 ewes   75 ewes   Total
Traditional fence      53        60        63        73      .....
Tensioned fence        NA        80        83        97      .....

Note: for the tensioned fence, the price includes the cost of teak posts.

2. Compulsory equipment (according to the number of ewes present)

                             Number of ewes
                  10        20        30        40        50        75
Equipment       N Price   N Price   N Price   N Price   N Price   N Price   Total
Food troughs    1   13    2   26    2   26    3   39    4   52    6   78    .....
Water troughs   1   22    1   22    1   22    2   44    2   44    3   66    .....

3. Optional equipment

Item          Quantity   Unit price   Total
Forage rack    .....        30        .....

4. Cultivated forage

Forage                              Area    Unit price   Total
Cultivated forage (species .....)   .....     .....      .....

GRAND TOTAL .....

With the aim of encouraging new sheep production groups, the Project has decided, for the year 19.., to give a subsidy equal to 25% of the total sum invested. The total sum due to the Project amounts to ........., which is to be paid back as follows:

Deposit: 25% of the total sum due to the Project, namely .....
1st annual payment (Dec 198..): 25% of the total, namely .....
2nd annual payment (Dec 198..): 25% of the total, namely .....
3rd annual payment (Dec 198..): 25% of the total, namely .....

Payment               Date due     Amount due   Date    Amount received   Stamp
Deposit               Feb 198..      .....      ....        .....         .....
1st annual payment    Dec 198..      .....      ....        .....         .....
2nd annual payment    Dec 198..      .....      ....        .....         .....
3rd annual payment    Dec 198..      .....      ....        .....         .....

Type of enclosure .........   Length of fence .........
Equipment: Water troughs .....   Food troughs .....   Forage racks .....

(This page in triplicate)

INITIAL CONTRACT FOR EQUIPMENT

Written at ......... on the ......... 19..
For the Project of North Togo: Signature ......... Name ......... Position .........
Group President: Signature ......... Name .........

ANNUAL CONTRACT FOR SERVICES RENDERED

Contract number .........   Group number .........

Between
1. The Group of ........., District of ........., hereafter called the "Group", and
2. The Project of North Togo, hereafter called the "Project", represented by .........

I. COMMITMENTS OF THE PROJECT

Article 1. The Project will supply the Group with mineral blocks and agro-industrial by-products according to the quantities stated in the present contract.
Article 2. The Project will carry out all the prophylactic treatments (vaccinations, treatment of internal and external parasites) at the times specified by the calendar of prophylaxis, and provide the normal veterinary requirements to keep the animals healthy.

II. COMMITMENTS OF THE GROUP

Article 3. The Group will at all times provide the animals with a mineral supplement in the form of a salt block. A fixed number of blocks will be purchased from the Project for the duration of the present contract.
Article 4. The Group will buy from the Project agro-industrial by-products according to the quantity fixed in the present contract.
Article 5. The Group will respect the periods of distribution of the by-products, namely from August to November.
Article 6. The Group will contact the staff of the Project warehouse before the end of July to arrange the distribution of their supplements.

Article 7.
The Group will not sell or give away the supplements (mineral blocks or agro-industrial by-products) without the prior agreement of the Project.
Article 8. The Group will help the field officers when they visit the flock.
Article 9. The Group will pay the total of the present contract at the beginning of the year or, for new Groups, when they join the Project.

III. SPECIFIC AGREEMENTS

Article 10. If, for whatever cause, there is a substantial reduction in the number of ewes during the year covered by the present contract, the Project will repay the Group for the supplements not used, according to the number of ewes removed and the number of months remaining in the year.
Article 11. If a substantial number of ewes are introduced into the flock during the year covered by the present contract, the Project reserve the right to demand an increase in the total sum of the contract to cover the extra supplements used, according to the number of sheep introduced and the number of months remaining in the year.
Article 12. For new Groups which join the scheme in the first half of the year, the Project will pay up to half of the total sum of the annual contract. For new Groups which start in the second half of the year, the Project will pay up to a quarter of the total sum of the annual contract.
Article 13. If the contract is not honoured by the Group at the specified time, the Project reserve the right to be paid in kind. This payment will be based on the market price current during the period.

IV. SUBJECT OF CONTRACT

The prices and subsidies which are in force for the year 19.. are as follows:

Item          Price      Subsidy   Price after   No of      Quantity per    Total        Total
                                   subsidy       ewes (1)   ewe/year (2)    quantity     price
                                                                            (3)=(1)x(2)  (4)
Veterinary    1.11/ewe    25%      0.83/ewe       .....         NA            NA         .....
  costs
Mineral       0.44/kg     25%      0.33/kg        .....        2 kg          .....       .....
  blocks
Cottonseed    0.03/kg     25%      0.02/kg        .....       36 kg          .....       .....

GRAND TOTAL .....

Total amount of contract .....   Date payment due .....   Date .....   Amount .....   Stamp .....
Number of ewes .....
                Quantity due              Deliveries (date / quantity)
Commodity        No.     kg        1          2          3          4
Salt blocks      ....   ....    ... / ...  ... / ...  ... / ...  ... / ...
Cottonseed       ....   ....    ... / ...  ... / ...  ... / ...  ... / ...

(This page in triplicate)

ANNUAL CONTRACT FOR SERVICES RENDERED

CONTRACT FOR SALE OF SELECTED RAMS

Contract number .........   Group number .........

Between
1. The Group of ........., District of ........., hereafter called the "Group", and
2. The Project of North Togo, hereafter called the "Project", represented by .........

I. COMMITMENTS OF THE PROJECT

Article 1: The Project will sell selected rams to certain Groups. These rams will be aged less than one year, and their number proportional to the number of ewes in the flock (1 ram for 30 ewes).
Article 2: The Project will supply all the technical advice and treatments necessary for successful animal breeding.

II. COMMITMENTS OF THE GROUP

Article 3: The Group will pay back to the Project the total sum of the present contract in the case of theft, loss or death of rams due to their negligence.
Article 4: The Group will castrate all the males in the flock before the arrival of the selected males.
Article 5: The Group will sell to the Project at a fixed price the best male lambs sired by the selected rams, and will castrate all other male lambs.
Article 6: The Group will follow the technical advice of the Project and in particular will give food supplements (cottonseed) in the second half of the rainy season (from August to the end of November).

Article 7: The Group will sign an annual contract with the Project for services rendered.
Article 8: In order to avoid inbreeding, the Group will withdraw the selected rams from the flock after two years of use, namely in the month of ......... 19... At that date the Group will be free to dispose of the rams as they please (sale, barter, consumption), provided that the present contract has been scrupulously respected. The Group will be able to buy new rams from the Project.
Article 9: The Group will not sell, give away, lend or eat the selected rams without the prior approval of the Project.

III. SPECIFIC AGREEMENTS

Article 10: The rams which are the subject of the present contract remain the property of the Project until the last annual payment is received. If the rams are not used as recommended, or if an annual payment is not made, or if the annual contract for services rendered is not accepted, the Project reserve the right to take back the rams.
Article 11: If a selected ram dies, the Group will call in the animal production field assistant, who will conduct a post-mortem to identify the cause of death.

IV. SUBJECT OF CONTRACT

The present contract concerns the following animals:

Ram No   Liveweight   Price/kg   Price of ram   Remarks
 .....     .....        1.17        .....        .....
 .....     .....        1.17        .....        .....
 .....     .....        1.17        .....        .....
Total                               .....

The total sum due to the Project from the Group is ........., which is to be paid back in the following way:
First year (Dec 19..): 50% of the total sum due to the Project, namely .....
Second year (Dec 19..): 50% of the total sum due to the Project, namely .....

(This page in triplicate)

CONTRACT FOR SALE OF SELECTED RAMS

Contract No .........   Name of Group .........   Village .........   District .........
Total amount of present contract .........

Having read and understood the articles above, the undersigned agree to respect the terms of the present contract.

Group President: Signature ......... Name .........
Written in triplicate and with good faith at ......... on the ......... 19..
For the Project of North Togo: Signature ......... Name ......... Position .........

Payment               Date due    Amount due   Date    Amount received   Stamp
1st annual payment    Dec 19..      .....      ....        .....         .....
2nd annual payment    Dec 19..      .....      ....        .....         .....

Ram No .........   Weight (kg) .........

CONTRACT FOR LOAN OF BREEDING EWES

Contract number .........   Group number .........

Between
1. The Group of ........., District of ........., hereafter called the "Group", and
2. The Project of North Togo, hereafter called the "Project", represented by .........

I.
COMMITMENTS OF THE PROJECT

Article 1: With the aim of improving the immediate profitability of new groups, the Project will lend young breeding ewes which have been given the necessary prophylactics.
Article 2: The Project will replace ewes which die within two months of distribution, provided that the deaths are not caused by the Group.
Article 3: The Project will give advice on sheep production.

II. COMMITMENTS OF THE GROUP

Article 4: The Group will pay back to the Project the total value of the breeding ewes in the case of theft, loss or death of ewes due to their negligence.
Article 5: The Group will not sell or give away the ewes before the total has been repaid.
Article 6: The Group will follow the technical advice of the Project.
Article 7: The Group will sign an annual contract with the Project for services rendered.

III. SPECIFIC AGREEMENTS

Article 8: If an annual payment is not made, the Project reserve the right to take back the ewes.

IV. SUBJECT OF CONTRACT

Article 9: The number of young ewes lent to the Group is ......... and the total value of these animals is .........
Article 10: The Group will pay back the same number of young ewes within a period of four years, starting after one year. The repayment will be in the form of ewe lambs aged between 6 months and one year, with a minimum weight of 18 kg. Each year one-quarter of the total number of animals will be repaid.

1. ..... ewe lambs due in Dec 19..
2. ..... ewe lambs due in Dec 19..
3. ..... ewe lambs due in Dec 19..
4. ..... ewe lambs due in Dec 19..

(This page in triplicate)

CONTRACT FOR LOAN OF BREEDING EWES

Contract No .........   Name of Group .........   Village .........   District .........
Total amount of present contract .........

Payment               Date due    No. ewes due   Date    Number received   Stamp
1st annual payment    Dec 19..      .....        ....        .....         .....
2nd annual payment    Dec 19..      .....        ....        .....         .....
3rd annual payment    Dec 19..      .....        ....        .....         .....
4th annual payment    Dec 19..      .....        ....        .....         .....

APPENDIX II: RECORDING FORMS

LIVESTOCK INVENTORY

Group .........   Animals present on (date) .........
Animal No.
| Sex | Age | Owner (the body of this blank form consists of ruled lines for entries).

INVENTORY

Group .........

       Cottonseed                    Mineral blocks
Date | In | Out | Stock       Date | In | Out | Stock

Determination of age of sheep:

Teeth                                        Age
Milk teeth, not worn                         0-4 months
Milk teeth, worn                             4-14 months
1 pair permanent incisors                    14-20 months
2 pairs permanent incisors                   20-26 months
3 pairs permanent incisors                   2-3 years
4 pairs permanent incisors, not worn         3-5 years
Full mouth, 1 pair incisors worn             5-6 years
Full mouth, 3 pairs incisors worn            6-7 years
All incisors worn                            more than 7 years

FINANCIAL ACCOUNTS

Date | Transaction

CALENDAR OF PRODUCTION

Group ......... Year .........   Columns: Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec

Symbols: VP = vaccination (PPR); DA = drench adults; spray; T = trim hooves; supplementary food; DL = drench lambs; selection and castration; mating; lambing.

FLOCK DIARY

Group .........
Columns (kept separately for ewes and for lambs): number at the start, born, bought, sold, consumed, died, lost, and balance at the end, with the relevant dates.

VISIT FORM

Group .........
Columns: name and position of visitor and date of visit; state of the enclosure and troughs; supply of water, salt and concentrates; state of the sheep; cooperation; total score for the week.

BIBLIOGRAPHY

ABELA, M.T. (1979) Integration des femmes dans le developpement rural (rapport de consultation). [Integration of women in rural development (report of consultation).] Food and Agriculture Organization of the United Nations, Rome, Italy.

ATOR, P.
(1986) Working with rural women's groups (general suggestions for group organizers, project managers and field workers). Food and Agriculture Organization of the United Nations, Rome, Italy.

AWADE, M.; KATAMINA, K. (1984) Rapport sur la visite des familles d'eleveurs de moutons encadrees par le Projet Nord-Togo (zone de Helota, du 28 au 29 Novembre 1984). [Report of a visit to families of sheep producers in the North Togo Project (Helota Zone, November 1984).] Ministere de la Sante Publique, des Affaires Sociales et de la Condition Feminine, Lome, Togo.

BVA (1976) Handbook on animal diseases in the tropics. 3rd edn. British Veterinary Association, 7 Mansfield Street, London, U.K.

GATENBY, R.M. (1986) Sheep production in the tropics and sub-tropics. Longman, London, U.K.

GOHL, B. (1981) Tropical feeds: feed information summaries and nutritive values. Food and Agriculture Organization of the United Nations, Rome, Italy.

ILCA (1979) Livestock production in the subhumid zone of West Africa: a regional review. Systems Study 2. International Livestock Centre for Africa, P.O. Box 5689, Addis Ababa, Ethiopia.

PROJET NORD TOGO (1986) Projet de creation de 2 groupements cooperatifs feminins d'elevage ovin a Niamtougou, region de la Kara. [The formation of two women's cooperative groups for sheep production in Niamtougou, Kara Region.] Projet Nord-Togo, B.P. 20, Kara, Togo.

SAFILIOS-ROTHSCHILD, C. (1983) Women in sheep and goat production and marketing. Paper presented at the expert consultation on women in food production, Rome, December 1983. Food and Agriculture Organization of the United Nations, Rome, Italy.

VAN VLAENDEREN, G. (1984) Etude comparative de differents systemes d'elevage ovin dans la region de la Kara. [Comparative study of different systems of sheep husbandry in Kara Region.] Projet Nord-Togo, B.P. 20, Kara, Togo.

VAN VLAENDEREN, G. (1984) Resultats economiques de differents systemes d'elevage ovin. [Economic results of different sheep husbandry systems.]
Projet Nord-Togo, B.P. 20, Kara, Togo.

VAN VLAENDEREN, G. (1985) Northern Togo sheep husbandry development programme. World Animal Review 3:19-26.

VAN VLAENDEREN, G. (1986) Les parcelles fourrageres ou banques de fourrages. [Forage plots or fodder banks.] Projet Nord-Togo, B.P. 20, Kara, Togo.

VAN VLAENDEREN, G.; NOIJKOUM, Y. (1986) L'elevage des petits ruminants dans la region Kara: rapport annuel. [Husbandry of small ruminants in Kara Region: annual report.] Projet Nord-Togo, B.P. 20, Kara, Togo.

VANDEWEERD, L. (1985) L'activite elevage-ovin dans les groupements cooperatifs de la region de Kara: situation en Novembre 85 et perspectives de developpement. [Sheep husbandry in cooperative groups in Kara Region: the situation in November 1985 and perspectives of development.] Ministere du Developpement Rural, Kara, Togo.
Contents

- Creating a Versioned Struct Wrapper For a GLIBC Function
- Problems with static linking and structure size changes
- Creating Safe Interfaces For Changing Structure Definitions
- How To Define the Compat Function
- Add A Versions File Entry for the Versioned Symbol
- Define the Structure Version Numbers
- Extend The Structure Definition
- Grow the Structure
- Guard The Structure Definition
- Use Version Named Definitions
- Shortcomings
- Explanation
- Further Implications
- Resources
- Credits

Creating a Versioned Struct Wrapper For a GLIBC Function

In order to accommodate a new feature or ABI change, it may be necessary to change the size of a struct in GLIBC. When a struct is used in a published interface, simply changing its size is dangerous, because doing so would break any existing application that relies on the previous struct size. Changing a published interface requires providing two or more versions of a function using the technique of symbol versioning, where older compatibility interfaces are provided via versioned symbols. The new interface is not versioned, and it becomes the new default.

Unfortunately, when the new interface change involves a structure size change, using only symbol versioning to provide symbols for the old interface does not guarantee full compatibility, since the static archive of GLIBC (libc.a) doesn't provide symbol versioning. Statically linking against libc.a always results in linking to the latest interface for a function; i.e. the versioned interfaces may be there, but dynamic-loader symbol version matching will not find them, since the static linker resolves all symbols at link time.

Problems with static linking and structure size changes

Consider the following example. Our system has GLIBC version 2.9 installed. In libc.so and libc.a there is a function foo() defined in foo.c:

    /* File 'foo.c' */
    #include "bar.h"

    void
    foo (struct bar *b, ...)
    {
      (...)
    }

Function foo() takes a pointer to struct bar as a parameter, and struct bar is defined in bar.h:

    /* File 'bar.h' */
    struct bar /* Version 1 */
    {
      (elements for Version 1)
    };

We have created our own static library archive, app.a, that has components built against GLIBC 2.9. This archive is made up of app.oS and libc_nonshared.a:

    /* Application 'app.c' */
    #include "bar.h"

    int
    main (void)
    {
      struct bar b;
      foo (&b);
      return 0;
    }

Note: for the sake of demonstration, app.oS actually contains the code necessary for an executable, i.e. main().

app.oS was created using:

    gcc app.c -c -o app.oS

The archive was created using:

    ar cr app.a app.oS /usr/lib/libc_nonshared.a

This is linked into an application on our system against the GLIBC 2.9 libc.a using the following:

    gcc -static -lc app.c -o app

(A figure at this point in the original illustrates static linking without a version wrapper.)
    }

Function foo() takes a pointer to struct bar as a parameter, and struct bar is defined in bar.h:

    /* File 'bar.h' */
    struct bar /* Version 1 */
    {
      (elements for Version 1)
    }

We have created our own static library archive, app.a, that has components built against GLIBC 2.9. This archive is made up of app.oS and libc_nonshared.a:

    /* Application 'app.c' */
    #include "bar.h"
    int main()
    {
      struct bar b;
      foo(&b);
      return 0;
    }

Note: For the sake of demonstration, app.oS actually contains the code necessary for an executable, i.e. main().

app.oS was created using:

    gcc app.c -c -o app.oS

The archive was created using:

    ar cr app.a app.oS /usr/lib/libc_nonshared.a

This is linked into an application on our system against the GLIBC 2.9 libc.a using the following:

    gcc -static -lc app.c -o app

The following figure demonstrates static linking without a version wrapper.

Let's say that for GLIBC 2.10 we extend the definition of struct bar:

    /* File 'bar.h' */
    struct bar /* Version 2.0 */
    {
      (elements for Version 1)
      (new elements for Version 2)
    }

The following figure demonstrates static linking with a version wrapper.

If the app.a which was built against GLIBC 2.9 is now linked against the GLIBC 2.10 libc.a, we will encounter data corruption on application execution, as explained in the following section.

Types of Structure Changes and Related Data Corruption

Calling foo() and passing a pointer to a version 1 struct bar, allocated by the application, to a libc.a which understands the version 2 struct bar will be problematic given any of the following structure changes:

- If struct bar grows: libc.a will write off the end of the space allocated for structure version 1.
- If struct bar shrinks: libc.a will only fill in the number of fields it feels necessary, which may not be the number of fields necessary for the application to operate successfully.
- If the fields in struct bar have moved: libc.a will populate the structure in the incorrect way.
There are alignment issues, field ordering issues, field size issues, etc. The application will get back a completely corrupted structure.

Note: Try not to move fields anyway. Some applications, and GLIBC itself, make use of pre-computed field offsets or even access the first field of a structure via the structure pointer itself.

This entire problem of a static library that understands the version 1 struct bar being statically linked against a libc.a that understands the version 2 struct bar could be remedied by an operating system providing static compat archives for libc.a that are built against the version 1 struct bar, but this is not an acceptable general solution, because the introduction of multiple interface changes could cause an explosion of compatibility libraries.

Creating Safe Interfaces For Changing Structure Definitions

In order to prevent the problems mentioned earlier without creating compatibility libraries, we need to:

- Provide compat versions of foo(), built against the old structure size, which call __xfoo(_BAR_VERSION_1, ...).
- Update the Versions file to provide a versioned interface for foo().
- Define the version numbers for the struct bar versions we need to provide.
- Provide a new non-versioned implementation of foo() which simply calls __xfoo(_BAR_VERSION, ...) with the default _BAR_VERSION. This version exists in libc_nonshared.a and is statically linked into all applications, including those which are dynamically linked.
- Extend the structure definition, i.e. modify struct bar in one of a variety of ways.
- Provide a wrapper function for foo() called __xfoo() which takes _BAR_VERSION_N as its first parameter and implements foo() based upon the version indicated by that parameter.

How To Define the Compat Function

First, we must make sure that applications which use the old struct sizes still work when linking dynamically against libc.so.
Create a compatibility function __dyn_foo_2_9() and tell the build system that this is a compat symbol for foo(), in the file old_foo.c. For our example we'll assume that GLIBC_2_10 is the next version of GLIBC, and GLIBC_2_9 is where we're introducing our new struct bar change.

    /* File 'old_foo.c' */
    #include "bar.h"
    #if SHLIB_COMPAT (libc, GLIBC_2_9, GLIBC_2_10)
    int attribute_compat_text_section
    __dyn_foo_2_9 (struct bar *b)
    {
      return __xfoo(_BAR_VERSION_1, b);
    }
    compat_symbol (libc, __dyn_foo_2_9, foo, GLIBC_2_9);
    #endif

Make sure to add this file to the Makefile so it gets built:

    routines += old_foo

Add A Versions File Entry for the Versioned Symbol

Add the symbol foo to the Versions file for export under a specific version. In our case, since we want an exported versioned symbol for 2.9, we need to add that to the GLIBC_2.9 Versions definition:

    libc {
      GLIBC_2.9 {
        foo;
      }
    }

Define the Structure Version Numbers

Introduce struct version definitions at the bottom of the bar.h header file:

    /* File 'bar.h' */
    ...
    #define _BAR_VERSION_1 1
    #define _BAR_VERSION_2 2
    #define _BAR_VERSION _BAR_VERSION_2

Note: Once defined, these definitions cannot be changed.

These are used to tell the as-yet-undescribed wrapper function __xfoo() which version of the function it is to execute. Notice that _BAR_VERSION is defined as the latest _BAR_VERSION_N. This is important, because it means that the default is always the newest interface. When you add a new version N, simply add a new definition and increment the default:

    #define _BAR_VERSION_3 3
    #define _BAR_VERSION _BAR_VERSION_3

Extend The Structure Definition

There are three possibilities for structure changes, and we'll address all three. Version 2 of struct bar may have:

1. fewer elements than version 1,
2. a different element order than version 1, or
3. more elements than version 1.

This gives us three ways to extend the structure:

- Grow The Structure: applicable to struct change 3.
- Guard The Structure Definition: applicable to struct changes 1 and 2.
- Use Version Named Definitions: applicable to struct changes 1 and 2.

In the latter two cases, where there need to be multiple definitions (whether via version-named structs or definition guards), one cannot directly inline the implementation in the __xfoo() wrapper function, since one can only have a single structure definition in a compiled module. For those two, the choice of which method to utilize comes down to how you choose to implement the functions which operate upon struct bar.

Grow the Structure

This is the easy case. We can simply extend struct bar and don't need any special version guards or version-named structures. For the compatibility case, we simply end up allocating more space than necessary for the structure.

Define The Structure

New elements must always follow old elements, so simply add the new elements to the definition:

    /* File 'bar.h' */
    struct bar
    {
      (elements for version 1)
      (new elements for version 2)
    }

Implement The Wrapper Function

The purpose of the wrapper function is to query the first parameter, which is the struct version number, and then execute the correct implementation of function foo() based upon the version requested. A simple definition of the wrapper function __xfoo(), in the case where struct bar is extended, looks something like the following:

    /* File 'xfoo.c' */
    #include "bar.h"
    export __xfoo (int n, struct bar *b, ...)
    {
      switch (n)
      {
        case _BAR_VERSION_1:
          (inline implementation for 'struct bar' version 1)
        case _BAR_VERSION_2:
          (inline implementation for 'struct bar' taking into
           account the new elements for Version 2)
      }
    }

When the structure grows but fields don't change position, there's no danger of writing to the wrong field, so in the _BAR_VERSION_1 case we simply ignore the extra fields.

We need to update the Makefile so that xfoo is compiled:

    # Relevant Makefile
    routines += xfoo
Guard The Structure Definition

Define The Structure

When bar.h is #included by a .c file, that file will need to choose which version of the structure it wants defined:

    /* File 'bar.h' */
    struct bar
    {
    #ifdef _USE_BAR_VERSION_1
      (elements for version 1)
    #else
      (elements for version 2)
    #endif
    }
    ...

Implement The Wrapper Function

__xfoo() must invoke a function for each struct-bar-version-dependent implementation, where those functions exist in other modules, i.e. other .c files. This allows a different definition of struct bar per implementation.

    /* File 'xfoo.c' */
    #include "bar.h"
    export __xfoo (int n, struct bar *b, ...)
    {
      switch (n)
      {
        case _BAR_VERSION_1:
          __foo_BAR_VERSION_1(b);
        case _BAR_VERSION_2:
          __foo_BAR_VERSION_2(b);
      }
    }

    /* File 'foo_v1.c' */
    #define _USE_BAR_VERSION_1 1
    #include "bar.h"
    export __foo_BAR_VERSION_1 (struct bar *b, ...)
    {
      (operate upon the _USE_BAR_VERSION_1-guarded "version 1" 'struct bar')
    }

    /* File 'foo_v2.c' */
    #include "bar.h"
    export __foo_BAR_VERSION_2 (struct bar *b, ...)
    {
      (operate upon the unguarded "version 2" 'struct bar')
    }

Add the new files to the relevant Makefile:

    # Makefile
    routines += xfoo foo_v1 foo_v2

Use Version Named Definitions

Define The Structure

    /* File 'bar.h' */
    struct bar_VERSION_1
    {
      (elements for version 1)
    }
    struct bar
    {
      (elements for version 2)
    }
    ...

Implement The Wrapper Function

When using version-named structures, the header file "bar.h" introduces new structure names for old versions of struct bar. The __xfoo() wrapper takes the pointer to struct bar as a void *, which is cast to a version-named struct, such as struct bar_VERSION_1.

    /* File 'xfoo.c' */
    #include "bar.h"
    export __xfoo (int n, void *b, ...)
    {
      switch (n)
      {
        case _BAR_VERSION_1:
        {
          struct bar_VERSION_1 *bv1 = (struct bar_VERSION_1 *)b;
          (operate upon "version 1" 'struct bar_VERSION_1')
        }
        case _BAR_VERSION_2:
        {
          struct bar *bv2 = (struct bar *)b;
          (operate upon "version 2" 'struct bar')
        }
      }
    }

Add the new files to the relevant Makefile:

    # Makefile
    routines += xfoo

Define The New Function Stub

We now create a new default implementation of the symbol foo(). This new default is actually provided only by libc_nonshared.a, which is accomplished by Makefile magic:

    static-only-routines += foo

This implementation differs from the others in that it casts the struct bar pointer to a void *:

    /* File 'foo.c' */
    #include "bar.h"
    export foo (struct bar *b, ...)
    {
      return __xfoo (_BAR_VERSION, (void *) b);
    }

Shortcomings

This struct versioning mechanism won't help in those cases where there's an existing static library, namely appv1.a, compiled against structure version 1, and you attempt to statically link against structure version 2 from a newer GLIBC (where the struct versioning mechanism has just been introduced). Statically linking in this scenario will link to the newest instance of foo() provided by libc_nonshared.a. This is incorrect because that instance of foo() invokes __xfoo() with a version 2 indicator, which will cause data corruption.

You can solve this by creating a special stub for foo(), called foo_stub.oS, which is statically linked with appv1.a into a new archive appv1_compat.a.

Note: In the figure, __xfoo() takes a digit rather than a #define _BAR_VERSION_N. This is to indicate that once foo_stub.oS is compiled, _BAR_VERSION_N is expanded into the digit '1' and passed to __xfoo() in perpetuity.

This stub looks like the new default stub for foo(), but it calls __xfoo(_BAR_VERSION_1, ...) rather than the latest _BAR_VERSION_N. Finally, this is linked against the new libc_nonshared.a (where _BAR_VERSION_2 is the default) and the new libc.a.
Explanation

Now, each version of libc_nonshared.a that corresponds to a libc.so version will contain a statically defined foo function, and it will dynamically invoke __xfoo and pass a version number n as an argument. When libc_nonshared.a is statically linked into an application, the application will forever have inlined code for foo that dynamically invokes __xfoo with a statically defined version number n, guaranteeing backward compatibility. When the struct size changes again, the libc_nonshared.a version of foo will call __xfoo passing n+1, and so on. So, for each struct change, you need only increment the version number that the statically defined foo in libc_nonshared.a passes to __xfoo, and then implement the new features under a dynamic check in __xfoo.

Further Implications

It is naive to think that simply extending a structure is safe and doesn't require guards. Consider the following scenario:

    struct biz
    {
      (elements)
      struct bar b;
      (elements)
    }

If you need to version struct bar and you've done so by extending it, you've now endangered struct biz. You will need to provide a similar versioned-structure methodology for struct biz and any functions which operate upon struct biz.

Resources

- Struct_Versioning_Wrapper_Master.svg: vector graphic master image used to generate all article figures.

Credits

This document was originally written by Carlos Eduardo Seo (IBM) and Ryan S. Arnold (IBM). All images were originally created by Ryan S. Arnold (IBM) using Inkscape.
http://sourceware.org/glibc/wiki/Development/Versioning_A_Structure
neu populum antiqua sub relligione tueri

I.) I: Alas, I am the latest Advogato diarist to get sick. I feel pretty unwell (I think I used that phrase to describe the last time I was sick, unless that was "really unwell"), and this messes up a bunch of nice plans (BayFF, CalLUG, and more). Last time, I felt sicker than this, and I got better in two days. Maybe I'll be so lucky again.

I was writing a diary entry about the IBM ads and dmarti's nice work to show IBM some meaningful commitments it can make to support free software. I'm very impressed with how things are going. But I can't reach that diary entry draft because zork is unreachable from here at the moment.

sangr, practically every medical professional or expert I've talked to says that caffeine is bad for repetitive strain injuries and inflammations. If I remember correctly, it reduces your circulation. If you have RSI problems, you might want to ask a doctor whether reducing your coffee intake could help.

In the mountains, there you feel free

At the IBM casting event (where much of interest happened), I ran into Sam Ockman, who took me out to Muir Woods, where we walked through the ancient redwood forest for several hours. That was really nice. I wrote a long letter to my sister. I bought a fire extinguisher. I went to a bookstore.

BBC

Between wrist pain and not understanding IPSec in the kernel, I still haven't gotten 1.5.9 out! I did post an ISO image so that interested people on the mailing list could try the most recent version, but that's very different from publishing it to the whole public. I think I should make some more enhancements and go straight to 1.5.9.2. The next "marketing" release should be 1.6 or maybe 1.6.1.8. (The "golden" BBC.)

DVD

Amusing:

    *** aaronl@pts/57 threatens people over DeCSS GPL violation at [...]
    sneakums@pts/5> God save me from bored teenagers.
Structure and interpretation

(I was actually reading SICP in the morning, but that's not what made me think of this.)

I read some of ESR's sex tips (following a link from a slashdot comment; I do still read slashdot comments sometimes -- I'm trying to stop, honest!) and was kind of bothered. I think I'm upset any time I hear of discussions of techne (technique, if you like) for attracting romantic partners. I'm surely one of those "unclear on the concept". Actually, my issue with these discussions is pretty clear to me now. That's what a year of talking structure and interpretation of romantic relationships has done. I'm deeply troubled by a role for skill or knowledge in romance. Eric is saying, not only that there is such a role, but that he'll teach it!

I.) I didn't beware them last year. Does that admonition apply only to Caesar, or to the general public, too?

Cards

Assuming even shuffling, how many shuffles on average would return a deck to its original order? If by "even shuffling", you mean

jmg, thanks for your suggestions. I think we can assume that runs always contain at least one card, by definition; if a shuffler misses an attempted run, it just makes the older run continue longer (although I guess the idea is that the probabilities should be different: there's a higher probability of some much longer runs in that case).

Your point about the difference between the (remaining?) size of the two halves is something I actually addressed yesterday, before I read your note. I don't have a good solution to it. Do people compensate to try to keep the absolute sizes of the halves equal? Or are they trying to keep the rates at which the halves are exhausted equal? These lead to very different models! I've been thinking that people are trying to keep the rates even, which doesn't require any adjustment when they notice that the halves have different numbers of cards (because they don't care about that).
Your suggestion seems not to be that the average run length should be larger from the side which has more cards, but rather that, when the shuffler notices a significant discrepancy, he will probably try to correct for it by causing a single long run then and there, to even out the two halves. Is that correct?

How much control do shufflers normally exert over the rates at which they let cards fall? Not how much control they are capable of (for some extremely adept shufflers, that's perfect control!), but how much control they actually bring to bear after they've started to riffle the cards. In my case, as a poor shuffler, I find it very hard to slow down at all once the cards have begun to fall.

Applications? (A story about a conversation about a card trick)

So, way back in 1995 I was once talking with this physics student about a card trick in the lounge of a college dorm (which happened to be in Haifa, Israel). When we had talked about it for twenty minutes, a friend wandered by and said "You're still talking about that card trick?" "Yep", we said. They said "Well, we're going to dinner." When they came back from dinner, we were still sitting at that same table. "You're not still talking about that card trick?" But the pad of paper with sketches of hypothetical patterns of shuffles interleaving told the tale. "You're actually still talking about it. OK, we're going off to do some work."

So they went off and studied a bit. Of course, we were unperturbed, and continued our conversation, so that when those people came by again and said "We're on our way out to a movie -- I bet you'll still be talking about that card trick when we get back", we just kind of nodded sagely. And off they went to a movie, and we kept on scribbling and arguing about runs and sequences and ambiguities and averages. When the movie ended, of course, we were discovered not to have moved an inch, although we had consumed several more sheets of paper, which were scattered around the table.
We talked about chains and immunity to cuts and about cyclic order and cyclic chaining and arbitrary cut-points and probably something like "true effective cyclic chain length". The deck of cards we'd had was stacked into eight neat little piles, and we were still trying to make consensus estimates of some probabilities. "You're still talking about that card trick! Look, everybody, Seth and Uri have been here since before dinner talking about a card trick." And it was true. Four hours, all told, if I remember correctly, and four very pleasant hours they were, too.

So there actually is a connection to the program I'm writing. The card trick in question is one I've already mentioned in my Advogato diary (July 17 of last summer); Knuth actually mentions it in an exercise in The Art of Computer Programming. The basic application is that, if I can actually get a realistic model of how people shuffle, I can find out how likely there is to be exploitable non-randomness in a deck after a small number of shuffles. You know, it's kind of like TCP sequence prediction attacks, only more useless. Anyway, I'm working on a simulation that tries to undo small numbers of shuffles.

Happy pi day.

IBM Linux ads

IBM's ad agency seems to really, really want me to be in an ad for Linux. It's tempting. I think they may do it right, in the sense of "honestly and respectfully": the ad agency reps seem to accept that free software is different from proprietary software and to consider it a good thing. I hope they're not harping on the "counterculture" note too hard; in a sense, it's a very difficult problem.
I read Commodify Your Dissent by Thomas Frank and found it very challenging; one might have predicted from the discussion of IBM there that IBM would soon be running ads about subversive, revolutionary programmers (and Frank would probably say that this is shallow or counterproductive, because IBM has zero permanent alliance with the values of free software or even with the development model). There are conflicting tendencies. On the one hand: On the other hand:

Celebrity counterculture (as opposed to general awareness, understanding, and respect) is not the best thing that can happen to free software. And I'm a little concerned that the IBM ads may be heading in that direction. Remember: ?' (Deuteronomy 30:12-3 (RSV))

Haircut

I got one. No beard cutting as yet.

Perl

Jamie's interpretation of a story about fortune, "Johnny Mnemonic", and the police: You will be miserable until you learn the difference between scalar and list context.

I wrote a letter to the Chronicle which they didn't print but which somehow ended up in the hands of Richard Stallman. He liked it; it owes a lot to his essay Re-evaluating Copyright: The Public Must Prevail.

Cards

I didn't make much progress on my Python card-shuffling code, but I looked at some output, and it looked pretty realistic. There are even discernible effects from the fact that people don't cut a deck exactly half and half: this means that there will probably be an extra-long run from one half either at the top or at the bottom. (There's also another model possible in which the average length of the individual runs from one half is longer than the average length from the other. I'm not sure which is more physically plausible. I could be very brave and write to Persi Diaconis and S. Brent Morris and ask: I think one of them might know offhand.)

VoiceXML

Sorry to mention too much stuff from slashdot here in one day. (I had independent sources for the other two, I promise!) Does anyone have experience with VoiceXML?
I looked at Eve Andersson's article about VXML and was pretty impressed; I called up TellMe -- and you can too -- and listened to her program read me 100 digits of pi when I asked it. Come on, try it out, in honor of pi day. ("Call 1-800-555-TELL. At the main menu, speak the word 'Extensions.' Enter extension 58874.")

But anyway, I signed up for a developer account at TellMe Studio, and I now have a little application that will greet me in my own voice and then synthesize the comment that "This is cool". On the phone, over an 800 number. So conceivably I could not only have a web page which can tell people what the weather is like here, or whether my coffee pot is on or off (if I actually had a coffee pot), but I could actually have an 800 number people could call and, in my own voice, it could tell them those things. It seems that you have to pay to make your VoiceXML applications available to the public; I guess that's only fair.

On the Python edu-sig mailing list, there was some nice discussion about a class to represent a deck of cards, with some code which I got to contribute a bit to. I managed to write somewhat simple in_shuffle() and out_shuffle() functions which perform (virtual) perfect faro shuffles on a deck. Using these, it's easy to watch what these shuffles do, and to verify experimentally that "eight out-shuffles or fifty-two in-shuffles will return a deck [of fifty-two cards] to its original order" (quoting Gardner from memory). So that's fun.

What I really want, as I wrote on edu-sig, is a good and realistic model for how most people -- you know, other than accomplished magicians and professional gamblers -- riffle shuffle. This is very tricky. I have written a function called casual_shuffle() which it would probably be easier to quote than to describe. (Source code is great for concise expression.)
But: I assume that in a casual riffle shuffle, a person is equally likely for any n between 0 and 52/X to divide the deck into "halves" of 26-n and 26+n. So if X=10, dividing the deck at 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, and 31 are considered equiprobable. (It should probably be a Gaussian distribution instead.) Now cards are dealt in "runs" from alternate halves, starting with a random half. The length of a run is potentially limited only by the size of the half of the deck from which cards in the run are being dropped, but in my model it is less and less likely that a run will continue as the run gets longer and longer -- the incidence of longer runs should be less than the incidence of shorter runs. That part is correct -- but beyond that I don't know what's realistic, and so I have a decay model, where each successive card in a run diminishes (by a certain ratio) the probability that the run will continue. It is possible to choose the decay ratio.

OK, here's the actual casual_shuffle() function:

    def casual_shuffle(self, split_proportion=10, run_decay=0.8):
        # Simulate the riffle-shuffling pattern of a normal human, who
        # doesn't know how to do a perfect faro shuffle.
        L = len(self.cards)/2
        margin = L/split_proportion
        splitpoint = random.choice(range(L-margin, L+margin+1))
        a = self.cards[:splitpoint]
        b = self.cards[splitpoint:]
        c = []
        source = random.choice([a, b])
        while a or b:
            p = 1.0
            while source:
                if random.random() >= p:
                    print p, " exceeded."
                    break
                c.append(source.pop())
                p = p * run_decay
            if source is b:
                source = a
            else:
                source = b
        c.reverse()
        self.cards = c
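For comparison, here is a sketch of the perfect faro shuffles mentioned above -- my own reconstruction, not the edu-sig code -- together with a check of the quoted fact that eight out-shuffles restore a fifty-two card deck:

```python
def out_shuffle(deck):
    """Perfect out-shuffle: cut exactly in half, interleave; top card stays on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(top, bottom) for card in pair]

def in_shuffle(deck):
    """Perfect in-shuffle: as above, but the top card falls second."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(bottom, top) for card in pair]

deck = list(range(52))
d = deck
for _ in range(8):
    d = out_shuffle(d)
print(d == deck)  # True: eight out-shuffles restore the original order
```

(Fifty-two in-shuffles likewise restore the deck, since an in-shuffle sends position p to 2p+1 mod 53 and 2 has order 52 mod 53.)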
http://www.advogato.org/person/schoen/diary.html?start=231
Biking data from XML to analysis, part 4

One of the main reasons this project turned out to be interesting is that time series data has all kinds of gotchas. I never had to deal with a lot of this before, because the sorts of time series I did in my scientific life didn't care about real-life things like time zones. We mostly just cared about calculating time elapsed.

…tick…tick…tick

Anyway, one thing I wondered about with the bike data was: can we compare average speeds in the morning vs. the afternoon? But to do that, I first had to parse the datetime objects and put them in the right time zone. Since indexes are immutable in pandas, if you want to do any parsing on them, you have to do it with the information in a regular column. So I had to back up a step to before I made the timestamp into the index.

I ended up using dateutil to do the parsing, and pytz to convert the timezone.

    import pandas
    from dateutil.parser import parse
    import pytz

    df = pandas.read_csv("sorted_by_date.csv", index_col=0)
    df['parsed'] = [parse(x) for x in df['StartTime']]
    df['parsed'].head()

    Out:
    1   2013-01-02 15:51:51+00:00
    2   2013-01-03 00:20:26+00:00
    3   2013-01-04 15:46:52+00:00
    4   2013-01-04 23:59:27+00:00

    df['zoned'] = [x.astimezone(pytz.timezone('US/Pacific')) for x in df['parsed']]
    df['zoned'].head()

    Out:
    1   2013-01-02 07:51:51-08:00
    2   2013-01-02 16:20:26-08:00
    3   2013-01-04 07:46:52-08:00
    4   2013-01-04 15:59:27-08:00
    5   2013-01-07 07:56:14-08:00

That weird -08:00 on the end is the time zone adjustment. In San Francisco, we're 8 hours off from Greenwich Time (aka UTC). This map is kind of goofy looking, but it's very clear, and you can zoom in for more information.

Then it occurred to me that I could just plot the hours, before sorting, to have some idea what to expect.
    df['hours'] = [x.hour for x in df['zoned']]

    import matplotlib.pyplot as plt
    import seaborn as sns
    sns.set_palette("deep", desat=0.6)
    sns.set_context(rc={"figure.figsize": (8, 4)})
    df['hours'].hist()
    plt.xlabel('hour of the day (correct timezone)')
    plt.ylabel('frequency')

So that presented a hypothesis: maybe the way to have really high average speeds in the city is to ride really early, when there's no traffic. (I'll admit to having ridden that early myself, and let's just say if you want to go fast, it's either that or come home very late at night.)
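For what it's worth, the parse-and-convert step can also be done with just the standard library. This sketch uses hypothetical start times and a fixed UTC-8 offset as a crude stand-in for pytz's US/Pacific (so it ignores DST), then buckets rides into morning vs. afternoon, which is the comparison I wanted to set up:

```python
from datetime import datetime, timedelta, timezone

# Fixed UTC-8 offset: a DST-ignoring stand-in for pytz's US/Pacific.
PST = timezone(timedelta(hours=-8))

# Hypothetical StartTime values, parsed as UTC.
start_times_utc = [
    datetime(2013, 1, 2, 15, 51, 51, tzinfo=timezone.utc),
    datetime(2013, 1, 3, 0, 20, 26, tzinfo=timezone.utc),
]

local = [t.astimezone(PST) for t in start_times_utc]
buckets = ["morning" if t.hour < 12 else "afternoon" for t in local]
print(buckets)  # ['morning', 'afternoon']
```

Note that the second ride moves back a calendar day when converted, just like in the real output above.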
https://szeitlin.github.io/posts/biking_data/biking-data-from-xml-to-plots-part-4/
OpenSSL is deprecated in Lion

For example, with the following deprecated.c source code:

    #include <openssl/crypto.h>

    int main(void)
    {
        OPENSSL_init();
        return 0;
    }

we get a compilation warning:

    $ gcc deprecated.c -lcrypto
    deprecated.c: In function ‘main’:
    deprecated.c:5: warning: ‘OPENSSL_init’ is deprecated (declared at /usr/include/openssl/crypto.h:600)

Line 600 of /usr/include/openssl/crypto.h is:

    void OPENSSL_init(void) DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER;

and OpenSSL is replaced by Common Crypto.

Common Crypto

Common Crypto is Apple's own implementation of low-level crypto algorithms. See the CC_crypto(3cc) man page. The man page is also available online and says:

    CC_crypto(3cc)                    LOCAL                    CC_crypto(3cc)

    NAME
        Common Crypto -- libSystem digest library

    DESCRIPTION
        The libSystem Common Crypto library implements a wide range of
        cryptographic algorithms used in various Internet standards. The
        services provided by this library are used by the CDSA
        implementations of SSL, TLS and S/MIME.

    OVERVIEW
        libSystem.

    NOTES
        To use the digest functions with existing code which uses the
        corresponding openssl functions, #define the symbol
        COMMON_DIGEST_FOR_OPENSSL in your client code (BEFORE including
        <CommonCrypto/CommonDigest.h>). You can *NOT* mix and match
        functions operating on a given data type from the two
        implementations; i.e., if you do a CC_MD5_Init() on a CC_MD5_CTX
        object, do not assume that you can do an openssl-style
        MD5_Update() on that same context. The interfaces to the
        encryption and HMAC algorithms have a calling interface that is
        different from that provided by OpenSSL.

    SEE ALSO
        CC_MD5(3cc), CC_SHA(3cc), CCHmac(3cc), CCCryptor(3cc)

    BSD                         April 5, 2007                         BSD

Notes

The man page is quite old (April 2007) and references CDSA. CDSA has also been deprecated in Lion, but we will talk about that later. Common Crypto should also be available in Leopard (the man page exists for 10.5).
So you can update your project to use Common Crypto for Lion, and the same source code can be used on Snow Leopard (and maybe even Leopard).

Conclusion

OpenSSL should be removed in a later Mac OS X version. For projects using OpenSSL on Mac OS X you have two options:

- move from OpenSSL to Common Crypto
- provide your own version of OpenSSL in the installer (or use a static link)
https://ludovicrousseau.blogspot.com/2011/08/mac-os-x-lion-and-openssl.html
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

How can I populate a selection-type field when triggered by an on_change method?

How can I display values in my second selection-type field based on the value of the first selection-type field? For example:

- The first selection has values {('1', 'Stock'), ('2', 'Output')}.
- The second selection should display values depending on the key of my first selection.

Code:

    def on_change_location_dest(self, cr, uid, ids, location_dest, context=None):
        x = {}
        x = location_dest
        sql_res = cr.execute("select sm.picking_id, sp.name from stock_move sm inner join stock_picking sp on sm.picking_id = sp.id where sm.

    <newline />
    <field name="reference_code" />
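The usual approach is for the on_change method to return the values (or a domain) for the second field based on the key chosen in the first. Stripped of the Odoo API, the idea is just a key-to-options mapping; everything below (the table and its labels) is made-up example data, not your model:

```python
# Hypothetical mapping from the first selection's key to the
# (key, label) pairs the second selection should offer.
LOCATION_CHILDREN = {
    '1': [('10', 'Stock / Shelf A'), ('11', 'Stock / Shelf B')],
    '2': [('20', 'Output / Dock 1')],
}

def on_change_location_dest(location_dest):
    """Return the selection values for the second field."""
    return LOCATION_CHILDREN.get(location_dest, [])

print(on_change_location_dest('1'))
```

Inside a real on_change you would wrap such a result in the dictionary Odoo expects (e.g. under a 'value' or 'domain' key) instead of printing it.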
https://www.odoo.com/forum/help-1/question/how-can-i-populate-selection-type-field-when-triggered-by-on-change-method-33016
Wrapper package for OpenCV python bindings.

Project description

You should select only one of them. Do not install multiple different packages in the same environment. There is no plugin architecture: all the packages use the same namespace (cv2). If you installed multiple different packages in the same environment, uninstall them all with pip uninstall and reinstall only one package.

a. Packages for standard desktop environments (Windows, macOS, almost any GNU/Linux distribution)

- run pip install opencv-python if you need only main modules
- run pip install opencv-contrib-python if you need both main and contrib modules (check the extra modules listing from OpenCV documentation)

Import the package:

    import cv2

All packages contain haarcascade files. cv2.data.haarcascades can be used as a shortcut to the data folder. For example:

    cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

Read OpenCV documentation.

Before opening a new issue, read the FAQ below and have a look at the other issues which are already open.

Frequently Asked Questions

Q: Do I need to install also OpenCV separately?

A: No, the packages are special wheel binary packages and they already contain statically built OpenCV binaries.

Q: Import fails on Windows: ImportError: DLL load failed: The specified module could not be found.?

A: If the import fails on Windows, make sure you have Visual C++ redistributable 2015 installed. If you are using an older Windows version than Windows 10 and the latest system updates are not installed, the Universal C Runtime might also be required.

Windows N and KN editions do not include the Media Feature Pack, which is required by OpenCV. If you are using a Windows N or KN edition, please install also the Windows Media Feature Pack.

If the above does not help, check if you are using Anaconda. Old Anaconda versions have a bug which causes the error; see this issue for a manual fix.
If you still encounter the error after you have checked all the previous solutions, download Dependencies and open the cv2.pyd file (usually located at C:\Users\username\AppData\Local\Programs\Python\PythonXX\Lib\site-packages\cv2) with it to debug missing DLL issues.

Q: I have some other import errors?
A: Make sure you have removed old manual installations of the OpenCV Python bindings (cv2.so or cv2.pyd in site-packages).

Q: Why don't the packages include non-free algorithms?
A: Non-free algorithms such as SIFT and SURF are not included in these packages because they are patented and therefore cannot be distributed as built binaries. See this issue for more info.

Q: Why are the package and import names different (opencv-python vs. cv2)?
A: It's easier for users to understand opencv-python than cv2, and it makes it easier to find the package with search engines. cv2 (the old interface in old OpenCV versions was named cv) is the name that the OpenCV developers chose when they created the binding generators. This is kept as the import name to be consistent with the different kinds of tutorials around the internet. Changing the import name or behaviour would also be confusing to experienced users who are accustomed to import cv2.

Documentation for opencv-python

The aim of this repository is to provide the means to package each new OpenCV release for the most used Python versions and platforms.

Build process

The project is structured like a normal Python package with a standard setup.py file. The build process for a single entry in the build matrices is as follows (see for example the appveyor.yml file).

Releases

A release is made and uploaded to PyPI when a new tag is pushed to the master branch. These tags differentiate packages (this repo might have modifications while the OpenCV version stays the same) and should be incremented sequentially. In practice, release version numbers look like this: cv_major.cv_minor.cv_revision.package_revision, e.g.
3.1.0.0

Development builds

Every commit to the master branch of this repo will be built. Possible build artifacts use local version identifiers: cv_major.cv_minor.cv_revision+git_hash_of_this_repo, e.g. 3.1.0+14a8d39. These artifacts can't be and will not be uploaded to PyPI.

Manylinux wheels

Linux wheels are built using manylinux.
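The "install only one package" rule above can be checked programmatically. Below is a small sketch using the standard library's importlib.metadata; opencv-python and opencv-contrib-python are named in the text above, while the two headless package names are assumptions:

```python
from importlib.metadata import distributions

# Mutually exclusive OpenCV wheel names; opencv-python and
# opencv-contrib-python appear in the README above, the headless
# variants are assumed for completeness.
OPENCV_PACKAGES = {
    "opencv-python",
    "opencv-contrib-python",
    "opencv-python-headless",
    "opencv-contrib-python-headless",
}

def installed_opencv_wheels():
    """Return the OpenCV wheel packages installed in this environment."""
    names = set()
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:
            names.add(name)
    return sorted(OPENCV_PACKAGES & names)

conflicts = installed_opencv_wheels()
if len(conflicts) > 1:
    print("Conflict: pip uninstall all of", conflicts, "then reinstall one")
```

If more than one name is reported, the environment is in the conflicting state the README warns about.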
https://pypi.org/project/opencv-openvino-contrib-python/
CC-MAIN-2020-40
refinedweb
693
55.54
Hello and thank you for using Just Answer. The Kentucky carryback of net operating losses was stopped in 2005; losses were only allowed to be carried forward after 2005. The Kentucky assessment was based on the Federal change. So, unfortunately, even though you amended your 2009 federal return and were able to use the NOL to your federal advantage, the Kentucky return does not allow for a carryback, and you would need to carry forward any loss for Kentucky purposes.

So my question would remain: "In what year can I start the carry forward?" Oops — as the additional assessment was due to an amended 2008 return and my amended 2009 return resulted in the loss, can I claim the loss on my state return for 2009 and carry it forward to 2010 and 2011?

If you filed the (amended) return for 2009, then the carry forward for Kentucky would be to 2010 and forward.

Thank you in advance for a positive rating so Just Answer will know you were assisted.
http://www.justanswer.com/tax/7ai13-recently-audited-2008-federal-tax-resulted.html
CC-MAIN-2014-41
refinedweb
173
66.67
[ ] ASF GitHub Bot commented on JUNEAU-49:
--------------------------------------
Github user clr-apache commented on a diff in the pull request:
--- Diff: juneau-core/src/main/java/org/apache/juneau/internal/StringUtils.java ---
@@ -618,7 +618,9 @@ public static boolean isEmpty(String s) {
 * @return <jk>true</jk> if specified string is <jk>null</jk> or its {@link #toString()} method returns an empty string.
 */
 public static boolean isEmpty(Object s) {
-    return s == null || s.toString().isEmpty();
+    if( s == null ) return true;
+    if( s instanceof List ) return ((List) s).size() == 0;
--- End diff --
There is a big discussion on Stack Overflow on this topic. I like the simple "".equals(s), which handles the null case. If you include s being an instance of a collection, it seems that you could use if (s instanceof Iterable) return ((Iterable) s).isEmpty(); Then even user-defined types that implement Iterable can be used.

> Support beans as arguments for GET calls using @RemoteableMethod
> ----------------------------------------------------------------
> Key: JUNEAU-49
> URL:
> Project: Juneau
> Issue Type: Improvement
> Reporter: Steve Blackmon
>
> The standard use case for wrapping a GET call with query parameters in a rest proxy is to enumerate each of the possible get params in order and attach @Query to each.
> However, when the number of possible parameters becomes large and many are optional, the ability for the proxy to extract multiple params from a bean would make implementations much more readable.
> Enable pojo arguments annotated @Query("*") or @QueryIfNE("*") in a method annotated @RemoteableMethod(httpMethod="GET") to serve as a source for multiple parameters (all query parameters for now)

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
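For illustration, the null-safe emptiness check being discussed could look like the standalone sketch below. This is not the actual Juneau code: EmptyCheck and isEmptyValue are made-up names, and Collection is used rather than Iterable, since Iterable itself has no isEmpty() method.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class EmptyCheck {

    // Null-safe emptiness check: null, an empty string, or an empty
    // collection all count as "empty". For plain strings, "".equals(s)
    // would also cover the null case, as noted in the review comment.
    public static boolean isEmptyValue(Object s) {
        if (s == null) return true;
        if (s instanceof Collection) return ((Collection<?>) s).isEmpty();
        return s.toString().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isEmptyValue(null));                    // true
        System.out.println(isEmptyValue(""));                      // true
        System.out.println(isEmptyValue(Collections.emptyList())); // true
        System.out.println(isEmptyValue("x"));                     // false
        System.out.println(isEmptyValue(List.of(1)));              // false
    }
}
```

Checking against Collection keeps the behavior well-defined; widening it to Iterable would require iterating to decide emptiness.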
https://mail-archives.eu.apache.org/mod_mbox/juneau-dev/201705.mbox/%3CJIRA.13073593.1495235608000.317071.1496107864153@Atlassian.JIRA%3E
CC-MAIN-2021-39
refinedweb
267
51.68
Fade

Author: Eric Haines (Eric5h5)

Description

Fade GUITextures or anything that has a material (including GUIText objects) at any time with these simple coroutines that can be called from anywhere. Fade alpha values in and out, or fade from one color to another, with optional ease in/out. You can also use an array of colors.

Usage

These functions are coroutines, so you normally don't use them in Update (which always runs once every frame and is therefore not suitable for scheduling events). The exception is if you're just launching a Fade routine once at a specific time; see below for an example of making a GUIText fade when hitting a key. Normally you'd use Fade functions from a coroutine.

Put this script in your Standard Assets/Scripts folder or Plugins folder; this way it can be easily used from C# or Boo. It should be named "Fade". The script must be attached to some object in the scene, such as an empty object used for game manager scripts. If you don't feel like reading a bunch of text, see below for some examples. Otherwise keep reading for all the technical details....

To do alpha fading, call Fade.use.Alpha(), which has these parameters:

function Alpha(object : Object, start : float, end : float, timer : float, easeType : EaseType = EaseType.None) : IEnumerator

The object should be either a GUITexture or a Material. Any other type will print a warning message at run-time, and the function won't do anything. This works fine on the default material for GUIText objects, but other objects should have a material which uses a Transparent shader of some kind in order for it to have any visible effect. The alpha value will fade from start to end, over timer seconds. Therefore, fading in would go from 0.0 through 1.0, and fading out would go from 1.0 through 0.0. You don't have to use 0 or 1; you could fade in part way, for example, by going from 0.0 through 0.5. The optional easeType is None, In, Out, or InOut.
EaseType.None is the default and is a straight linear fade. EaseType.In will more gradually start the fade. The effect is slightly subtle, but noticeable, since fading in alpha with a linear fade tends to look a little abrupt. EaseType.Out will more gradually end the fade, and would obviously be best used at the end of a fadeout for the same reason. As you would expect, EaseType.InOut eases at both ends.

To do color fading, call Fade.use.Colors():

function Colors(object : Object, start : Color, end : Color, timer : float, easeType : EaseType = EaseType.None) : IEnumerator

This works the same as Alpha, except that start and end are colors instead of alpha values (although the colors themselves can contain transparency if desired). To use a range of colors:

function Colors(object : Object, colorRange : Color[], timer : float, repeat : boolean) : IEnumerator

In this case, colorRange is an array of colors (Color[]), timer is how many seconds it takes to cycle through back to the first color, and repeat is whether the colors should be cycled through once (if false), or infinitely (if true). To stop an infinitely cycling Colors function, call Fade.use.StopAllCoroutines(). If colorRange has fewer than 2 entries, a warning message will be printed at run-time and the function won't do anything.

Note on GUITextures: these are capable of overbrightening colors, where a value of .5 means 100%, and 1.0 means 200%. It's likely that this feature isn't actually used much, and it's not always easy to remember that an alpha of, say, .25 is actually 50% transparent when using a GUITexture. So all the Fade functions compensate for this, which allows you to use 1.0 to mean 100% as usual. You can still overbrighten if desired by using values from 1.0 - 2.0, though colors like this can't be specified in the Inspector and must be created in code.
Examples

Fade out a GUITexture over 3 seconds:

Fade.use.Alpha(guiTexture, 1.0, 0.0, 3.0);

Fade in a GUITexture over 5 seconds, easing in at the beginning:

Fade.use.Alpha(guiTexture, 0.0, 1.0, 5.0, EaseType.In);

Fade in another object which has a GUITexture component over 2 seconds, wait 1.5 seconds, then fade out again and deactivate the object:

var title : GameObject;

function Start () {
	yield Fade.use.Alpha(title.guiTexture, 0.0, 1.0, 2.0, EaseType.In);
	yield WaitForSeconds(1.5);
	yield Fade.use.Alpha(title.guiTexture, 1.0, 0.0, 2.0, EaseType.Out);
	title.SetActive (false);
}

Fade a GUIText from green to blue over 1 second:

Fade.use.Colors(guiText.material, Color.green, Color.blue, 1.0);

Make a GUIText pulse from red to yellow several times, where each pulse takes 1.5 seconds:

function Start () {
	guiText.text = "Warning!";
	var colors = [Color.red, Color.yellow];
	for (i = 0; i < 3; i++) {
		yield Fade.use.Colors(guiText.material, colors, 1.5, false);
	}
}

Make a GameObject continuously cycle through an array of colors defined in the Inspector, but stop after 30 seconds:

var colors : Color[];

function Start () {
	Fade.use.Colors(renderer.material, colors, 5.0, true);
	yield WaitForSeconds(30.0);
	Fade.use.StopAllCoroutines();
}

Make a GUIText object fade out over one second after pressing the "A" key:

private var faded = false;

function Start () {
	guiText.text = "Press 'A'";
}

function Update () {
	if (!faded && Input.GetKeyDown(KeyCode.A)) {
		faded = true;
		Fade.use.Alpha(guiText.material, 1.0, 0.0, 1.0);
	}
}

Make a GameObject fade from black to white with easing, in C#:

using UnityEngine;

public class Test : MonoBehaviour {
	void Start () {
		StartCoroutine(Fade.use.Colors(renderer.material, Color.black, Color.white, 2.0f, EaseType.InOut));
	}
}

Note: To start a coroutine you probably want to do it in the Fade.use instance (so instead of StartCoroutine you just call Fade.use.StartCoroutine), and then call Fade.use.StopAllCoroutines() when you want to stop them again.
This prevents you from stopping other coroutines in your script, while still being able to stop the fading.

JavaScript - Fade.js

enum EaseType {None, In, Out, InOut}

static var use : Fade;

function Awake () {
	if (use) {
		Debug.LogWarning("Only one instance of the Fade script in a scene is allowed");
		return;
	}
	use = this;
}

function Alpha (object : Object, start : float, end : float, timer : float) {
	yield Alpha(object, start, end, timer, EaseType.None);
}

function Alpha (object : Object, start : float, end : float, timer : float, easeType : EaseType) {
	if (!CheckType(object)) return;
	var t = 0.0;
	while (t < 1.0) {
		t += Time.deltaTime * (1.0/timer);
		if (typeof(object) == GUITexture)
			(object as GUITexture).color.a = Mathf.Lerp(start, end, Ease(t, easeType)) * .5;
		else
			(object as Material).color.a = Mathf.Lerp(start, end, Ease(t, easeType));
		yield;
	}
}

function Colors (object : Object, start : Color, end : Color, timer : float) {
	yield Colors(object, start, end, timer, EaseType.None);
}

function Colors (object : Object, start : Color, end : Color, timer : float, easeType : EaseType) {
	if (!CheckType(object)) return;
	var t = 0.0;
	while (t < 1.0) {
		t += Time.deltaTime * (1.0/timer);
		if (typeof(object) == GUITexture)
			(object as GUITexture).color = Color.Lerp(start, end, Ease(t, easeType)) * .5;
		else
			(object as Material).color = Color.Lerp(start, end, Ease(t, easeType));
		yield;
	}
}

function Colors (object : Object, colorRange : Color[], timer : float, repeat : boolean) {
	if (!CheckType(object)) return;
	if (colorRange.Length < 2) {
		Debug.LogError("Error: color array must have at least 2 entries");
		return;
	}
	timer /= colorRange.Length;
	var i = 0;
	var objectType = typeof(object);
	while (true) {
		var t = 0.0;
		while (t < 1.0) {
			t += Time.deltaTime * (1.0/timer);
			if (objectType == GUITexture)
				(object as GUITexture).color = Color.Lerp(colorRange[i], colorRange[(i+1) % colorRange.Length], t) * .5;
			else
				(object as Material).color = Color.Lerp(colorRange[i], colorRange[(i+1) % colorRange.Length], t);
			yield;
		}
		i = ++i % colorRange.Length;
		if (!repeat && i == 0) break;
	}
}

private function Ease (t : float, easeType : EaseType) : float {
	if (easeType == EaseType.None)
		return t;
	else if (easeType == EaseType.In)
		return Mathf.Lerp(0.0, 1.0, 1.0 - Mathf.Cos(t * Mathf.PI * .5));
	else if (easeType == EaseType.Out)
		return Mathf.Lerp(0.0, 1.0, Mathf.Sin(t * Mathf.PI * .5));
	else
		return Mathf.SmoothStep(0.0, 1.0, t);
}

private function CheckType (object : Object) : boolean {
	if (typeof(object) != GUITexture && typeof(object) != Material) {
		Debug.LogError("Error: object is a " + typeof(object) + ". It must be a GUITexture or a Material");
		return false;
	}
	return true;
}
https://wiki.unity3d.com/index.php/Fade
CC-MAIN-2019-39
refinedweb
1,343
59.7
Custom auto unicodes? - RafaŁ Buchner last edited by gferreira

I'm wondering if it's possible to set my own custom name-to-unicode converter that will work with RF's default auto naming? (And if yes, how to do it? ;) ) I hope this question is clear.

hi. not sure what you mean… can you provide some context or give an example of where and how you would use this? thanks.

- RafaŁ Buchner last edited by

Example: when I'm using "add Glyphs" from the "Font" top menu and checking the "Add Unicode" checkbox, is it possible to provide my own name-to-unicode scheme or dictionary to do this? So for example, by default, to have a glyph with unicode 00DF I need to write down germandbls in "add Glyphs". But I'm looking for the possibility of making my own naming standard, where, for example, if I write down germandoubles in "add Glyphs", it will automatically assign unicode 00DF.

Advantage: If I could define the path (in preferences) to a file that contains name-to-unicode pairs, I could easily manage my naming system for non-latin scripts. (I hope it is clear; somehow it is hard for me to explain it in an easy way.) Of course, I could write my own extension for it, but before I do that, I would like to know if it is possible to set it in RF.

ok, thanks for explaining. the Add Glyphs sheet allows you to set custom unicodes when adding a new glyph. for example: germandoubles|00DF

I find it more practical to set unicodes using a script, so each project can have its own custom mappings. something like this:

def autoUnicodes(glyph, customUnicodes={}):
    if glyph.name in customUnicodes:
        glyph.unicodes = [customUnicodes[glyph.name]]
    else:
        glyph.autoUnicodes()

customUni = {'germandoubles' : 0x00DF}

g = CurrentGlyph()
autoUnicodes(g, customUni)

there are probably other ways to do it… hope this helps!
here's another option, which I think is the one you're looking for: you can create your own custom GNFUL list…

# GNFUL
germandoubles 00DF

…and then add it in Preferences > Character Set, so it will be used for all fonts. these custom unicodes are also picked up by glyph.autoUnicodes(), which is quite nice. (thanks @frederik for clarifying it via chat :)

A full GNFUL list is preferable... as it will have no fallback for other entries. Generate your GNFUL list with a glyphNameFormatter generator: see (embedded in RoboFont)

good luck!

- RafaŁ Buchner last edited by

Thanks! That is exactly what I was looking for
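For readers curious what consuming such a list involves, here is a minimal sketch that parses a GNFUL-style two-column list (glyph name plus hex code, as in the sample above) into a name-to-unicode mapping. The exact GNFUL format may have more fields; this only assumes the layout shown:

```python
# Minimal sketch: parse a GNFUL-style list ("glyph name" + hex code per
# line) into a name -> unicode mapping. Header/comment lines like
# "# GNFUL" and blank lines are skipped.

def parse_gnful(text):
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and the "# GNFUL" header
        name, code = line.split()
        mapping[name] = int(code, 16)  # hex code point -> int
    return mapping

sample = """\
# GNFUL
germandoubles 00DF
"""

print(parse_gnful(sample))  # {'germandoubles': 223}
```

A mapping like this could then feed a script such as the autoUnicodes helper in the thread above.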
https://forum.robofont.com/topic/595/custom-auto-unicodes/7
CC-MAIN-2019-22
refinedweb
419
63.19
Yes, it's true!! Object detection models can now be built with just 5 lines of code using 'Detecto'.

Introduction:

Computer vision (CV) is the field of study in Artificial Intelligence where we try to solve problems using images or video. CV helps to increase the speed and the range of problem solving with ease. Object detection is one of the most popular streams under computer vision. It has many applications across different industries such as Manufacturing, Pharmaceutical, Aviation, Retail, E-Commerce, etc.

In a real-world scenario, we have to train an object detection model on custom datasets. Building a custom trained object detection model is not very straightforward, irrespective of the framework, i.e. TensorFlow or PyTorch. In this article, we are going to discuss developing a custom trained object detection model using 'Detecto', a Python package that allows you to build fully functioning computer vision and object detection models with just 5 lines of code. It is built upon PyTorch, which makes the whole process very easy. Let's do an object detection project together, end to end. We will try to develop a model which will detect the wheels and headlights of a car from car images. Fun begins!!

Table of Contents:
- Step 1: Image collection and labelling
- Step 2: Installation of the required packages
- Step 3: Custom image augmentation
- Step 4: Model training
- Step 5: Model saving, loading, and predicting

Step 1: Image collection and labeling:

The first step of any object detection model is collecting images and performing annotation. For this project, I have downloaded 50 'Maruti Car' images from Google Images. There is a package called simple_image_download which is used for automatic image download.
Feel free to use the following code:

from simple_image_download import simple_image_download as simp

response = simp.simple_image_download
lst = ['Maruti car']
for rep in lst:
    response().download(rep, 50)

With this code, we will get 50 downloaded images in the 'Maruti car' folder of our working directory. Feel free to change the number of images to as many as you want. After that, we will randomly split the images into two parts, i.e. Train (35 images) and Test (15 images).

The next job is labeling the images. There are various image annotation tools available. For this project, I have used MAKESENSE.AI. It's a free online tool for labeling; no installation is required and we can open it in the browser. Using the link, I dropped my car images and did the annotation for the Train and Validation datasets separately. Now, we can export the annotations in XML format, as 'Detecto' supports it. Then we place the XML files of the train and validation images in the Train and Validation folders respectively. So the folder tree looks like this:

Pic2: Folder Tree

Step 2: Installation of the required packages:

As already mentioned, 'Detecto' is built on top of PyTorch, so we need to install PyTorch first. I have used Google Colab for this project. Then we need to check whether we have GPU support or not using the following code:

import torch
print(torch.cuda.is_available())

If it prints 'True', it means you can use the GPU. If it is 'False', please change the 'Hardware Accelerator' in the Notebook settings to 'GPU'. Now, your system satisfies the prerequisites to install 'Detecto'. Use the following magic code to install it.
!pip install detecto

Once it's done, let's import the libraries using the following code:

from detecto import core, utils, visualize
from detecto.visualize import show_labeled_image, plot_prediction_grid
from torchvision import transforms
import matplotlib.pyplot as plt
import numpy as np

Step 3: Custom image augmentation:

Image augmentation is the process of artificially expanding data by creating modified versions of images. Detecto has an inbuilt function to do custom transforms by applying resize, flip, and saturation augmentation. Please use the following code for augmenting the image dataset:

custom_transforms = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(900),
    transforms.RandomHorizontalFlip(0.5),
    transforms.ColorJitter(saturation=0.2),
    transforms.ToTensor(),
    utils.normalize_transform(),
])

Step 4: Model training:

Now we have come to that most awaited step, i.e. model training. Here, the magic happens in five lines of code:

Train_dataset = core.Dataset('Train/', transform=custom_transforms)  # L1
Test_dataset = core.Dataset('Test/')  # L2
loader = core.DataLoader(Train_dataset, batch_size=2, shuffle=True)  # L3
model = core.Model(['Wheel', 'Head Light'])  # L4
losses = model.fit(loader, Test_dataset, epochs=25, lr_step_size=5, learning_rate=0.001, verbose=True)  # L5

In the first two lines of code (L1 & L2), we have assigned the Train and Test datasets. In L3, we have created a DataLoader over our dataset. It defines how we batch and feed our images into the model for training. Feel free to experiment by changing 'batch_size'. Now it's time to mention the 'labels' or 'classes', which is done in L4. Finally, model training is started via 'model.fit' in L5. Here, we can play with different options such as epochs, lr_step_size, and learning_rate. The default model is Faster R-CNN ResNet-50 FPN. We have fine-tuned this model for our custom dataset.
Now, we can look at the loss function using the following code:

plt.plot(losses)
plt.show()

Pic3: Loss Function Plot

Step 5: Model saving, loading, and predicting:

Once we are satisfied with the model loss, we need to save the model for future reference, so that we can load it as and when required. Use the following code for saving and loading:

model.save('model_weights.pth')
model = core.Model.load('model_weights.pth', ['Wheel', 'Head Light'])

After loading the model, we want to use it for prediction. Let's use it for one observation from the Test folder and plot the image with bounding boxes. Here, the prediction format is labels, boxes, and scores.

image = utils.read_image('Test/Maruti car_27.jpeg')
predictions = model.predict(image)
labels, boxes, scores = predictions
show_labeled_image(image, boxes, labels)

Pic4: Plotting Bounding box without Threshold

There are many unwanted bounding boxes in the above picture, so we have to remove them. The simplest way to solve the issue is by providing a threshold on the score. For this project, I have put the threshold at 0.6 for both classes; I came to this value through trial and error. Use the following code to set up the threshold for bounding boxes and plot them:

thresh = 0.6
filtered_indices = np.where(scores > thresh)
filtered_scores = scores[filtered_indices]
filtered_boxes = boxes[filtered_indices]
num_list = filtered_indices[0].tolist()
filtered_labels = [labels[i] for i in num_list]
show_labeled_image(image, filtered_boxes, filtered_labels)

Now we can see the final output. And yes, it is quite impressive. So, this is the end of the project. Let us know your opinion after using this one on your custom dataset. Happy learnings!!!!

Alan Bi is the creator of this package. Kudos to him for making it open for all.

Reference:
- Blog from Alan Bi:

Your suggestions and doubts are welcome in the comments section. Thank you for reading my article.
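The score-threshold filtering step is plain NumPy and can be tried without a trained model. Here is a self-contained sketch with dummy predictions standing in for model.predict() output (the labels, scores, and boxes below are made up):

```python
import numpy as np

# Dummy predictions standing in for the (labels, boxes, scores) tuple
# returned by model.predict().
labels = ["Wheel", "Head Light", "Wheel"]
scores = np.array([0.9, 0.4, 0.7])
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [8, 8, 30, 30]])

thresh = 0.6
keep = np.where(scores > thresh)          # indices of confident detections
filtered_scores = scores[keep]
filtered_boxes = boxes[keep]
filtered_labels = [labels[i] for i in keep[0].tolist()]

print(filtered_labels)   # ['Wheel', 'Wheel']
print(filtered_scores)   # [0.9 0.7]
```

Only the two detections above the 0.6 threshold survive, which is exactly what the article's snippet does before plotting.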
About the author:

Tirthankar Das: Data Science professional with 7+ years of experience in different domains such as Banking & Finance, Aviation, Manufacturing, and Pharmaceuticals. Happy to connect over LinkedIn.
https://www.analyticsvidhya.com/blog/2021/06/simplest-way-to-do-object-detection-on-custom-datasets/
CC-MAIN-2021-25
refinedweb
1,232
58.08
A year ago, a colleague of mine showed me a very interesting framework named Krank (later renamed to Crank because the previous name means "sick" in German, which does not bode well for any framework). Crank's goal was to ease development on top of Java Persistence API 1.0. Two interesting features caught my attention at the time:

- a generic DAO which implements CRUD operations out of the box. This is a Grail of sorts: just Google for "Generic DAO" and watch the results, everyone seems to provide such a class. Whether each one is a success, I leave to the reader.
- a binding mechanism between this generic DAO and named queries, releasing you from the burden of having to create the query object yourself

Unfortunately, there has been no activity on Crank since 2008 and I think it can be categorized as definitely dead. However, and I don't know if there's a link, a new project has emerged which not only implements the same features but also adds even more innovative ones. This project, which I only recently discovered, is project Hades, whose goal is to improve productivity on the persistence layer in general and for JPA v2 in particular. It now definitely stands on top of my "hot topics" list. In order to evaluate Hades, I've implemented some very simple unit tests: it just works, as is! Let's have an overview of Hades' features.

Configuring Hades

Hades configuration is based on Spring, whether you like it or not. Personally, I do, since it makes configuring Hades a breeze. Hades uses a little-known feature of Spring, namely authoring, in order to do that (for more info on the subject, see my previous Spring authoring article). Consider that we already have a Spring beans configuration file and that the entity manager factory is already defined. Just add the Hades namespace to the header and reference the base package of your DAO classes:

<beans...
xmlns:hades=""
xsi:
<hades:dao-config
</beans>

Since Hades uses convention over configuration, it will:

- transparently create a Spring bean for each of your DAO interfaces (which must inherit from GenericDao) in your configured package
- reference it under the unqualified class name with the first letter set to lower case
- inject it with the default entity manager factory and transaction manager, provided they are respectively declared as "entityManagerFactory" and "transactionManager"

Generic DAO

Hades' generic DAO has support for standard CRUD operations on single entities and whole tables, plus COUNT. Under the cover, it will use the injected entity manager factory to get a reference to an entity manager and use the latter for these operations.

Named query

Using a JPA named query needs minimal boilerplate code (but still) and does not provide a generics signature (so results will need to be cast afterwards):

Query query = em.createNamedQuery("findUsersByName").setParameter(name);
List result = query.getResultList();

Hades, on the other hand, provides a binding mechanism between a named query and an interface's method name:

public interface UserDao extends GenericDao {
    List<User> findUsersByName(String name);
}

Now, you just have to inject the DAO into your service and use it as is.

Simple criteria query from method name

Most of the named queries you write manually are of the form SELECT * FROM MYTABLE WHERE A = 'a' AND B = 'b', or use other simple criteria. Hades can automatically generate and execute queries that are relevant to your method name. For example, if your DAO's method signature is List findByLastNameOrAgeLessThan(String name, Date age), Hades will create the associated query SELECT * FROM User u WHERE u.lastName = ?1 OR u.age < ?2 and bind the passed parameters. Although I was a bit wary of such a feature based on a method's name, I came to realize it ensures the semantics of the method are aligned with what it does. Moreover, there's no code to write!
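To make the naming convention concrete, here is a toy sketch of how a query string could be derived from such a method name. This is purely illustrative and not Hades' actual parser; QueryFromName and derive are made-up names, and only the two operators from the example above are handled:

```java
public class QueryFromName {

    // Toy derivation: "findByLastNameOrAgeLessThan" ->
    // "SELECT u FROM User u WHERE u.lastName = ?1 OR u.age < ?2"
    public static String derive(String method, String entity) {
        String criteria = method.replaceFirst("^findBy", "");
        String[] parts = criteria.split("Or");
        StringBuilder sb = new StringBuilder("SELECT u FROM " + entity + " u WHERE ");
        for (int i = 0; i < parts.length; i++) {
            String p = parts[i];
            if (i > 0) sb.append(" OR ");
            if (p.endsWith("LessThan")) {
                // "AgeLessThan" -> "u.age < ?n"
                sb.append("u.").append(lower(p.replace("LessThan", ""))).append(" < ?").append(i + 1);
            } else {
                // "LastName" -> "u.lastName = ?n"
                sb.append("u.").append(lower(p)).append(" = ?").append(i + 1);
            }
        }
        return sb.toString();
    }

    private static String lower(String s) {
        return Character.toLowerCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(derive("findByLastNameOrAgeLessThan", "User"));
    }
}
```

The real implementation obviously handles many more keywords and edge cases, but the split-on-keywords idea is the same.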
Truly, I could easily fall in love with this feature…

Enhancing your DAOs

Crank's generic DAO had a major drawback. If you wanted to add methods, you had to create a concrete DAO class, compose the DAO with the generic one, then delegate all the standard CRUD operations to the generic. You could then code the extra methods on your concrete DAO. At least, that is the design I came up with when I had to do it. This was not very complex, since delegation could be coded with your favorite IDE, but it was a bore and you ended up with a very, very long class full of delegating calls. Not what I call simple code.

Hades designed this behaviour in from the start. When you need to add methods to a specific DAO, all you have to do is:

- create an interface with the extra methods
- create a concrete class that implements this interface and code these methods. Since you use Spring, just reference it as a Spring bean to inject the entity manager factory
- reuse the simple DAO interface and make it extend your specific interface as well as GenericDao (like before)

Done: easy as pie and beautifully designed. What more could you ask for?

Conclusion

Hades seems relatively new (the first ticket dating back to April 2009), yet it looks very promising. Until now, the only "flaw" I have seen is transaction management: CRUD operations are transactional by default although, IMHO, transactionality should be handled in the service layer. However, this is relatively minor in regard to all the benefits it brings. Moreover, reluctance to use such a new project can be alleviated since it will join Spring Data in the near future, which I take as a very good sign of Hades' simplicity and capabilities.

As for me, I have done little more than take a casual glance at Hades. Has anyone used it in "real" projects yet? In which context? With which results? I would really be interested in your feedback if you have any. As usual, you can find the sources for this article here.
To go further:
- Hades quickstart
- Hades documentation
- Hades Javadocs
- Hades moving to Spring Data announcement
- Spring Data website
https://dzone.com/articles/hades-your-next-persistence
CC-MAIN-2016-07
refinedweb
1,035
59.33
When an application reaches the end of a stream while it is trying to read some data, then this exception will be thrown. This is a subclass of IOException.

EOFException Example

This exception is thrown when you reach the end of a stream (end of file, or the peer closes the connection):

- the read() method returns -1
- the readLine() method returns null
- readXXX() for any other X throws EOFException. Here X means the method name would be readInt, readFloat, etc.

When you call any of these readXXX methods, they expect a specific number of bytes in the underlying FileInputStream; if the stream is exhausted, an EOFException is thrown. The same happens when, for example, you call readInt but only a float's worth of data is left in the file.

Here is an example of an application which throws EOFException:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ExceptionExample {

    public void testMethod1() {
        File file = new File("test.txt");
        DataInputStream dataInputStream = null;
        try {
            dataInputStream = new DataInputStream(new FileInputStream(file));
            while (true) {
                dataInputStream.readInt();
            }
        } catch (EOFException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (dataInputStream != null) {
                    dataInputStream.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        ExceptionExample instance1 = new ExceptionExample();
        instance1.testMethod1();
    }
}

When you run the above program, you will get the following exception:

java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at logging.simple.ExceptionExample.testMethod1(ExceptionExample.java:16)
    at logging.simple.ExceptionExample.main(ExceptionExample.java:36)

I have forced the above program to throw the EOFException by making the while loop condition true. This keeps the program reading even after all the data has been consumed, so the exception is eventually thrown.
The application programmer has to ensure that the data read from the stream is valid, and that the stream is closed when there is no more data to read. I hope this tutorial gave you a good idea of the scenarios in which EOFException is thrown from a Java application. If you have any questions, please write them in the comments section.
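The pattern above forces the exception just to demonstrate it. In practice, EOFException can serve as the loop terminator, and try-with-resources guarantees the stream is closed. The following is a small sketch of that approach; the class and method names are illustrative, and an in-memory stream stands in for the file so the example is self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class EofSafeRead {

    // Reads ints until the stream is exhausted, using EOFException as the
    // loop terminator; try-with-resources closes the stream automatically.
    public static int sumInts(byte[] data) throws IOException {
        int sum = 0;
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            while (true) {
                try {
                    sum += in.readInt();
                } catch (EOFException eof) {
                    break; // end of stream reached: stop reading
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeInt(1);
            out.writeInt(2);
            out.writeInt(3);
        }
        System.out.println(sumInts(buf.toByteArray())); // prints 6
    }
}
```

The same loop works unchanged over a FileInputStream; only the stream construction differs.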
RFC 3588 Diameter Based Protocol September 2003

7.  Error Handling
    7.1.  Result-Code AVP
          7.1.1.  Informational
          7.1.2.  Success
          7.1.3.  Protocol Errors
          7.1.4.  Transient Failures
          7.1.5.  Permanent Failures
    7.2.  Error Bit
    7.3.  Error-Message AVP
    7.4.  Error-Reporting-Host AVP
    7.5.  Failed-AVP AVP
    7.6.  Experimental-Result AVP
    7.7.  Experimental-Result-Code AVP
8.  Diameter User Sessions
    8.1.  Authorization Session State Machine
    8.2.  Accounting Session State Machine
    8.3.  Server-Initiated Re-Auth
          8.3.1.  Re-Auth-Request
          8.3.2.  Re-Auth-Answer
    8.4.  Session Termination
          8.4.1.  Session-Termination-Request
          8.4.2.  Session-Termination-Answer
    8.5.  Aborting a Session
          8.5.1.  Abort-Session-Request
          8.5.2.  Abort-Session-Answer
    8.6.  Inferring Session Termination from Origin-State-Id
    8.7.  Auth-Request-Type AVP
    8.8.  Session-Id AVP
    8.9.  Authorization-Lifetime AVP
    8.10. Auth-Grace-Period AVP
    8.11. Auth-Session-State AVP
    8.12. Re-Auth-Request-Type AVP
    8.13. Session-Timeout AVP
    8.14. User-Name AVP
    8.15. Termination-Cause AVP
    8.16. Origin-State-Id AVP
    8.17. Session-Binding AVP
    8.18. Session-Server-Failover AVP
    8.19. Multi-Round-Time-Out AVP
    8.20. Class AVP
    8.21. Event-Timestamp AVP
9.  Accounting
    9.1.  Server Directed Model
    9.2.  Protocol Messages
    9.3.  Application Document Requirements
    9.4.  Fault Resilience
    9.5.  Accounting Records
    9.6.  Correlation of Accounting Records
    9.7.  Accounting Command-Codes
          9.7.1.  Accounting-Request
          9.7.2.  Accounting-Answer
    9.8.  Accounting AVPs
          9.8.1.  Accounting-Record-Type AVP
          9.8.2.  Acct-Interim-Interval AVP
          9.8.3.  Accounting-Record-Number AVP
          9.8.4.  Acct-Session-Id AVP
          9.8.5.  Acct-Multi-Session-Id AVP
          9.8.6.  Accounting-Sub-Session-Id AVP
          9.8.7.  Accounting-Realtime-Required AVP
10. AVP Occurrence Table
    10.1. Base Protocol Command AVP Table
    10.2. Accounting AVP Table
11. IANA Considerations
    11.1. AVP Header
          11.1.1. AVP Code
          11.1.2. AVP Flags
    11.2. Diameter Header
          11.2.1. Command Codes
          11.2.2. Command Flags
    11.3. Application Identifiers
    11.4. AVP Values (Result-Code, Accounting-Record-Type, Termination-Cause,
          Redirect-Host-Usage, Session-Server-Failover, Session-Binding,
          Disconnect-Cause, Auth-Request-Type, Auth-Session-State,
          Re-Auth-Request-Type, Accounting-Realtime-Required AVP Values)
    11.5. Diameter TCP/SCTP Port Numbers
    11.6. NAPTR Service Fields
12. Diameter Protocol Related Configurable Parameters
13. Security Considerations
    13.1. IPsec Usage
    13.2. TLS Usage
    13.3. Peer-to-Peer Considerations
14. References
    14.1. Normative References
    14.2. Informative References
15. Acknowledgements
Appendix A. Diameter Service Template
Appendix B. NAPTR Example
Appendix C. Duplicate Detection

Calhoun, et al.              Standards Track                    [Page 4]
Appendix D. Intellectual Property Statement ....................... 145
Authors' Addresses ................................................ 146
Full Copyright Statement .......................................... 147

1.  Introduction

Authentication, Authorization and Accounting (AAA) protocols such as TACACS [TACACS] and RADIUS [RADIUS] were initially deployed to provide dial-up PPP [PPP] and terminal server access. Over time, with the growth of the Internet and the introduction of new access technologies, including wireless, DSL, Mobile IP and Ethernet, routers and network access servers (NAS) have increased in complexity and density, putting new demands on AAA protocols.

Network access requirements for AAA protocols are summarized in [AAAREQ]. These include:

Failover
   [RADIUS] does not define failover mechanisms, and as a result, failover behavior differs between implementations. In order to provide well defined failover behavior, Diameter supports application-layer acknowledgements, and defines failover algorithms and the associated state machine. This is described in Section 5.5 and [AAATRANS].

Transmission-level security
   [RADIUS] defines an application-layer authentication and integrity scheme that is required only for use with Response packets. While [RADEXT] defines an additional authentication and integrity mechanism, use is only required during Extensible Authentication Protocol (EAP) sessions. While attribute-hiding is supported, [RADIUS] does not provide support for per-packet confidentiality. In accounting, [RADACCT] assumes that replay protection is provided by the backend billing server, rather than within the protocol itself.

   While [RFC3162] defines the use of IPsec with RADIUS, support for IPsec is not required. Since within [IKE] authentication occurs only within Phase 1 prior to the establishment of IPsec SAs in Phase 2, it is typically not possible to define separate trust or authorization schemes for each application. This limits the usefulness of IPsec in inter-domain AAA applications (such as roaming) where it may be desirable to define a distinct certificate hierarchy for use in a AAA deployment. In order to provide universal support for transmission-level security, and enable both intra- and inter-domain AAA deployments, IPsec support is mandatory in Diameter, and TLS support is optional. Security is discussed in Section 13.

Reliable transport
   RADIUS runs over UDP, and does not define retransmission behavior; as a result, reliability varies between implementations. As described in [ACCMGMT], this is a major issue in accounting, where packet loss may translate directly into revenue loss. In order to provide well defined transport behavior, Diameter runs over reliable transport mechanisms (TCP, SCTP) as defined in [AAATRANS].

Agent support
   [RADIUS] does not provide for explicit support for agents, including Proxies, Redirects and Relays. Since the expected behavior is not defined, it varies between implementations. Diameter defines agent behavior explicitly; this is described in Section 2.8.

Server-initiated messages
   While RADIUS server-initiated messages are defined in [DYNAUTH], support is optional. This makes it difficult to implement features such as unsolicited disconnect or reauthentication/reauthorization on demand across a heterogeneous deployment. Support for server-initiated messages is mandatory in Diameter, and is described in Section 8.

Auditability
   RADIUS does not define data-object security mechanisms, and as a result, untrusted proxies may modify attributes or even packet headers without being detected. Combined with lack of support for capabilities negotiation, this makes it very difficult to determine what occurred in the event of a dispute. While implementation of data object security is not mandatory within Diameter, these capabilities are supported, and are described in [AAACMS].

Transition support
   While Diameter does not share a common protocol data unit (PDU) with RADIUS, considerable effort has been expended in enabling backward compatibility with RADIUS, so that the two protocols may be deployed in the same network. Initially, it is expected that Diameter will be deployed within new network devices, as well as within gateways enabling communication between legacy RADIUS devices and Diameter agents. This capability, described in [NASREQ], enables Diameter support to be added to legacy networks, by addition of a gateway or server speaking both RADIUS and Diameter.

In addition to addressing the above requirements, Diameter also provides support for the following:

Capability negotiation
   RADIUS does not support error messages, capability negotiation, or a mandatory/non-mandatory flag for attributes. Since RADIUS clients and servers are not aware of each other's capabilities, they may not be able to successfully negotiate a mutually acceptable service, or in some cases, even be aware of what service has been implemented. Diameter includes support for error handling (Section 7), capability negotiation (Section 5.3), and mandatory/non-mandatory attribute-value pairs (AVPs) (Section 4.1).

Peer discovery and configuration
   RADIUS implementations typically require that the name or address of servers or clients be manually configured, along with the corresponding shared secrets. This results in a large administrative burden, and creates the temptation to reuse the RADIUS shared secret, which can result in major security vulnerabilities if the Request Authenticator is not globally and temporally unique as required in [RADIUS]. Through DNS, Diameter enables dynamic discovery of peers. Derivation of dynamic session keys is enabled via transmission-level security.

Roaming support
   The ROAMOPS WG provided a survey of roaming implementations [ROAMREV], detailed roaming requirements [ROAMCRIT], defined the Network Access Identifier (NAI) [NAI], and documented existing implementations (and limitations) of RADIUS-based roaming [PROXYCHAIN]. In order to improve scalability, [PROXYCHAIN] introduced the concept of proxy chaining via an intermediate server, facilitating roaming between providers. However, since RADIUS does not provide explicit support for proxies, and lacks auditability and transmission-level security features, RADIUS-based roaming is vulnerable to attack from external parties as well as susceptible to fraud perpetrated by the roaming partners themselves. As a result, it is not suitable for wide-scale deployment on the Internet [PROXYCHAIN]. By providing explicit support for inter-domain roaming and message routing (Sections 2.7 and 6), auditability [AAACMS], and transmission-layer security (Section 13) features, Diameter addresses these limitations and provides for secure and scalable roaming.

In the decade since AAA protocols were first introduced, the capabilities of Network Access Server (NAS) devices have increased substantially. As a result, while Diameter is a considerably more sophisticated protocol than RADIUS, it remains feasible to implement it within embedded devices, given improvements in processor speeds and the widespread availability of embedded IPsec and TLS implementations.
1.1.  Diameter Protocol

The Diameter base protocol provides the following facilities:

-  Delivery of AVPs (attribute value pairs)
-  Capabilities negotiation
-  Error notification
-  Extensibility, through addition of new commands and AVPs (required in [AAAREQ])
-  Basic services necessary for applications, such as handling of user sessions or accounting

All data delivered by the protocol is in the form of an AVP. Some of these AVP values are used by the Diameter protocol itself, while others deliver data associated with particular applications that employ Diameter. AVPs may be added arbitrarily to Diameter messages, so long as the required AVPs are included and AVPs that are explicitly excluded are not included.

AVPs are used by the base Diameter protocol to support the following required features:

-  Transporting of user authentication information, for the purposes of enabling the Diameter server to authenticate the user.
-  Transporting of service specific authorization information, between client and servers, allowing the peers to decide whether a user's access request should be granted.
-  Exchanging resource usage information, which MAY be used for accounting purposes, capacity planning, etc.
-  Relaying, proxying and redirecting of Diameter messages through a server hierarchy.

The Diameter base protocol provides the minimum requirements needed for a AAA protocol, as required by [AAAREQ]. The base protocol may be used by itself for accounting purposes only, or it may be used with a Diameter application, such as Mobile IPv4 [DIAMMIP], or network access [NASREQ]. It is also possible for the base protocol to be extended for use in new applications, via the addition of new commands or AVPs. At this time the focus of Diameter is network access and accounting applications. A truly generic AAA protocol used by many applications might provide functionality not provided by Diameter. Therefore, it is imperative that the designers of new applications understand their requirements before using Diameter.

The Diameter protocol also supports server-initiated messages, such as a request to abort service to a particular user.

See Section 2.4 for more information on Diameter applications.

1.1.1.  Description of the Document Set

Currently, the Diameter specification consists of a base specification (this document), the Transport Profile [AAATRANS], and applications: Mobile IPv4 [DIAMMIP] and NASREQ [NASREQ]. The Transport Profile document [AAATRANS] discusses transport layer issues that arise with AAA protocols and recommendations on how to overcome these issues. This document also defines the Diameter failover algorithm and state machine.

The Mobile IPv4 [DIAMMIP] application defines a Diameter application that allows a Diameter server to perform AAA functions for Mobile IPv4 services to a mobile node. The NASREQ [NASREQ] application defines a Diameter Application that allows a Diameter server to be used in a PPP/SLIP Dial-Up and Terminal Server Access environment. Consideration was given for servers that need to perform protocol conversion between Diameter and RADIUS.

In this document, a Diameter Client is a device at the edge of the network that performs access control, such as a Network Access Server (NAS) or a Foreign Agent (FA). A Diameter client generates Diameter messages to request authentication, authorization, and accounting services for the user. A Diameter agent is a node that does not authenticate and/or authorize messages locally; agents include proxies, redirects and relay agents. A Diameter server performs authentication and/or authorization of the user. A Diameter node MAY act as an agent for certain requests while acting as a server for others. Diameter is a peer-to-peer protocol; in that sense, any node can initiate a request.

In summary, this document defines the base protocol specification for AAA, which includes support for accounting.
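Since all data in Diameter travels as AVPs, the on-the-wire shape of one is worth sketching. The following is a rough, illustrative encoding based on the AVP header described in this specification (4-byte code, 1-byte flags, 3-byte length, data padded to a 32-bit boundary, vendor-specific AVPs omitted); the class and constant names are mine, not the RFC's:

```java
import java.nio.ByteBuffer;

public class AvpSketch {

    static final int FLAG_MANDATORY = 0x40; // the 'M' bit in the AVP Flags octet

    // Encodes one AVP: 4-byte code, 1-byte flags, 3-byte length covering
    // header plus data, then data padded with zeros to a 32-bit boundary.
    public static byte[] encode(int code, int flags, byte[] data) {
        int length = 8 + data.length;       // header + data, before padding
        int padded = (length + 3) & ~3;     // round up to a multiple of 4
        ByteBuffer buf = ByteBuffer.allocate(padded);
        buf.putInt(code);
        buf.put((byte) flags);
        buf.put((byte) (length >> 16));     // 24-bit length, big-endian
        buf.put((byte) (length >> 8));
        buf.put((byte) length);
        buf.put(data);                      // remaining pad bytes stay zero
        return buf.array();
    }

    public static void main(String[] args) {
        // 263 is the Session-Id AVP code defined by the base protocol.
        byte[] avp = encode(263, FLAG_MANDATORY, "abc".getBytes());
        System.out.println(avp.length);     // 12: 8-byte header + 3 data + 1 pad
    }
}
```

A real implementation would also handle the Vendor-ID field when the 'V' bit is set; this sketch only covers the basic layout.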
1.2.  Approach to Extensibility

The Diameter protocol is designed to be extensible, using several mechanisms, including:

-  Defining new AVP values
-  Creating new AVPs
-  Creating new authentication/authorization applications
-  Creating new accounting applications
-  Application authentication procedures

Reuse of existing AVP values, AVPs and Diameter applications are strongly recommended. Reuse simplifies standardization and implementation and avoids potential interoperability issues. It is expected that command codes are reused; new command codes can only be created by IETF Consensus (see Section 11.2.1). IANA considerations for Diameter are discussed in Section 11.

1.2.1.  Defining New AVP Values

New applications should attempt to reuse AVPs defined in existing applications when possible, as opposed to creating new AVPs. For AVPs of type Enumerated, an application may require a new value to communicate some service-specific information. In order to allocate a new AVP value, a request MUST be sent to IANA, along with an explanation of the new AVP value. The request MUST include the commands that would make use of the AVP.

1.2.2.  Creating New AVPs

When no existing AVP can be used, a new AVP should be created. The new AVP being defined MUST use one of the data types listed in Section 4.2. In the event that a logical grouping of AVPs is necessary, and multiple "groups" are possible in a given command, it is recommended that a Grouped AVP be used (see Section 4.4). In order to create a new AVP, a request MUST be sent to IANA [IANA], with a specification for the AVP.

1.2.3.  Creating New Authentication Applications

Every Diameter application specification MUST have an IANA assigned Application Identifier (see Section 2.4) or a vendor specific Application Identifier.

Should a new Diameter usage scenario find itself unable to fit within an existing application without requiring major changes to the specification, it may be desirable to create a new Diameter application. Major changes to an application include:

-  Adding new AVPs to the command, which have the "M" bit set.
-  Requiring a command that has a different number of round trips to satisfy a request (e.g., application foo has a command that requires one round trip, but new application bar has a command that requires two round trips to complete).
-  Adding support for an authentication method requiring definition of new AVPs for use with the application. Since a new EAP authentication method can be supported within Diameter without requiring new AVPs, addition of EAP methods does not require the creation of a new authentication application.

Creation of a new application should be viewed as a last resort. An implementation MAY add arbitrary non-mandatory AVPs to any command defined in an application, including vendor-specific AVPs, without needing to define a new application. Please refer to Section 11.1.1 for details.

In order to justify allocation of a new application identifier, Diameter applications MUST define one Command Code, or add new mandatory AVPs to the ABNF. The expected AVPs MUST be defined in an ABNF [ABNF] grammar (see Section 3.2). If the Diameter application has accounting requirements, it MUST also specify the AVPs that are to be present in the Diameter Accounting messages (see Section 9.3).

When possible, a new Diameter application SHOULD reuse existing Diameter AVPs, in order to avoid defining multiple AVPs that carry similar information.

1.2.4.  Creating New Accounting Applications

There are services that only require Diameter accounting. Such services need to define the AVPs carried in the Accounting-Request (ACR)/Accounting-Answer (ACA) messages, but do not need to define new command codes. An implementation MAY add arbitrary non-mandatory AVPs (AVPs with the "M" bit not set) to any command defined in an application, including vendor-specific AVPs, without needing to define a new accounting application. Please refer to Section 11.1.1 for details.

Every Diameter accounting application specification MUST have an IANA assigned Application Identifier (see Section 2.4) or a vendor specific Application Identifier.

Within an accounting command, setting the "M" bit implies that a backend server (e.g., billing server) or the accounting server itself MUST understand the AVP in order to compute a correct bill. If the AVP is not relevant to the billing process, when the AVP is included within an accounting command, it MUST NOT have the "M" bit set, even if the "M" bit is set when the same AVP is used within other Diameter commands (i.e., authentication/authorization commands).

A mandatory AVP is defined as one which has the "M" bit set when sent within an accounting command, regardless of whether it is required or optional within the ABNF for the accounting application.

The creation of a new accounting application should be viewed as a last resort and MUST NOT be used unless a new command or additional mechanisms (e.g., application defined state machine) is defined within the application, or new mandatory AVPs are added to the ABNF.

The combination of the home domain and the accounting application Id can be used in order to route the request to the appropriate accounting server. If the base accounting is used without any mandatory AVPs, new commands or additional mechanisms (e.g., application defined state machine), then the base protocol defined standard accounting application Id (Section 2.4) MUST be used in ACR/ACA commands. Application Identifiers are still required for Diameter capability exchange.

Every Diameter implementation MUST support accounting. Basic accounting support is sufficient to handle any application that uses the ACR/ACA commands defined in this document. A Diameter base accounting implementation MUST be configurable to advertise supported accounting applications in order to prevent the accounting server from accepting accounting requests for unbillable services.

When possible, a new Diameter accounting application SHOULD attempt to reuse existing AVPs, in order to avoid defining multiple AVPs that carry similar information.

1.2.5.  Application Authentication Procedures

When possible, applications SHOULD be designed such that new authentication methods MAY be added without requiring changes to the application. This MAY require that new AVP values be assigned to represent the new authentication transform, or any other scheme that produces similar results. When possible, authentication frameworks, such as Extensible Authentication Protocol [EAP], SHOULD be used.

1.3.  Terminology

AAA
   Authentication, Authorization and Accounting.

Accounting
   The act of collecting information on resource usage for the purpose of capacity planning, auditing, billing or cost allocation.

Accounting Record
   An accounting record represents a summary of the resource consumption of a user over the entire session. Accounting servers creating the accounting record may do so by processing interim accounting events or accounting events from several devices serving the same user.

Authentication
   The act of verifying the identity of an entity (subject).

Authorization
   The act of determining whether a requesting entity (subject) will be allowed access to a resource (object).

AVP
   The Diameter protocol consists of a header followed by one or more Attribute-Value-Pairs (AVPs). An AVP includes a header and is used to encapsulate protocol-specific data (e.g., routing information) as well as authentication, authorization or accounting information.

Broker
   A broker is a business term commonly used in AAA infrastructures. A broker is either a relay, proxy or redirect agent, and MAY be operated by roaming consortiums. Depending on the business model, a broker may either choose to deploy relay agents or proxy agents.

Diameter Agent
   A Diameter Agent is a Diameter node that provides either relay, proxy, redirect or translation services.

Diameter Client
   A Diameter Client is a device at the edge of the network that performs access control. An example of a Diameter client is a Network Access Server (NAS) or a Foreign Agent (FA).

Diameter Node
   A Diameter node is a host process that implements the Diameter protocol, and acts either as a Client, Agent or Server.

Diameter Peer
   A Diameter Peer is a Diameter Node to which a given Diameter Node has a direct transport connection.

Diameter Server
   A Diameter Server is one that handles authentication, authorization and accounting requests for a particular realm. By its very nature, a Diameter Server MUST support Diameter applications in addition to the base protocol.

Downstream
   Downstream is used to identify the direction of a particular Diameter message from the home server towards the access device.

End-to-End Security
   TLS and IPsec provide hop-by-hop security, or security across a transport connection. When relays or proxies are involved, this hop-by-hop security does not protect the entire Diameter user session. End-to-end security is security between two Diameter nodes, possibly communicating through Diameter Agents. This security protects the entire Diameter communications path from the originating Diameter node to the terminating Diameter node.

Home Realm
   A Home Realm is the administrative domain with which the user maintains an account relationship.

Home Server
   See Diameter Server.
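The "M" bit rule from Section 1.2.4 (an AVP that is not relevant to billing MUST NOT be mandatory inside an accounting command) can be illustrated with a small sketch. The constant value follows the AVP Flags layout in this specification; the class and method names are illustrative only:

```java
public class MandatoryBitSketch {

    static final int FLAG_MANDATORY = 0x40; // the 'M' bit in the AVP Flags octet

    // Per Section 1.2.4: within an accounting command, an AVP that is not
    // relevant to the billing process MUST NOT carry the 'M' bit, even if
    // the same AVP is mandatory in authentication/authorization commands.
    public static int flagsForAccounting(int flags, boolean relevantToBilling) {
        return relevantToBilling ? flags : (flags & ~FLAG_MANDATORY);
    }

    public static boolean isMandatory(int flags) {
        return (flags & FLAG_MANDATORY) != 0;
    }

    public static void main(String[] args) {
        int flags = FLAG_MANDATORY;
        System.out.println(isMandatory(flagsForAccounting(flags, true)));  // true
        System.out.println(isMandatory(flagsForAccounting(flags, false))); // false
    }
}
```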
Diameter Security Exchange
   A Diameter Security Exchange is a process through which two Diameter nodes establish end-to-end security.

Interim accounting
   An interim accounting message provides a snapshot of usage during a user's session. It is typically implemented in order to provide for partial accounting of a user's session in the case where a device reboot or other network problem prevents the reception of a session summary message or session record.

Local Realm
   A local realm is the administrative domain providing services to a user. An administrative domain MAY act as a local realm for certain users, while being a home realm for others.

Multi-session
   A multi-session represents a logical linking of several sessions. Multi-sessions are tracked by using the Acct-Multi-Session-Id. An example of a multi-session would be a Multi-link PPP bundle. Each leg of the bundle would be a session while the entire bundle would be a multi-session.

Network Access Identifier
   The Network Access Identifier, or NAI [NAI], is used in the Diameter protocol to extract a user's identity and realm. The identity is used to identify the user during authentication and/or authorization, while the realm is used for message routing purposes.

Proxy Agent or Proxy
   In addition to forwarding requests and responses, proxies make policy decisions relating to resource usage and provisioning. This is typically accomplished by tracking the state of NAS devices. While proxies typically do not respond to client Requests prior to receiving a Response from the server, they may originate Reject messages in cases where policies are violated. As a result, proxies need to understand the semantics of the messages passing through them, and may not support all Diameter applications.

Realm
   The string in the NAI that immediately follows the '@' character. NAI realm names are required to be unique, and are piggybacked on the administration of the DNS namespace. Diameter makes use of the realm, also loosely referred to as domain, to determine whether messages can be satisfied locally, or whether they must be routed or redirected. In RADIUS, realm names are not necessarily piggybacked on the DNS namespace but may be independent of it.

Real-time Accounting
   Real-time accounting involves the processing of information on resource usage within a defined time window. Time constraints are typically imposed in order to limit financial risk.

Relay Agent or Relay
   Relays forward requests and responses based on routing-related AVPs and realm routing table entries. Since relays do not make policy decisions, they do not examine or alter non-routing AVPs. As a result, relays never originate messages, do not need to understand the semantics of messages or non-routing AVPs, and are capable of handling any Diameter application or message type. Since relays make decisions based on information in routing AVPs and realm forwarding tables, they do not keep state on NAS resource usage or sessions in progress.

Redirect Agent
   Rather than forwarding requests and responses between clients and servers, redirect agents refer clients to servers and allow them to communicate directly. Since redirect agents do not sit in the forwarding path, they do not alter any AVPs transiting between client and server. Redirect agents do not originate messages and are capable of handling any message type, although they may be configured only to redirect messages of certain types, while acting as relay or proxy agents for other types. As with proxy agents, redirect agents do not keep state with respect to sessions or NAS resources.

Roaming Relationships
   Roaming relationships include relationships between companies and ISPs, relationships among peer ISPs within a roaming consortium, and relationships between an ISP and a roaming consortium.

Security Association
   A security association is an association between two endpoints in a Diameter session which allows the endpoints to communicate with integrity and confidentiality, even in the presence of relays and/or proxies.

Session
   A session is a related progression of events devoted to a particular activity. Each application SHOULD provide guidelines as to when a session begins and ends. All Diameter packets with the same Session-Identifier are considered to be part of the same session.
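The NAI and Realm definitions above amount to a simple split on the '@' character: the identity is used for authentication and/or authorization, and the realm for routing. The following is a minimal sketch of that extraction; the class and method names are illustrative, not from the RFC:

```java
public class NaiSketch {

    // Returns the realm of a Network Access Identifier of the form
    // user@realm: the part after '@', which Diameter uses for routing.
    public static String realmOf(String nai) {
        int at = nai.indexOf('@');
        return at < 0 ? "" : nai.substring(at + 1);
    }

    // Returns the identity part, used during authentication/authorization.
    public static String identityOf(String nai) {
        int at = nai.indexOf('@');
        return at < 0 ? nai : nai.substring(0, at);
    }

    public static void main(String[] args) {
        System.out.println(realmOf("alice@example.com"));    // example.com
        System.out.println(identityOf("alice@example.com")); // alice
    }
}
```

A production parser would also handle escaping and validation as specified for NAIs; this sketch only shows the identity/realm split the definitions describe.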
2. Protocol Overview

The base Diameter protocol may be used by itself for accounting applications, but for use in authentication and authorization it is always extended for a particular application. Two Diameter applications are defined by companion documents: NASREQ [NASREQ] and Mobile IPv4 [DIAMMIP]. These applications are introduced in this document but specified elsewhere. Additional Diameter applications MAY be defined in the future (see Section 11.3).

Diameter Clients MUST support the base protocol, which includes accounting. In addition, they MUST fully support each Diameter application that is needed to implement the client's service, e.g., NASREQ and/or Mobile IPv4. A Diameter Client that does not support both NASREQ and Mobile IPv4 MUST be referred to as a "Diameter X Client", where X is the application which it supports, and not a "Diameter Client".

Diameter Servers MUST support the base protocol, which includes accounting. In addition, they MUST fully support each Diameter application that is needed to implement the intended service, e.g., NASREQ and/or Mobile IPv4. A Diameter Server that does not support both NASREQ and Mobile IPv4 MUST be referred to as a "Diameter X Server", where X is the application which it supports, and not a "Diameter Server".

Diameter Relays and redirect agents are, by definition, protocol transparent, and MUST transparently support the Diameter base protocol, which includes accounting, and all Diameter applications.

Diameter proxies MUST support the base protocol, which includes accounting. In addition, they MUST fully support each Diameter application that is needed to implement proxied services, e.g., NASREQ and/or Mobile IPv4. A Diameter proxy which does not support both NASREQ and Mobile IPv4 MUST be referred to as a "Diameter X Proxy", where X is the application which it supports, and not a "Diameter Proxy".

The base Diameter protocol concerns itself with capabilities negotiation, how messages are sent, and how peers may eventually be abandoned. The base protocol also defines certain rules that apply to all exchanges of messages between Diameter nodes.

Communication between Diameter peers begins with one peer sending a message to another Diameter peer. The set of AVPs included in the message is determined by a particular Diameter application. One AVP that is included to reference a user's session is the Session-Id. The initial request for authentication and/or authorization of a user would include the Session-Id, which is then used in all subsequent messages to identify the user's session (see Section 8 for more information). The communicating party may accept the request, or reject it by returning an answer message with the Result-Code AVP set to indicate that an error occurred. The specific behavior of the Diameter server or client receiving a request depends on the Diameter application employed.

Session state (associated with a Session-Id) MUST be freed upon receipt of the Session-Termination-Request or Session-Termination-Answer, upon expiration of authorized service time in the Session-Timeout AVP, and according to rules established in a particular Diameter application.

2.1. Transport

The transport profile is defined in [AAATRANS]. The base Diameter protocol is run on port 3868 of both the TCP [TCP] and SCTP [SCTP] transport protocols. Diameter clients MUST support either TCP or SCTP, while agents and servers MUST support both. Future versions of this specification MAY mandate that clients support SCTP.

A Diameter node MAY initiate connections from a source port other than the one that it declares it accepts incoming connections on, and MUST be prepared to receive connections on port 3868. A given Diameter instance of the peer state machine MUST NOT use more than one transport connection to communicate with a given peer, unless multiple instances exist on the peer, in which case a separate connection per process is allowed.

When no transport connection exists with a peer, an attempt to connect SHOULD be made periodically. This behavior is handled via the Tc timer, whose recommended value is 30 seconds. There are certain exceptions to this rule, such as when a peer has terminated the transport connection stating that it does not wish to communicate.

When connecting to a peer and either zero or more transports are specified, SCTP SHOULD be tried first, followed by TCP. See Section 5.2 for more information on peer discovery.

Diameter implementations SHOULD be able to interpret ICMP protocol port unreachable messages as explicit indications that the server is not reachable, subject to security policy on trusting such messages. Diameter implementations SHOULD also be able to interpret a reset from the transport, and timed-out connection attempts.

If Diameter receives data from TCP that cannot be parsed or identified as a Diameter error made by the peer, the stream is compromised and cannot be recovered. The transport connection MUST be closed using a RESET call (send a TCP RST bit) or an SCTP ABORT message (graceful closure is compromised).

2.1.1. SCTP Guidelines

The following are guidelines for Diameter implementations that support SCTP:

1. For interoperability: All Diameter nodes MUST be prepared to receive Diameter messages on any SCTP stream in the association.

2. To prevent blocking: All Diameter nodes SHOULD utilize all SCTP streams available to the association, to prevent head-of-the-line blocking.

2.2. Securing Diameter Messages

Diameter clients, such as Network Access Servers (NASes) and Mobility Agents, MUST support IP Security [SECARCH], and MAY support TLS [TLS]. Diameter servers MUST support TLS and IPsec. The Diameter protocol MUST NOT be used without any security mechanism (TLS or IPsec).

It is suggested that IPsec be used primarily at the edges and in intra-domain traffic, such as using pre-shared keys between a NAS and a local AAA proxy. This also eases the requirements on the NAS to support certificates. It is also suggested that inter-domain traffic would primarily use TLS. See Sections 13.1 and 13.2 for details on IPsec and TLS usage.

2.3. Diameter Application Compliance

Application Identifiers are advertised during the capabilities exchange phase (see Section 5.3). For a given application, advertising support of an application implies that the sender supports all command codes, and the AVPs specified in the associated ABNFs, described in the specification. An implementation MAY add arbitrary non-mandatory AVPs to any command defined in an application, including vendor-specific AVPs. Please refer to Section 11.1.1 for details.

2.4. Application Identifiers

Each Diameter application MUST have an IANA assigned Application Identifier (see Section 11.3). The base protocol does not require an Application Identifier since its support is mandatory. During the capabilities exchange, Diameter nodes inform their peers of locally supported applications. Furthermore, all Diameter messages contain an Application Identifier, which is used in the message forwarding process. The following Application Identifier values are defined:

   Diameter Common Messages     0
   NASREQ                       1  [NASREQ]
   Mobile-IP                    2  [DIAMMIP]
   Diameter Base Accounting     3
   Relay                        0xffffffff

Relay and redirect agents MUST advertise the Relay Application Identifier, while all other Diameter nodes MUST advertise locally supported applications. The receiver of a Capabilities Exchange message advertising Relay service MUST assume that the sender supports all current and future applications. Diameter relay and proxy agents are responsible for finding an upstream server that supports the application of a particular message. If none can be found, an error message is returned with the Result-Code AVP set to DIAMETER_UNABLE_TO_DELIVER.

2.5. Connections vs. Sessions

This section attempts to provide the reader with an understanding of the difference between connection and session, which are terms used extensively throughout this document. A connection is a transport level connection between two peers, used to send and receive Diameter messages. A session is a logical concept at the application layer; it is shared between an access device and a server, and is identified via the Session-Id AVP.

A Diameter agent decides whether a request is to be processed locally or forwarded towards another realm. This routing decision is performed using a list of supported realms and known peers; this is known as the Realm Routing Table, as is defined further in Section 2.7. Agents can also be used for load balancing: a complex network will have multiple authentication sources, and agents can sort requests and forward them towards the correct target.

The Diameter protocol requires that agents maintain transaction state, which is used for failover purposes. Transaction state implies that upon forwarding a request, its Hop-by-Hop identifier is saved;
the field is replaced with a locally unique identifier, which is restored to its original value when the corresponding answer is received. The request's state is released upon receipt of the answer. A stateless agent is one that only maintains transaction state. The Proxy-Info AVP allows stateless agents to add local state to a Diameter request, with the guarantee that the same state will be present in the answer.

A stateful agent is one that maintains session state information, by keeping track of all authorized active sessions. Each authorized session is bound to a particular service, and its state is considered active either until it is notified otherwise, or by expiration. Each authorized session has an expiration, which is communicated by Diameter servers via the Session-Timeout AVP.

Maintaining session state MAY be useful in certain applications, such as:

   -  Protocol translation (e.g., RADIUS <-> Diameter)

   -  Limiting resources authorized to a particular user

   -  Per user or transaction auditing

A Diameter agent MAY act in a stateful manner for some requests and be stateless for others. A Diameter implementation MAY act as one type of agent for some requests, and as another type of agent for others. Note, however, that the protocol's failover procedures require that agents maintain a copy of pending requests.

2.8.1. Relay Agents

Relay Agents are Diameter agents that accept requests and route messages to other Diameter nodes based on information found in the messages (e.g., the Destination-Realm AVP).
Relays MAY be used to aggregate requests from multiple Network Access Servers (NASes) within a common geographical area (POP). The use of Relays is advantageous since it eliminates the need for NASes to be configured with the necessary security information they would otherwise require to communicate with Diameter servers in other realms. Likewise, this reduces the configuration load on Diameter servers that would otherwise be necessary when NASes are added, changed or deleted.

Since Relays do not perform any application level processing, they provide relaying services for all Diameter applications, and therefore MUST advertise the Relay Application Identifier. Relays SHOULD NOT maintain session state, but MUST maintain transaction state.

   +------+ ---------> +------+ ---------> +------+
   |      | 1. Request |      | 2. Request |      |
   | NAS  |            | DRL  |            | HMS  |
   |      | 4. Answer  |      | 3. Answer  |      |
   +------+ <--------- +------+ <--------- +------+
   example.net          example.net         example.com

            Figure 2: Relaying of Diameter messages

The example provided in Figure 2 depicts a request issued from NAS, which is an access device, for the user bob@example.com. Prior to issuing the request, NAS performs a Diameter route lookup, using "example.com" as the key, and determines that the message is to be relayed to DRL, which is a Diameter Relay. DRL performs the same route lookup as NAS, and relays the message to HMS, which is example.com's Home Diameter Server. HMS identifies that the request can be locally supported (via the realm), processes the authentication and/or authorization request, and replies with an answer, which is routed back to NAS using saved transaction state.

2.8.2. Proxy Agents

Similarly to relays, proxy agents route Diameter messages using the Diameter Routing Table. However, they differ since they modify messages to implement policy enforcement. This requires that proxies maintain the state of their downstream peers (e.g., access devices) to enforce resource usage, provide admission control, and provide provisioning.
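The realm-based route lookup that NAS and DRL perform in Figure 2 above can be sketched as follows. The table contents, function name, and return shape are hypothetical, for illustration only; DIAMETER_UNABLE_TO_DELIVER is the result code the base protocol defines for the case where no suitable upstream server can be found.

```python
# Hypothetical Realm Routing Table: realm -> next-hop peer.
REALM_ROUTING_TABLE = {
    "example.com": "HMS",  # example.com's home Diameter server
    "example.net": "DRL",  # local relay
}

def route_request(nai):
    """Look up the next hop using the realm portion of the NAI as key."""
    realm = nai.split("@", 1)[1]        # "bob@example.com" -> "example.com"
    next_hop = REALM_ROUTING_TABLE.get(realm)
    if next_hop is None:
        # No upstream server supports this realm/application.
        return ("error", "DIAMETER_UNABLE_TO_DELIVER")
    return ("relay", next_hop)
```

For example, `route_request("bob@example.com")` selects HMS, mirroring steps 1 and 2 of the figure.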
Proxies MAY be used in call control centers or access ISPs that provide outsourced connections; they can monitor the number and types of ports in use, and make allocation and admission decisions according to their configuration. Since enforcing policies requires an understanding of the service being provided, proxies MUST only advertise the Diameter applications they support. Proxies that wish to limit resources MUST maintain session state, and all proxies MUST maintain transaction state. It is important to note that although proxies MAY provide a value-add function for NASes, they do not allow access devices to use end-to-end security, since modifying messages breaks authentication.

2.8.3. Redirect Agents

Redirect agents are useful in scenarios where the Diameter routing configuration needs to be centralized. An example is a redirect agent that provides services to all members of a consortium, but does not wish to be burdened with relaying all messages between realms. This scenario is advantageous since it does not require that the consortium provide routing updates to its members when changes are made to a member's infrastructure.

Since redirect agents do not relay messages, and only return an answer with the information necessary for Diameter agents to communicate directly, they do not modify messages. Since redirect agents do not receive answer messages, they cannot maintain session state. Further, since redirect agents never relay requests, they are not required to maintain transaction state.

The example provided in Figure 3 depicts a request issued from the access device, NAS, for the user bob@example.com. The message is forwarded by the NAS to its relay, DRL, which does not have a routing entry in its Diameter Routing Table for example.com. DRL has a default route configured to DRD, which is a redirect agent that returns a redirect notification to DRL, as well as HMS' contact information. Upon receipt of the redirect notification, DRL establishes a transport connection with HMS, if one doesn't already exist, and forwards the request to it.
                               +------+
                               |      |
                               | DRD  |
                               |      |
                               +------+
                                 ^  |
                      2. Request |  | 3. Redirection
                                 |  |    Notification
                                 |  v
   +------+ ---------> +------+ ---------> +------+
   |      | 1. Request |      | 4. Request |      |
   | NAS  |            | DRL  |            | HMS  |
   |      | 6. Answer  |      | 5. Answer  |      |
   +------+ <--------- +------+ <--------- +------+
   example.net          example.net         example.com

           Figure 3: Redirecting a Diameter Message

Since redirect agents do not perform any application level processing, they provide relaying services for all Diameter applications, and therefore MUST advertise the Relay Application Identifier.

2.8.4. Translation Agents

A translation agent is a device that provides translation between two protocols (e.g., RADIUS<->Diameter, TACACS+<->Diameter). Translation agents are likely to be used as aggregation servers to communicate with a Diameter infrastructure, while allowing for the embedded systems to be migrated at a slower pace. Given that the Diameter protocol introduces the concept of long-lived authorized sessions, translation agents MUST be session stateful and MUST maintain transaction state. Translation of messages can only occur if the agent recognizes the application of a particular request; therefore, translation agents MUST only advertise their locally supported applications.

   +------+ --------------->  +------+ ----------------->  +------+
   |      |  RADIUS Request   |      |  Diameter Request   |      |
   | NAS  |                   | TLA  |                     | HMS  |
   |      |  RADIUS Answer    |      |  Diameter Answer    |      |
   +------+ <---------------  +------+ <-----------------  +------+
   example.net                example.net                  example.com

           Figure 4: Translation of RADIUS to Diameter
2.9. End-to-End Security Framework

End-to-end security services include confidentiality and message origin authentication. These services are provided by supporting AVP integrity and confidentiality between two peers, communicating through agents. End-to-end security is provided via the End-to-End security extension, described in [AAACMS].

The circumstances requiring the use of end-to-end security are determined by policy on each of the peers. Security policies, which are not the subject of standardization, may be applied by the next hop Diameter peer or by the destination realm. For example, where TLS or IPsec transmission-level security is sufficient, there may be no need for end-to-end security. End-to-end security policies include:

   -  Never use end-to-end security.

   -  Use end-to-end security on messages containing sensitive AVPs. Which AVPs are sensitive is determined by service provider policy. AVPs containing keys and passwords should be considered sensitive. Accounting AVPs may be considered sensitive. Any AVP for which the P bit may be set, or which may be encrypted, may be considered sensitive.

   -  Always use end-to-end security.

It is strongly RECOMMENDED that all Diameter implementations support end-to-end security.

2.10. Diameter Path Authorization

As noted in Section 2.2, Diameter requires transmission level security to be used on each connection (TLS or IPsec). Therefore, each connection is authenticated, replay and integrity protected, and confidential on a per-packet basis.

In addition to authenticating each connection, each connection as well as the entire session MUST also be authorized. Before initiating a connection, a Diameter Peer MUST check that its peers are authorized to act in their roles. For example, a Diameter peer may be authentic, but that does not mean that it is authorized to act as a Diameter Server advertising a set of Diameter applications.
Prior to bringing up a connection, authorization checks are performed at each connection along the path. Diameter capabilities negotiation (CER/CEA) also MUST be carried out, in order to determine what Diameter applications are supported by each peer.

As noted in Section 6.1.8, a relay or proxy agent MUST append a Route-Record AVP to all requests forwarded. The AVP contains the identity of the peer the request was received from.

The home Diameter server, prior to authorizing a session, MUST check the Route-Record AVPs to make sure that the route traversed by the request is acceptable. For example, administrators within the home realm may not wish to honor requests that have been routed through an untrusted realm. By authorizing a request, the home Diameter server is implicitly indicating its willingness to engage in the business transaction as specified by the contractual relationship between the server and the previous hop. A DIAMETER_AUTHORIZATION_REJECTED error message (see Section 7.1.5) is sent if the route traversed by the request is unacceptable.

A home realm may also wish to check that each accounting request message corresponds to a Diameter response authorizing the session. Accounting requests without corresponding authorization responses SHOULD be subjected to further scrutiny, as should accounting requests indicating a difference between the requested and provided service.

Similarly, the local Diameter agent, on receiving a Diameter response authorizing a session, MUST check the Route-Record AVPs to make sure that the route traversed by the response is acceptable. At each step, forwarding of an authorization response is considered evidence of a willingness to take on financial risk relative to the session. A local realm may wish to limit this exposure, for example, by establishing credit limits for intermediate realms and refusing to accept responses which would violate those limits. By issuing an accounting request corresponding to the authorization response, the local realm implicitly indicates its agreement to provide the service indicated in the authorization response. If the service cannot be provided by the local realm, then a DIAMETER_UNABLE_TO_COMPLY error message MUST be sent within the accounting request. A Diameter client receiving an authorization response for a service that it cannot perform MUST NOT substitute an alternate service, and then send accounting requests for the alternate service instead.
Diameter sessions MUST be routed only through authorized nodes that have advertised support for the Diameter application required by the session.

3. Diameter Header

A summary of the Diameter header format is shown below. The fields are transmitted in network byte order.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |    Version    |                 Message Length                |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   | command flags |                  Command-Code                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         Application-ID                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Hop-by-Hop Identifier                    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      End-to-End Identifier                    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  AVPs ...
   +-+-+-+-+-+-+-+-+-+-+-+-+-

Version
   This Version field MUST be set to 1 to indicate Diameter Version 1.

Message Length
   The Message Length field is three octets and indicates the length of the Diameter message, including the header fields.

Command Flags
   The Command Flags field is eight bits. The following bits are assigned:

      0 1 2 3 4 5 6 7
     +-+-+-+-+-+-+-+-+
     |R P E T r r r r|
     +-+-+-+-+-+-+-+-+

   R(equest) - If set, the message is a request. If cleared, the message is an answer.

   P(roxiable) - If set, the message MAY be proxied, relayed or redirected. If cleared, the message MUST be locally processed.

   E(rror) - If set, the message contains a protocol error, and the message will not conform to the ABNF described for this command.
   Messages with the 'E' bit set are commonly referred to as error messages. This bit MUST NOT be set in request messages. See Section 7.2.

   T(Potentially re-transmitted message) - This flag is set after a link failover procedure, to aid the removal of duplicate requests. It is set when resending requests not yet acknowledged, as an indication of a possible duplicate due to a link failure. This bit MUST be cleared when sending a request for the first time; otherwise the sender MUST set this flag. Diameter agents only need to be concerned about the number of requests they send based on a single received request; retransmissions by other entities need not be tracked. Diameter agents that receive a request with the T flag set MUST keep the T flag set in the forwarded request. This flag MUST NOT be set if an error answer message (e.g., a protocol error) has been received for the earlier message. It can be set only in cases where no answer has been received from the server for a request, and the request is sent again. This flag MUST NOT be set in answer messages.

   r(eserved) - These flag bits are reserved for future use; they MUST be set to zero and ignored by the receiver.

Command-Code
   The Command-Code field is three octets, and is used in order to communicate the command associated with the message. The 24-bit address space is managed by IANA (see Section 11.2.1). Command-Code values 16,777,214 and 16,777,215 (hexadecimal values FFFFFE-FFFFFF) are reserved for experimental use (see Section 11.3).

Application-ID
   Application-ID is four octets and is used to identify to which application the message is applicable. The application can be an authentication application, an accounting application, or a vendor-specific application. See Section 11.3 for the possible values that the Application-ID may use. The Application-ID in the header MUST be the same as what is contained in any relevant AVPs contained in the message.

Hop-by-Hop Identifier
   The sender MUST ensure that the Hop-by-Hop identifier in a request is unique on a given connection at any given time, and MAY attempt to ensure that the number is unique across reboots. The sender of an Answer message MUST ensure that the Hop-by-Hop Identifier field contains the same value that was found in the corresponding request. The Hop-by-Hop identifier is normally a monotonically increasing number, whose start value was randomly generated. An answer message that is received with an unknown Hop-by-Hop Identifier MUST be discarded.
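A minimal sketch of parsing the fixed 20-octet header laid out above, assuming the field layout shown; the function and dictionary key names are illustrative, not normative:

```python
import struct

def parse_diameter_header(data):
    """Decode the fixed 20-octet Diameter header (network byte order)."""
    if len(data) < 20:
        raise ValueError("Diameter header is 20 octets")
    word0, word1, word2, word3, word4 = struct.unpack("!5I", data[:20])
    flags = word1 >> 24
    return {
        "version": word0 >> 24,
        "message_length": word0 & 0xFFFFFF,  # includes the header itself
        "request": bool(flags & 0x80),       # R bit
        "proxiable": bool(flags & 0x40),     # P bit
        "error": bool(flags & 0x20),         # E bit
        "retransmit": bool(flags & 0x10),    # T bit
        "command_code": word1 & 0xFFFFFF,
        "application_id": word2,
        "hop_by_hop": word3,
        "end_to_end": word4,
    }
```

Note that Version and Command Flags each occupy the top octet of a 32-bit word, with the 24-bit Message Length and Command-Code fields packed below them.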
End-to-End Identifier
   The End-to-End Identifier is an unsigned 32-bit integer field (in network byte order) and is used to detect duplicate messages. Senders of request messages MUST insert a unique identifier on each message. The identifier MUST remain locally unique for a period of at least 4 minutes, even across reboots. Upon reboot, implementations MAY set the high order 12 bits to contain the low order 12 bits of the current time, and the low order 20 bits to a random value. The originator of an Answer message MUST ensure that the End-to-End Identifier field contains the same value that was found in the corresponding request. The End-to-End Identifier MUST NOT be modified by Diameter agents of any kind. The combination of the Origin-Host (see Section 6.3) and this field is used to detect duplicates. Duplicate requests SHOULD cause the same answer to be transmitted (modulo the Hop-by-Hop Identifier field and any routing AVPs that may be present), and MUST NOT affect any state that was set when the original request was processed. Duplicate answer messages that are to be locally consumed (see Section 6.2) SHOULD be silently discarded.

AVPs
   AVPs are a method of encapsulating information relevant to the Diameter message. See Section 4 for more information on AVPs.

3.1. Command Codes

Each command Request/Answer pair is assigned a command code, and the sub-type (i.e., request or answer) is identified via the 'R' bit in the Command Flags field of the Diameter header. Every Diameter message MUST contain a command code in its header's Command-Code field, which is used to determine the action that is to be taken for a particular message. The following Command Codes are defined in the Diameter base protocol:
   Command-Name               Abbrev.  Code  Reference
   ---------------------------------------------------
   Abort-Session-Request      ASR      274   8.5.1
   Abort-Session-Answer       ASA      274   8.5.2
   Accounting-Request         ACR      271   9.7.1
   Accounting-Answer          ACA      271   9.7.2
   Capabilities-Exchange-     CER      257   5.3.1
      Request
   Capabilities-Exchange-     CEA      257   5.3.2
      Answer
   Device-Watchdog-Request    DWR      280   5.5.1
   Device-Watchdog-Answer     DWA      280   5.5.2
   Disconnect-Peer-Request    DPR      282   5.4.1
   Disconnect-Peer-Answer     DPA      282   5.4.2
   Re-Auth-Request            RAR      258   8.3.1
   Re-Auth-Answer             RAA      258   8.3.2
   Session-Termination-       STR      275   8.4.1
      Request
   Session-Termination-       STA      275   8.4.2
      Answer

3.2. Command Code ABNF specification

Every Command Code defined MUST include a corresponding ABNF specification, which is used to define the AVPs that MUST or MAY be present:

   command-def      = command-name "::=" diameter-message

   command-name     = diameter-name

   diameter-name    = ALPHA *(ALPHA / DIGIT / "-")

   diameter-message = header [ *fixed] [ *required] [ *optional]
                      [ *fixed]

   header           = "<" "Diameter-Header:" command-id
                      [r-bit] [p-bit] [e-bit] [application-id] ">"

   application-id   = 1*DIGIT

   command-id       = 1*DIGIT
                      ; The Command Code assigned to the command

   r-bit            = ", REQ"
                      ; If present, the 'R' bit in the Command
                      ; Flags is set, indicating that the message
                      ; is a request, as opposed to an answer.

   p-bit            = ", PXY"
                      ; If present, the 'P' bit in the Command
                      ; Flags is set, indicating that the message
                      ; is proxiable.

   e-bit            = ", ERR"
                      ; If present, the 'E' bit in the Command
                      ; Flags is set, indicating that the answer
                      ; message contains a Result-Code AVP in
                      ; the "protocol error" class.

   fixed            = [qual] "<" avp-spec ">"
                      ; Defines the fixed position of an AVP.

   required         = [qual] "{" avp-spec "}"
                      ; The AVP MUST be present and can appear
                      ; anywhere in the message.
   optional         = [qual] "[" avp-name "]"
                      ; The avp-name in the 'optional' rule cannot
                      ; evaluate to any AVP Name which is included
                      ; in a fixed or required rule. The AVP can
                      ; appear anywhere in the message.

   qual             = [min] "*" [max]
                      ; See ABNF conventions, RFC 2234 Section 6.6.
                      ; The meaning of the absence of any qualifier
                      ; depends on whether it precedes a fixed,
                      ; required, or optional rule. If a fixed or
                      ; required rule has no qualifier, then
                      ; exactly one such AVP MUST be present. If an
                      ; optional rule has no qualifier, then 0 or 1
                      ; such AVP may be present.
                      ;
                      ; NOTE: "[" and "]" have a different meaning
                      ; than in ABNF (see the optional rule,
                      ; above). These braces cannot be used to
                      ; express optional fixed rules (such as an
                      ; optional ICV at the end). To do this, the
                      ; convention is '0*1fixed'.

   min              = 1*DIGIT
                      ; The minimum number of times the element may
                      ; be present. The default value is zero.

   max              = 1*DIGIT
                      ; The maximum number of times the element may
                      ; be present. The default value is infinity.
                      ; A value of zero implies the AVP MUST NOT be
                      ; present.

   avp-spec         = diameter-name
                      ; The avp-spec has to be an AVP Name, defined
                      ; in the base or extended Diameter
                      ; specifications.

   avp-name         = avp-spec / "AVP"
                      ; The string "AVP" stands for *any* arbitrary
                      ; AVP Name, which does not conflict with the
                      ; required or fixed position AVPs defined in
                      ; the command code definition.
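The qualifier rules above (min defaults to zero, max defaults to infinity, and a missing qualifier on a required rule means exactly one occurrence) can be sketched as a small occurrence check. The rule-table encoding and function name here are assumptions for illustration, not part of the specification:

```python
def check_occurrences(avps, rules):
    """Check AVP occurrence counts against (min, max) rules.

    avps:  list of AVP names appearing in a message.
    rules: {avp_name: (min, max)}, with max=None meaning "infinity".
    Returns a list of human-readable violations (empty if compliant).
    """
    errors = []
    for name, (lo, hi) in rules.items():
        n = avps.count(name)
        if n < lo:
            errors.append(f"{name}: required at least {lo}, got {n}")
        if hi is not None and n > hi:
            errors.append(f"{name}: allowed at most {hi}, got {n}")
    return errors
```

For instance, a required rule with no qualifier would be encoded as `(1, 1)`, and `*[ Route-Record ]` as `(0, None)`.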
The following is a definition of a fictitious command code:

   Example-Request ::= < "Diameter-Header: 9999999, REQ, PXY" >
                       { User-Name }
                    *  { Origin-Host }
                    *  [ AVP ]

3.3. Diameter Command Naming Conventions

Diameter command names typically include one or more English words followed by the verb Request or Answer. Each English word is delimited by a hyphen. A three-letter acronym for both the request and answer is also normally provided. An example is a message set used to terminate a session: the command names are Session-Terminate-Request and Session-Terminate-Answer, while the acronyms are STR and STA, respectively.

Both the request and the answer for a given command share the same command code. The request is identified by the R(equest) bit in the Diameter header set to one (1), to ask that a particular action be performed, such as authorizing a user or terminating a session. Once the receiver has completed the request, it issues the corresponding answer, which includes a result code that communicates one of the following:

   -  The request was successful.

   -  The request failed.

   -  An additional request must be sent to provide information the peer requires prior to returning a successful or failed answer.

   -  The receiver could not process the request, but provides information about a Diameter peer that is able to satisfy the request, known as redirect.

Additional information, encoded within AVPs, MAY also be included in answer messages.

4. Diameter AVPs

Diameter AVPs carry specific authentication, accounting, authorization, routing and security information, as well as configuration details for the request and reply. Some AVPs MAY be listed more than once. The effect of such an AVP is specific, and is specified in each case by the AVP description.
Note that subsequent Diameter applications MAY define additional bits within the AVP Header. AVP numbers 256 and above are used for Diameter.1.1).. proxy. 4. et al. the implementation may resend the message without the AVP. Diameter implementations are required to support all Mandatory AVPs which are allowed by the message’s formal syntax and defined either in the base Diameter standard or in one of the Diameter Application specifications governing the message. or per realm basis that would allow/prevent particular Mandatory AVPs to be sent. In order to preserve interoperability. Standards Track [Page 40] . Unless otherwise noted. AVPs with the ’M’ bit cleared are informational only and a receiver that receives a message with such an AVP that is not supported. AVP Length. The ’V’ bit. et al. MAY simply ignore the AVP. the message SHOULD be rejected.. When set the AVP Code belongs to the specific vendor code address space. AVPs will have the following default AVP Flags field settings: The ’M’ bit MUST be set. AVP Flags. Calhoun. indicates whether the optional Vendor-ID field is present in the AVP header. Vendor-ID field (if present) and the AVP data. 2) A configuration option may be provided on a system wide. Thus an administrator could change the configuration to avoid interoperability problems. If a message is received with an invalid attribute length. known as the Vendor-Specific bit. AVP Length The AVP Length field is three octets. It MAY do this in one of the following ways: 1) If a message is rejected because it contains a Mandatory AVP which is neither defined in the base Diameter standard nor in any of the Diameter Application specifications governing the message in which it appears. or whose value is not supported. per peer. The ’V’ bit MUST NOT be set.RFC 3588 Diameter Based Protocol September 2003 The ’M’ bit MUST be set according to the rules defined for the AVP containing it. possibly inserting additional standard AVPs instead. 
The AVP Length field MUST be set to 16 (20 if the ’V’ bit is enabled).1. The format of the Data field MUST be one of the following base data types or a data type derived from the base data types. The optional four-octet Vendor-ID field contains the IANA assigned "SMI Network Management Private Enterprise Codes" [ASSIGNNO] value. in network byte order. et al.2. Integer64 64 bit signed value. nor with future IETF applications. as managed by the IANA. The format and length of the Data field is determined by the AVP Code and AVP Length fields. OctetString The data contains arbitrary data of variable length. Calhoun. a new version of this RFC must be created.1. Since the absence of the vendor ID field implies that the AVP in question is not vendor specific. Integer32 32 bit signed value. guaranteeing that they will not collide with any other vendor’s vendor-specific AVP(s). AVP Values of this type that are not a multiple of four-octets in length is followed by the necessary padding so that the next AVP (if any) will start on a 32-bit boundary. Any vendor wishing to implement a vendor-specific Diameter AVP MUST use their own Vendor-ID along with their privately managed AVP address space. This field is only present if the respective bit-flag is enabled. Unless otherwise noted. implementations MUST NOT use the zero (0) vendor ID. in network byte order. Vendor-ID The Vendor-ID field is present if the ’V’ bit is set in the AVP Flags field. Standards Track [Page 41] . encoded in network byte order. Basic AVP Data Formats The Data field is zero or more octets and contains information specific to the Attribute. The AVP Length field MUST be set to 12 (16 if the ’V’ bit is enabled). 4.RFC 3588 Diameter Based Protocol September 2003 4. the AVP Length field MUST be set to at least 8 (12 if the ’V’ bit is enabled). A vendor ID value of zero (0) corresponds to the IETF adopted AVP values. Optional Header Elements The AVP Header contains one optional field. 
In the event that a new Basic AVP Data Format is needed. The below AVP Derived Data Formats are commonly used by applications. The first two octets of the Address Calhoun. The 32-bit value is transmitted in network byte order. Address The Address format is derived from the OctetString AVP Base Format. Float32 This represents floating point values of single precision as described by [FLOATPOINT]. The AVP Length field MUST be set to 12 (16 if the ’V’ bit is enabled). et al. Float64 This represents floating point values of double precision as described by [FLOATPOINT]. The AVP Length field MUST be set to 16 (20 if the ’V’ bit is enabled). An application that defines new AVP Derived Data Formats MUST include them in a section entitled "AVP Derived Data Formats".in the order in which they are specified including their headers and padding. Each new definition must be either defined or listed with a reference to the RFC that defines the format. Grouped The Data field is specified as a sequence of AVPs.3. in network byte order. Each of these AVPs follows . The AVP Length field MUST be set to 12 (16 if the ’V’ bit is enabled). applications may define data formats derived from the Basic AVP Data Formats. for example a 32-bit (IPv4) [IPV4] or 128-bit (IPv6) [IPV6] address. including their headers and padding. The 64-bit value is transmitted in network byte order. in network byte order. The AVP Length field MUST be set to 16 (20 if the ’V’ bit is enabled). most significant octet first. Standards Track [Page 42] . It is a discriminated union. Derived AVP Data Formats In addition to using the Basic AVP Data Formats. Unsigned64 64 bit unsigned value. representing. using the same format as the definitions below.RFC 3588 Diameter Based Protocol September 2003 Unsigned32 32 bit unsigned value. The AVP Length field is set to 8 (12 if the ’V’ bit is enabled) plus the total length of all included AVPs. 4. Thus the AVP length field of an AVP of type Grouped is always a multiple of 4. 
This represents the number of seconds since 0h on 1 January 1900 with respect to the Coordinated Universal Time (UTC). thus the length of an UTF8String in octets may be different from the number of characters encoded. For information encoded in 7-bit US-ASCII. MAY be provided. Calhoun. Time The Time format is derived from the OctetString AVP Base Format. encoded as an OctetString using the UTF-8 [UFT8] transformation format described in RFC 2279. the UTF-8 charset is identical to the US-ASCII charset. Standards Track [Page 43] .RFC 3588 Diameter Based Protocol September 2003 AVP represents the AddressType. 7 February 2036 the time value will overflow. The NTP Timestamp format is defined in chapter 3 of [SNTP]. an alternative means of entry and display. such as hexadecimal. The use of control codes SHOULD be avoided. The AddressType is used to discriminate the content and format of the remaining octets. implementations MUST be prepared to encounter any code point from 0x00000001 to 0x7fffffff. On 6h 28m 16s UTC. This is a human readable string represented using the ISO/IEC IS 10646-1 character set. Since additional code points are added by amendments to the 10646 standard from time to time. For code points not directly supported by user interface hardware or software. UTF-8 may require multiple bytes to represent a single character / code point. in the same format as the first four bytes are in the NTP timestamp format. UTF8String The UTF8String format is derived from the OctetString AVP Base Format. The string MUST contain four octets. et al. The use of leading or trailing white space SHOULD be avoided. Byte sequences that do not correspond to the valid encoding of a code point into UTF-8 charset or are outside this range are prohibited. the control code sequence CR LF SHOULD be used. SNTP [SNTP] describes a procedure to extend the time to 2104. Note that the AVP Length field of an UTF8String is measured in octets. not characters. 
When it is necessary to represent a new line. This procedure MUST be supported by all DIAMETER nodes. which contains an Address Family defined in [IANAADFAM]. transport . . DiameterURI The DiameterURI MUST follow the Uniform Resource Identifiers (URI) syntax [URI] rules specified below: "aaa://" FQDN [ port ] [ transport ] [ protocol ] . . = ". = Fully Qualified Host Name = ":" 1*DIGIT One of the ports used to listen for incoming connections. the default SCTP [SCTP] protocol is assumed. If multiple Diameter nodes run on the same host. and used as the only DiameterIdentity for that node. The contents of the string MUST be the FQDN of the Diameter node. the default Diameter port (3868) is assumed. . . Standards Track [Page 44] . If absent. whatever the connection it is sent on. . Transport security used FQDN port . If absent. each Diameter node MUST be assigned a unique DiameterIdentity. et al. Calhoun. UDP MUST NOT be used when the aaa-protocol field is set to diameter. . DiameterIdentity = FQDN DiameterIdentity value is used to uniquely identify a Diameter node for purposes of duplicate connection and routing loop detection. . If a Diameter node can be identified by several FQDNs. No transport security "aaas://" FQDN [ port ] [ transport ] [ protocol ] .RFC 3588 Diameter Based Protocol September 2003 DiameterIdentity The DiameterIdentity format is derived from the OctetString AVP Base Format. a single FQDN should be picked at startup.transport=" transport-protocol One of the transports used to listen for incoming connections. . . It uses the ASCII charset. the packet is dropped if the evaluated was a permit.RFC 3588 Diameter Based Protocol September 2003 transport-protocol = ( "tcp" / "sctp" / "udp" ) protocol = ".example.example.com:6666.com.transport=tcp aaa://host.protocol=diameter aaa://host.example.protocol=" aaa-protocol . et al.protocol=diameter aaa://host. 
(in or out) (possibly masked) (lists or ranges) the appropriate direction are evaluated in order.com:6666. the default AAA protocol . IPFilterRule The IPFilterRule format is derived from the OctetString AVP Base Format. Packets may be filtered based on the following information that is associated with it: Direction Source and destination IP address Protocol Source and destination port TCP flags IP fragment flag IP options ICMP types Rules for the first evaluated last rule a deny. If no rule matches.protocol=radius Enumerated Enumerated is derived from the Integer32 AVP Base Format.transport=udp. If absent. aaa-protocol = ( "diameter" / "radius" / "tacacs+" ) The following are examples of valid Diameter host identities: aaa://host. with matched rule terminating the evaluation. Each packet is once.example.transport=tcp aaa://host. The definition contains a list of valid values and their interpretation and is described in the Diameter application introducing the AVP.com:6666. is diameter.protocol=diameter aaa://host. Standards Track [Page 45] .com.com:1813.example. and passed if the last rule was Calhoun.transport=tcp.example. dir proto src and dst Calhoun. ipno/bits An IP number as above with a mask width of the form 1. The "ip" keyword means any protocol will match. the bits part can be set to zero. The bit width MUST be valid for the IP version and the IP number MUST NOT have bits set beyond the mask.2.Drop packets that match the rule.0 to 1.0.255 will match. et al. Only this exact IP number will match the rule.3. the same IP version must be present in the packet that was used in describing the IP address.0/0 or the IPv6 equivalent.RFC 3588 Diameter Based Protocol September 2003 IPFilterRule filters MUST follow the format: action dir proto from src to dst [options] action permit . "out" is to the terminal.2. causing all other addresses to be matched instead. This does not affect the selection of port numbers. In this case. deny .Allow packets that match the rule. 
The keyword "assigned" is the address or set of addresses assigned to the terminal.3. <address/mask> [ports] The <address/mask> may be specified as: ipno An IPv4 or IPv6 number in dottedquad or canonical IPv6 form. An IP protocol specified by number. a typical first rule is often "deny in ip! assigned" The sense of the match can be inverted by preceding an address with the not modifier (!). To test for a particular IP version.0. Standards Track [Page 46] . The keyword "any" is 0.3. "in" is from the terminal.2. For IPv4. all IP numbers from 1.4/24. For a match to occur. ports[. The absence of a particular option may be denoted with a ’!’. frag may not be used in conjunction with either tcpflags or TCP/UDP port specifications. sack (selective ack). window (tcp window advertisement). See the frag option for details on matching fragmented packets. options: frag Match if the packet is a fragment and this is not the first fragment of the datagram. ts (rfc1323 timestamp) and cc (rfc1644 t/tcp connection count). supported IP options are: The ssrr (strict source route). Fragmented packets that have a non-zero offset (i. optional ports may be specified as: {port/port-port}[..RFC 3588 Diameter Based Protocol September 2003 With the TCP. supported TCP options are: The mss (maximum segment size).]] The ’-’ notation specifies a range of ports (including boundaries). The absence of a particular option may be denoted with a ’!’. Calhoun.e.. ipoptions spec Match if the IP header contains the comma separated list of options specified in spec. established TCP packets only. or ACK bits set. not the first fragment) will never match a rule that has one or more port specifications. lsrr (loose source route). UDP and SCTP protocols. et al.. rr (record packet route) and ts (timestamp).. Standards Track [Page 47] . setup Match packets that have the RST TCP packets only. Match packets that have the SYN bit set but no ACK bit. 
tcpoptions spec Match if the TCP header contains the comma separated list of options specified in spec. RFC 3588 Diameter Based Protocol September 2003 tcpflags spec TCP packets only. The supported TCP flags are: fin. timestamp request (13). time-to-live exceeded (11). An access device that is unable to interpret or apply a permit rule MAY apply a more restrictive rule. and the ipfw. router solicitation (10). information request (15). redirect (5). An access device that is unable to interpret or apply a deny rule MUST terminate the session. destination unreachable (3). There is one kind of packet that the access device MUST always discard. ack and urg. A rule that contains a tcpflags specification can never match a fragmented packet that has a non-zero offset. rst. An access device MAY apply deny rules of its own before the supplied rules. to try to circumvent firewalls. See the frag option for details on matching fragmented packets.c code may provide a useful base for implementations. address mask request (17) and address mask reply (18). psh. et al. syn. IP header bad (12). router advertisement (9). Standards Track [Page 48] . Calhoun. The rule syntax is a modified subset of ipfw(8) from FreeBSD. Match if the TCP header contains the comma separated list of flags specified in spec. The supported ICMP types are: echo reply (0). source quench (4). but it only has one use. Match if the ICMP type is in the list types. information reply (16). This is a valid packet. The list may be specified as any combination of ranges or individual types separated by commas. The absence of a particular flag may be denoted with a ’!’. for example to protect the access device owner’s infrastructure. timestamp reply (14). echo request (8). icmptypes types ICMP packets only. that is an IP fragment with a fragment offset of one. Both the numeric values and the symbolic values listed below can be used. Meter traffic. proto The format is as described under IPFilterRule. 
QoSFilterRule filters MUST follow the format: action dir proto from src to dst [options] tag . the packet is treated as best effort. as defined in Section 4.4. to nest them. The metering options MUST be included. If no rule matches. AVPs within an AVP of type Grouped have the same padding requirements as non-Grouped AVPs.’ This implies that the Data field is actually a sequence of AVPs. Each packet is evaluated once. Standards Track [Page 49] . It uses the ASCII charset. The DSCP option MUST be included.RFC 3588 Diameter Based Protocol September 2003 QoSFilterRule The QosFilterRule format is derived from the OctetString AVP Base Format. with the first matched rule terminating the evaluation. et al. 4. that is.. Calhoun. meter dir The format is as described under IPFilterRule. Grouped AVP Values The Diameter protocol allows AVP values of type ’Grouped. An access device that is unable to interpret or apply a QoS rule SHOULD NOT terminate the session. . src and dst The format is as described under IPFilterRule.Mark packet with a specific DSCP [DIFFSERV]. It is possible to include an AVP with a Grouped type within a Grouped type. Standards Track [Page 50] . If absent. Further.com"." = 1*DIGIT . 893. et al.com:33054. Here there are two: Session-Id = "grump. Also note that AVPs may be present in the Grouped AVP value which the receiver cannot interpret (here. The encoding example illustrates how padding is used and how length fields are calculated.23561.example.RFC 3588 Diameter Based Protocol September 2003 One or more Session-Ids must follow. Calhoun. the Recover-Policy and Futuristic-Acct-Record AVPs).except by Diameter implementations which support the same set of AVPs.com:33041.23432.2358.example.0AF3B81" Session-Id = "grump. nor (likely) at the time when the example instance of this AVP is interpreted . Standards Track [Page 51] . 
+-------+-------+-------+-------+-------+-------+-------+-------+ | ’A’ | ’F’ | ’3’ | ’B’ | ’8’ | ’1’ |Padding|Padding| +-------+-------+-------+-------+-------+-------+-------+-------+ | Session-Id AVP Header (AVP Code = 263). Length = 223 | +-------+-------+-------+-------+-------+-------+-------+-------+ | 0x21 | 0x63 | 0xbc | 0x1d | 0x0a | 0xd8 | 0x23 | 0x71 | +-------+-------+-------+-------+-------+-------+-------+-------+ . Length = 137| +-------+-------+-------+-------+-------+-------+-------+-------+ | 0xfe | 0x19 | 0xda | 0x58 | 0x02 | 0xac | 0xd9 | 0x8b | +-------+-------+-------+-------+-------+-------+-------+-------+ . . et al. Length = 19 | +-------+-------+-------+-------+-------+-------+-------+-------+ | ’e’ | ’x’ | ’a’ | ’m’ | ’p’ | ’l’ | ’e’ | ’. . . Length = 50 | ’g’ | ’r’ | ’u’ | ’m’ | +-------+-------+-------+-------+-------+-------+-------+-------+ . Length = 51 | +-------+-------+-------+-------+-------+-------+-------+-------+ | ’g’ | ’r’ | ’u’ | ’m’ | ’p’ | ’. +-------+-------+-------+-------+-------+-------+-------+-------+ | ’0’ | ’A’ | ’F’ | ’3’ | ’B’ | ’8’ | ’2’ |Padding| +-------+-------+-------+-------+-------+-------+-------+-------+ | Recovery-Policy Header (AVP Code = 8341). .’ | +-------+-------+-------+-------+-------+-------+-------+-------+ | ’c’ | ’o’ | ’m’ |Padding| Session-Id AVP Header | +-------+-------+-------+-------+-------+-------+-------+-------+ | (AVP Code = 263). .RFC 3588 Diameter Based Protocol September 2003 This AVP would be encoded as follows: 0 1 2 3 4 5 6 7 +-------+-------+-------+-------+-------+-------+-------+-------+ | Example AVP Header (AVP Code = 999999). Length = 468 | +-------+-------+-------+-------+-------+-------+-------+-------+ | Origin-Host AVP Header (AVP Code = 264). 
+-------+-------+-------+-------+-------+-------+-------+-------+ | 0x2f | 0xd7 | 0x96 | 0x6b | 0x8c | 0x7f | 0x92 |Padding| +-------+-------+-------+-------+-------+-------+-------+-------+ | Futuristic-Acct-Record Header (AVP Code = 15930). . . Standards Track [Page 52] . . +-------+-------+-------+-------+-------+-------+-------+-------+ | 0x41 |Padding|Padding|Padding| +-------+-------+-------+-------+ 0 8 16 24 32 64 72 80 104 112 120 320 328 336 464 Calhoun.’ | ’e’ | ’x’ | +-------+-------+-------+-------+-------+-------+-------+-------+ .. Standards Track [Page 53] .RFC 3588 Diameter Based Protocol September 2003 4. a "P" in the "MAY" column means that if a message containing that AVP is to be sent via a Diameter agent (proxy. types. Similarly. Diameter Base Protocol AVPs The following table describes the Diameter AVPs defined in the base protocol. Due to space constraints. "Encr" (Encryption) means that if a message containing that AVP is to be sent via a Diameter agent (proxy. et al. Calhoun. possible flag values and whether the AVP MAY be encrypted.5. for the originator of a Diameter message.. their AVP Code values. the short form DiamIdent is used to represent DiameterIdentity. 21 Time | M | P | | V | N | Experimental297 7.6 Grouped | M | P | | V | N | Result | | | | | | -----------------------------------------|----+-----+----+-----|----| Calhoun.3 Unsigned32 | M | P | | V | Y | Record-Number | | | | | | Accounting480 9.1 Enumerated | M | P | | V | Y | Record-Type | | | | | | Accounting44 9.8 Unsigned32 | M | P | | V | N | Application-Id | | | | | | Auth-Request274 8.8. 
Standards Track [Page 54] .294 7.9 Unsigned32 | M | P | | V | N | Application-Id | | | | | | Auth258 6.2 Unsigned32 | M | P | | V | Y | Interim-Interval | | | | | | Accounting483 9.8.8.10 Unsigned32 | M | P | | V | N | Period | | | | | | Auth-Session277 8.M | N | Host | | | | | | Event-Timestamp 55 8.3 Enumerated | M | P | | V | N | E2E-Sequence AVP 300 6.8.20 OctetString| M | P | | V | Y | Destination-Host 293 6.6 DiamIdent | M | P | | V | N | Realm | | | | | | Disconnect-Cause 273 5.15 Grouped | M | P | | V | Y | Error-Message 281 7. et al.9 Unsigned32 | M | P | | V | N | Lifetime | | | | | | Auth-Grace276 8.5 DiamIdent | M | P | | V | N | Destination283 6.4 DiamIdent | | P | | V.8.11 Enumerated | M | P | | V | N | State | | | | | | Re-Auth-Request.285 8.3 UTF8String | | P | | V.7 Enumerated | M | P | | V | N | Type | | | | | | Authorization291 8.8.4.5 UTF8String | M | P | | V | Y | Multi-Session-Id | | | | | | Accounting485 9.8.12 Enumerated | M | P | | V | N | Type | | | | | | Class 25 8.M | N | Error-Reporting.7 Enumerated | M | P | | V | Y | Realtime-Required | | | | | | Acct50 9.4 OctetString| M | P | | V | Y | Session-Id | | | | | | Accounting287 9.RFC 3588 Diameter Based Protocol September 2003 +---------------------+ | AVP Flag rules | |----+-----+----+-----|----+ AVP Section | | |SHLD| MUST| | Attribute Name Code Defined Data Type |MUST| MAY | NOT| NOT|Encr| -----------------------------------------|----+-----+----+-----|----| Acct85 9.6 Unsigned64 | M | P | | V | Y | Sub-Session-Id | | | | | | Acct259 6. 
7.1 DiamIdent | M | | | P.3 DiamIdent | M | | | P.17 Unsigned32 | M | P | | V | Y | Session-Server.4 OctetString| M | | | P.7 UTF8String | | | |P.260 6.5 Grouped | M | P | | V | N | Firmware267 5.RFC 3588 Diameter Based Protocol September 2003 +---------------------+ | AVP Flag rules | |----+-----+----+-----|----+ AVP Section | | |SHLD| MUST|MAY | Attribute Name Code Defined Data Type |MUST| MAY | NOT| NOT|Encr| -----------------------------------------|----+-----+----+-----|----| Experimental298 7.16 Unsigned32 | M | P | | V | N | Product-Name 269 5.7.8 UTF8String | M | P | | V | Y | Session-Timeout 27 8.12 DiamURI | M | P | | V | N | Redirect-Host261 6.M| N | Revision | | | | | | Host-IP-Address 257 5.1 Unsigned32 | M | P | | V | N | Route-Record 282 6.13 Enumerated | M | P | | V | N | Usage | | | | | | Redirect-Max262 6.6 Unsigned32 | M | P | | V | N | Vendor-Id | | | | | | Termination295 8.V.2 Grouped | M | | | P.V | N | Proxy-State 33 6.V | N | Proxy-Info 284 6.4 Unsigned32 | | | |P.13 Unsigned32 | M | P | | V | N | Session-Binding 270 8.3.V.14 Unsigned32 | M | P | | V | N | Cache-Time | | | | | | Result-Code 268 7.18 Enumerated | M | P | | V | Y | Failover | | | | | | Supported265 5.4 DiamIdent | M | P | | V | N | Origin-State-Id 278 8.14 UTF8String | M | P | | V | Y | Vendor-Id 266 5. et al.7 Unsigned32 | M | P | | V | N | Result-Code | | | | | | Failed-AVP 279 7.271 8.11 Grouped | M | P | | V | N | Application-Id | | | | | | -----------------------------------------|----+-----+----+-----|----| Calhoun.M| N | Proxy-Host 280 6.3.5 Address | M | P | | V | N | Inband-Security | M | P | | V | N | -Id 299 6.V | N | Redirect-Host 292 6.3.7.19 Unsigned32 | M | P | | V | Y | Time-Out | | | | | | Origin-Host 264 6.V | N | Session-Id 263 8.3. 
Standards Track [Page 55] .15 Enumerated | M | P | | V | N | Cause | | | | | | User-Name 1 8.3.3 DiamIdent | M | P | | V | N | Origin-Realm 296 6.3 Unsigned32 | M | P | | V | N | Vendor-Specific.10 Unsigned32 | | | | | | Multi-Round272 8.7. all messages for a realm are sent to the primary peer. Peer Connections Although a Diameter node may have many possible peers that it is able to communicate with.RFC 3588 Diameter Based Protocol September 2003 5. but failover procedures are invoked. which could occur for various reasons. known as the primary and secondary peers. When an active peer is moved to this mode. a node MAY have additional connections. and assume the role of either primary or secondary. When a peer is deemed suspect.1. if it is deemed necessary. Note that a given peer MAY act as a primary for a given realm. 2. At a minimum. There are two ways that a peer is removed from the suspect peer list: 1. any pending requests are sent to the secondary peer.2. The peer is moved to the closed state. Standards Track [Page 56] . Diameter Peer Discovery Allowing for dynamic Diameter agent discovery will make it possible for simpler and more robust deployment of Diameter services. including not receiving a DWA within an allotted timeframe. causing the transport connection to be shutdown. These are based Calhoun. et al. it may not be economical to have an established connection to all of them. The peer is no longer reachable. Three watchdog messages are exchanged with accepted round trip times. However. an alternate peer SHOULD replace the deleted peer. 5. Typically. but in the event that failover procedures are invoked. 5. a Diameter node SHOULD have an established connection with two peers per realm. Of course. while acting as a secondary for another realm. Diameter Peers This section describes how Diameter nodes establish connections and communicate with peers. In the event the peer being removed is either the primary or secondary. 
the following mechanisms are described. additional connections SHOULD be established to ensure that the necessary number of active connections exists. and the connection to the peer is considered stabilized. In order to promote interoperable implementations of Diameter peer discovery. no new requests should be forwarded to the peer. implementations are free to load balance requests between a set of peers. The first is when a Diameter client needs to discover a first-hop Diameter agent. This specification defines D2T for TCP and D2S for SCTP. The Diameter implementation has to know in advance which realm to look for a Diameter agent in. The Diameter service template [TEMPLATE] is included in Appendix A. there will be multiple NAPTR records. It is recommended that SLPv2 security be deployed (this requires distributing keys to SLPv2 agents).RFC 3588 Diameter Based Protocol September 2003 on existing IETF standards. while the latter two options (SRVLOC and DNS) MAY be supported. 3. Standards Track [Page 57] . The second case is when a Diameter agent needs to discover another agent . If the server supports multiple transport protocols. for example. the client Calhoun. The Diameter implementation performs a NAPTR query for a server in a particular realm. each with a different service value. 2. where x is a letter that corresponds to a transport protocol supported by the domain. We also establish an IANA registry for NAPTR service name to transport protocol mappings.1 The services relevant for the task of transport protocol selection are those with NAPTR service fields with values "AAA+D2x". There are two cases where Diameter peer discovery may be performed. The first option (manual configuration) MUST be supported by all DIAMETER nodes. SLPv2 is discussed further in Appendix A. This could be deduced. to the SRV record for contacting a server with the specific transport protocol in the NAPTR services field. As per RFC 2915 [NAPTR]. This is discussed further in Appendix A. 
The Diameter implementation consults its list of static (manually) configured Diameter agent locations. These NAPTR records provide a mapping from a domain. The Diameter implementation uses SLPv2 [SLP] to discover Diameter services. et al. the following ’search order’ is recommended: 1. These will be used if they exist and respond. from the ’realm’ in a NAI that a Diameter implementation needed to perform a Diameter operation on. which is the SRV record for that particular transport protocol. SLPv2 security SHOULD be used (requiring distribution of keys to SLPv2 agents) in order to ensure that discovered peers are authorized for their roles. In both cases. 3. The resource record will contain an empty regular expression and a replacement value.for further handling of a Diameter operation. realm. A dynamically discovered peer causes an entry in the Peer Table (see Section 2. the domain name in the SRV query and the domain name in the target in the SRV record MUST both be valid based on the same site certificate. a web server may have obtained a valid TLS certificate.6) to be created. Note that entries created via DNS MUST expire (or be refreshed) within the DNS TTL. for values of X that indicate transport protocols supported by the client. the requester queries for those address records for the destination address. The NAPTR processing as described in RFC 2915 will result in discovery of the most preferred transport protocol of the server that is supported by the client. Alternatively this can be achieved by definition of OIDs within TLS or IKE certificates so as to signify Diameter Server authorization. or validation of DNS RRs via DNSSEC is not sufficient to conclude this.2 A client MUST discard any service fields that identify a resolution service whose value is not "D2X"._tcp’._sctp’. Standards Track [Page 58] .realm or ’_diameter. several rules are defined. If the DNS server returns no address records. If no NAPTR records are found. 
AAAA RR’s or other similar records. Authentication via IKE or TLS. and secured RRs may be included in the DNS. ’_diameter. the domain name in the query and the domain name in the replacement field MUST both be valid based on the site certificate handed out by the server in the TLS or IKE exchange. Similarly. as well as an SRV record for the server. the requestor gives up. by configuration of a Diameter Server CA. an attacker could modify the DNS records to contain replacement values in a different domain. Authorization can be achieved for example. or the result of an attack Also. For example. 3. 4. For the purposes of this specification. Otherwise. The domain suffixes in the NAPTR replacement field SHOULD match the domain of the original query. Address records include A RR’s. but this does not imply that it is authorized to act as a Diameter Server. If a peer is discovered Calhoun.RFC 3588 Diameter Based Protocol September 2003 discards any records whose services fields are not applicable. If the server is using a site certificate. and the client could not validate that this was the desired behavior. the Diameter Peer MUST check to make sure that the discovered peers are authorized to act in its role. et al. chosen according to the requestor’s network protocol capabilities. Similarly.4) MUST be interpreted as having common applications with the peer. they MUST exchange the Capabilities Exchange messages. commands to its peers that have advertised application that defines the command. CERs received from unknown peers MAY be silently discarded. Note that receiving a CER or CEA from a peer advertising itself as a Relay (see Section 2. and SHOULD disconnect the transport layer connection.3. Standards Track [Page 59] . the lifetime of the peer entry is equal to the lifetime of the transport connection. routing table entry (see Section 2. the transport connection is closed. The CER and CEA messages MUST NOT be proxied. Since the CER/CEA messages cannot be proxied. 
5.3. Capabilities Exchange

When two Diameter peers establish a transport connection, they MUST
exchange the Capabilities Exchange messages, as specified in the peer
state machine (see Section 5.6). This message allows the discovery
of a peer's identity and its capabilities (protocol version number,
supported Diameter applications, security mechanisms, etc.)

The receiver only issues commands to its peers that have advertised
support for the Diameter application that defines the command. A
Diameter node MUST cache the supported applications in order to
ensure that unrecognized commands and/or AVPs are not unnecessarily
sent to a peer. Note that receiving a CER or CEA from a peer
advertising itself as a Relay (see Section 2.4) MUST be interpreted
as having common applications with the peer.

A receiver of a Capabilities-Exchange-Req (CER) message that does not
have any security mechanisms in common with the sender MUST return a
Capabilities-Exchange-Answer (CEA) with the Result-Code AVP set to
DIAMETER_NO_COMMON_SECURITY, and SHOULD disconnect the transport
layer connection.

CERs received from unknown peers MAY be silently discarded, or a CEA
MAY be issued with the Result-Code AVP set to DIAMETER_UNKNOWN_PEER.
In both cases, the transport connection is closed. If the local
policy permits receiving CERs from unknown hosts, a successful CEA
MAY be returned. If a CER from an unknown peer is answered with a
successful CEA, the lifetime of the peer entry is equal to the
lifetime of the transport connection. In case of a transport
failure, all the pending transactions destined to the unknown peer
can be discarded.

The CER and CEA messages MUST NOT be proxied, redirected or relayed.

Since the CER/CEA messages cannot be proxied, it is still possible
that an upstream agent receives a message for which it has no
available peers to handle the application that corresponds to the
Command-Code. In such instances, the 'E' bit is set in the answer
message (see Section 7.2) with the Result-Code AVP set to
DIAMETER_UNABLE_TO_DELIVER to inform the downstream to take action
(e.g., re-routing request to an alternate peer).

With the exception of the Capabilities-Exchange-Request message, a
message of type Request that includes the Auth-Application-Id or
Acct-Application-Id AVPs, or a message with an application-specific
command code, MAY only be forwarded to a host that has explicitly
advertised support for the application (or has advertised the Relay
Application Identifier).
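The CEA result selection described above can be sketched as a simple decision function. This is an illustrative sketch, not normative pseudocode from the RFC: the function name and parameters are invented, the Result-Code constants use the standard numeric values, and the Relay application identifier is the all-ones value.

```python
# Illustrative sketch of CEA Result-Code selection (Section 5.3).
# Names and parameters are invented for this example.

DIAMETER_SUCCESS = 2001
DIAMETER_UNKNOWN_PEER = 3010
DIAMETER_NO_COMMON_APPLICATION = 5010
DIAMETER_NO_COMMON_SECURITY = 5017
RELAY_APPLICATION_ID = 0xFFFFFFFF

def cea_result(local_apps, local_security, cer_apps, cer_security,
               peer_known, accept_unknown):
    # Unknown peers MAY be answered with DIAMETER_UNKNOWN_PEER, unless
    # local policy permits CERs from unknown hosts.
    if not peer_known and not accept_unknown:
        return DIAMETER_UNKNOWN_PEER
    # A peer advertising the Relay application is interpreted as
    # having applications in common.
    if (RELAY_APPLICATION_ID not in cer_apps
            and not set(local_apps) & set(cer_apps)):
        return DIAMETER_NO_COMMON_APPLICATION
    if not set(local_security) & set(cer_security):
        return DIAMETER_NO_COMMON_SECURITY
    return DIAMETER_SUCCESS
```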
5.3.1. Capabilities-Exchange-Request

The Capabilities-Exchange-Request (CER), indicated by the
Command-Code set to 257 and the Command Flags' 'R' bit set, is sent
to exchange local capabilities.

When Diameter is run over SCTP [SCTP], which allows for connections
to span multiple interfaces and multiple IP addresses, the
Capabilities-Exchange-Request message MUST contain one
Host-IP-Address AVP for each potential IP address that MAY be
locally used when transmitting Diameter messages.

Message Format

   <CER> ::= < Diameter Header: 257, REQ >
             { Origin-Host }
             { Origin-Realm }
          1* { Host-IP-Address }
             { Vendor-Id }
             { Product-Name }
             [ Origin-State-Id ]
           * [ Supported-Vendor-Id ]
           * [ Auth-Application-Id ]
           * [ Inband-Security-Id ]
           * [ Acct-Application-Id ]
           * [ Vendor-Specific-Application-Id ]
             [ Firmware-Revision ]
           * [ AVP ]

5.3.2. Capabilities-Exchange-Answer

The Capabilities-Exchange-Answer (CEA), indicated by the Command-Code
set to 257 and the Command Flags' 'R' bit cleared, is sent in
response to a CER message.

When Diameter is run over SCTP [SCTP], which allows connections to
span multiple interfaces, hence, multiple IP addresses, the
Capabilities-Exchange-Answer message MUST contain one
Host-IP-Address AVP for each potential IP address that MAY be
locally used when transmitting Diameter messages.

Message Format

   <CEA> ::= < Diameter Header: 257 >
             { Result-Code }
             { Origin-Host }
             { Origin-Realm }
          1* { Host-IP-Address }
             { Vendor-Id }
             { Product-Name }
             [ Origin-State-Id ]
             [ Error-Message ]
           * [ Failed-AVP ]
           * [ Supported-Vendor-Id ]
           * [ Auth-Application-Id ]
           * [ Inband-Security-Id ]
           * [ Acct-Application-Id ]
           * [ Vendor-Specific-Application-Id ]
             [ Firmware-Revision ]
           * [ AVP ]

5.3.3. Vendor-Id AVP

In combination with the Supported-Vendor-Id AVP (Section 5.3.6), the
Vendor-Id MAY be used in order to know which vendor specific
attributes may be sent to the peer. It is also envisioned that the
combination of the Vendor-Id, Product-Name (Section 5.3.7) and the
Firmware-Revision (Section 5.3.4) AVPs MAY provide very useful
debugging information.

A Vendor-Id value of zero in the CER or CEA messages is reserved and
indicates that this field is ignored.
5.3.4. Firmware-Revision AVP

The Firmware-Revision AVP (AVP Code 267) is of type Unsigned32 and is
used to inform a Diameter peer of the firmware revision of the
issuing device.

For devices that do not have a firmware revision (general purpose
computers running Diameter software modules, for instance), the
revision of the Diameter software module may be reported instead.

5.3.5. Host-IP-Address AVP

The Host-IP-Address AVP (AVP Code 257) is of type Address and is used
to inform a Diameter peer of the sender's IP address. All source
addresses that a Diameter node expects to use with SCTP [SCTP] MUST
be advertised in the CER and CEA messages by including a
Host-IP-Address AVP for each address. This AVP MUST ONLY be used in
the CER and CEA messages.

5.3.7. Product-Name AVP

The Product-Name AVP (AVP Code 269) is of type UTF8String, and
contains the vendor assigned name for the product. The Product-Name
AVP SHOULD remain constant across firmware revisions for the same
product.

5.4. Disconnecting Peer Connections

When a Diameter node disconnects one of its transport connections,
its peer cannot know the reason for the disconnect, and will most
likely assume that a connectivity problem occurred, or that the peer
has rebooted. In these cases, the peer may periodically attempt to
reconnect, as stated in Section 2.1. In the event that the
disconnect was a result of either a shortage of internal resources,
or simply that the node in question has no intentions of forwarding
any Diameter messages to the peer in the foreseeable future, a
periodic connection request would not be welcomed. The
Disconnection-Reason AVP contains the reason the Diameter node issued
the Disconnect-Peer-Request message.

The Disconnect-Peer-Request message is used by a Diameter node to
inform its peer of its intent to disconnect the transport layer, and
that the peer shouldn't reconnect unless it has a valid reason to do
so (e.g., message to be forwarded). Upon receipt of the message, the
Disconnect-Peer-Answer is returned, which SHOULD contain an error if
messages have recently been forwarded, and are likely in flight,
which would otherwise cause a race condition.

The receiver of the Disconnect-Peer-Answer initiates the transport
disconnect.

5.4.1. Disconnect-Peer-Request

The Disconnect-Peer-Request (DPR), indicated by the Command-Code set
to 282 and the Command Flags' 'R' bit set, is sent to a peer to
inform its intentions to shutdown the transport connection. Upon
detection of a transport failure, this message MUST NOT be sent to an
alternate peer.

Message Format

   <DPR> ::= < Diameter Header: 282, REQ >
             { Origin-Host }
             { Origin-Realm }
             { Disconnect-Cause }

5.4.2. Disconnect-Peer-Answer

The Disconnect-Peer-Answer (DPA), indicated by the Command-Code set
to 282 and the Command Flags' 'R' bit cleared, is sent as a response
to the Disconnect-Peer-Request message. Upon receipt of this
message, the transport connection is shutdown.

Message Format

   <DPA> ::= < Diameter Header: 282 >
             { Result-Code }
             { Origin-Host }
             { Origin-Realm }
             [ Error-Message ]
           * [ Failed-AVP ]

5.4.3. Disconnect-Cause AVP

The Disconnect-Cause AVP (AVP Code 273) is of type Enumerated. A
Diameter node MUST include this AVP in the Disconnect-Peer-Request
message to inform the peer of the reason for its intention to
shutdown the transport connection. The following values are
supported:
   REBOOTING                         0
      A scheduled reboot is imminent.

   BUSY                              1
      The peer's internal resources are constrained, and it has
      determined that the transport connection needs to be closed.

   DO_NOT_WANT_TO_TALK_TO_YOU        2
      The peer has determined that it does not see a need for the
      transport connection to exist, since it does not expect any
      messages to be exchanged in the near future.

5.5. Transport Failure Detection

Given the nature of the Diameter protocol, it is recommended that
transport failures be detected as soon as possible. Detecting such
failures will minimize the occurrence of messages sent to unavailable
agents, resulting in unnecessary delays, and will provide better
failover performance. The Device-Watchdog-Request and
Device-Watchdog-Answer messages, defined in this section, are used to
proactively detect transport failures.

5.5.1. Device-Watchdog-Request

The Device-Watchdog-Request (DWR), indicated by the Command-Code set
to 280 and the Command Flags' 'R' bit set, is sent to a peer when no
traffic has been exchanged between two peers (see Section 5.5.3).
Upon detection of a transport failure, this message MUST NOT be sent
to an alternate peer.

Message Format

   <DWR> ::= < Diameter Header: 280, REQ >
             { Origin-Host }
             { Origin-Realm }
             [ Origin-State-Id ]

5.5.2. Device-Watchdog-Answer

The Device-Watchdog-Answer (DWA), indicated by the Command-Code set
to 280 and the Command Flags' 'R' bit cleared, is sent as a response
to the Device-Watchdog-Request message.
Message Format

   <DWA> ::= < Diameter Header: 280 >
             { Result-Code }
             { Origin-Host }
             { Origin-Realm }
             [ Error-Message ]
           * [ Failed-AVP ]
             [ Origin-State-Id ]

5.5.3. Transport Failure Algorithm

The transport failure algorithm is defined in [AAATRANS]. All
Diameter implementations MUST support the algorithm defined in that
specification in order to be compliant to the Diameter base protocol.

5.5.4. Failover and Failback Procedures

In the event that a transport failure is detected with a peer, it is
necessary for all pending request messages to be forwarded to an
alternate agent, if possible. This is commonly referred to as
failover.

In order for a Diameter node to perform failover procedures, it is
necessary for the node to maintain a pending message queue for a
given peer. When an answer message is received, the corresponding
request is removed from the queue. The Hop-by-Hop Identifier field
is used to match the answer with the queued request.

When a transport failure is detected, if possible all messages in the
queue are sent to an alternate agent with the T flag set. On booting
a Diameter client or agent, the T flag is also set on any records
still remaining to be transmitted in non-volatile storage. An
example of a case where it is not possible to forward the message to
an alternate server is when the message has a fixed destination, and
the unavailable peer is the message's final destination (see
Destination-Host AVP). Such an error requires that the agent return
an answer message with the 'E' bit set and the Result-Code AVP set to
DIAMETER_UNABLE_TO_DELIVER.

It is important to note that multiple identical requests or answers
MAY be received as a result of a failover. The End-to-End Identifier
field in the Diameter header along with the Origin-Host AVP MUST be
used to identify duplicate messages.
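The watchdog-driven detection (Section 5.5) and the failover behaviour above can be sketched together. This is a minimal illustration with invented class and method names; the real timer rules are defined in [AAATRANS], and here the watchdog interval and the T-flag marking are simplified to show the pattern only.

```python
# Sketch (invented names) of watchdog probing and the per-peer pending
# message queue used for failover (Sections 5.5 and 5.5.4).

class Watchdog:
    def __init__(self, tw_seconds=30.0):
        self.tw = tw_seconds
        self.last_activity = 0.0
        self.outstanding_dwr = False

    def on_traffic(self, now):
        # Any received message is evidence the peer is alive.
        self.last_activity = now
        self.outstanding_dwr = False

    def tick(self, now):
        """'send-dwr' when the link is idle, 'failover' when a DWR
        went unanswered past the watchdog interval."""
        if now - self.last_activity < self.tw:
            return None
        if self.outstanding_dwr:
            return "failover"
        self.outstanding_dwr = True
        return "send-dwr"

class PendingQueue:
    def __init__(self):
        self.pending = {}   # Hop-by-Hop Identifier -> request
        self.seen = set()   # (End-to-End Identifier, Origin-Host)

    def send(self, hop_by_hop_id, request):
        self.pending[hop_by_hop_id] = request

    def on_answer(self, hop_by_hop_id):
        # An answer removes the matching request from the queue.
        return self.pending.pop(hop_by_hop_id, None)

    def failover(self):
        """All queued requests, marked with the T (retransmit) flag,
        ready to be re-sent to an alternate agent."""
        return [dict(req, t_flag=True) for req in self.pending.values()]

    def is_duplicate(self, end_to_end_id, origin_host):
        # Duplicates are identified by End-to-End Id plus Origin-Host.
        key = (end_to_end_id, origin_host)
        if key in self.seen:
            return True
        self.seen.add(key)
        return False
```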
As described in Section 2.1, a connection request should be
periodically attempted with the failed peer in order to re-establish
the transport connection. Once a connection has been successfully
established, messages can once again be forwarded to the peer. This
is commonly referred to as failback.

5.6. Peer State Machine

This section contains a finite state machine that MUST be observed by
all Diameter implementations. Each Diameter node MUST follow the
state machine described below when communicating with each peer.
Multiple actions are separated by commas, and may continue on
succeeding lines, as space requires. Similarly, state and next state
may also span multiple lines, as space requires.

This state machine is closely coupled with the state machine
described in [AAATRANS], which is used to open, close, failover,
probe, and reopen transport connections. Note in particular that
[AAATRANS] requires the use of watchdog messages to probe
connections. For Diameter, DWR and DWA messages are to be used.

I- is used to represent the initiator (connecting) connection, while
R- is used to represent the responder (listening) connection. The
lack of a prefix indicates that the event or action is the same
regardless of the connection on which the event occurred.

The stable states that a state machine may be in are Closed, I-Open
and R-Open; all other states are intermediate. Note that I-Open and
R-Open are equivalent except for whether the initiator or responder
transport connection is used for communication.

In the case of an election, one of the two connections will shut
down. The responder connection will survive if the Origin-Host of
the local Diameter entity is higher than that of the peer; the
initiator connection will survive if the peer's Origin-Host is
higher. Note that the results of an election on one peer are
guaranteed to be the inverse of the results on the other.

For TLS usage, a TLS handshake will begin when both ends are in the
open state. If the TLS handshake is successful, all further messages
will be sent via TLS. If the handshake fails, both ends move to the
closed state.

The state machine constrains only the behavior of a Diameter
implementation as seen by Diameter peers through events on the wire.
Any implementation that produces equivalent results is considered
compliant.
   state            event              action             next state
   -----------------------------------------------------------------
   Closed           Start              I-Snd-Conn-Req     Wait-Conn-Ack
                    R-Conn-CER         R-Accept,          R-Open
                                       Process-CER,
                                       R-Snd-CEA

   Wait-Conn-Ack    I-Rcv-Conn-Ack     I-Snd-CER          Wait-I-CEA
                    I-Rcv-Conn-Nack    Cleanup            Closed
                    R-Conn-CER         R-Accept,          Wait-Conn-Ack/
                                       Process-CER        Elect
                    Timeout            Error              Closed

   Wait-I-CEA       I-Rcv-CEA          Process-CEA        I-Open
                    R-Conn-CER         R-Accept,          Wait-Returns
                                       Process-CER,
                                       Elect
                    I-Peer-Disc        I-Disc             Closed
                    I-Rcv-Non-CEA      Error              Closed
                    Timeout            Error              Closed

   Wait-Conn-Ack/   I-Rcv-Conn-Ack     I-Snd-CER,         Wait-Returns
   Elect                               Elect
                    I-Rcv-Conn-Nack    R-Snd-CEA          R-Open
                    R-Peer-Disc        R-Disc             Wait-Conn-Ack
                    R-Conn-CER         R-Reject           Wait-Conn-Ack/
                                                          Elect
                    Timeout            Error              Closed

   Wait-Returns     Win-Election       I-Disc,            R-Open
                                       R-Snd-CEA
                    I-Peer-Disc        I-Disc,            R-Open
                                       R-Snd-CEA
                    I-Rcv-CEA          R-Disc             I-Open
                    R-Peer-Disc        R-Disc             Wait-I-CEA
                    R-Conn-CER         R-Reject           Wait-Returns
                    Timeout            Error              Closed

   R-Open           Send-Message       R-Snd-Message      R-Open
                    R-Rcv-Message      Process            R-Open
                    R-Rcv-DWR          Process-DWR,       R-Open
                                       R-Snd-DWA
                    R-Rcv-DWA          Process-DWA        R-Open
                    R-Conn-CER         R-Reject           R-Open
                    Stop               R-Snd-DPR          Closing
                    R-Rcv-DPR          R-Snd-DPA,         Closed
                                       R-Disc
                    R-Peer-Disc        R-Disc             Closed

   I-Open           Send-Message       I-Snd-Message      I-Open
                    I-Rcv-Message      Process            I-Open
                    I-Rcv-DWR          Process-DWR,       I-Open
                                       I-Snd-DWA
                    I-Rcv-DWA          Process-DWA        I-Open
                    R-Conn-CER         R-Reject           I-Open
                    Stop               I-Snd-DPR          Closing
                    I-Rcv-DPR          I-Snd-DPA,         Closed
                                       I-Disc
                    I-Peer-Disc        I-Disc             Closed

   Closing          I-Rcv-DPA          I-Disc             Closed
                    R-Rcv-DPA          R-Disc             Closed
                    Timeout            Error              Closed
                    I-Peer-Disc        I-Disc             Closed
                    R-Peer-Disc        R-Disc             Closed
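A natural implementation of the table above is table-driven. The sketch below encodes only a handful of transitions to show the pattern; it is illustrative, not a complete or normative implementation of the state machine.

```python
# Table-driven sketch of a subset of the peer state machine
# (Section 5.6).  Keys are (state, event); values are (actions, next).

TRANSITIONS = {
    ("Closed",        "Start"):
        (["I-Snd-Conn-Req"], "Wait-Conn-Ack"),
    ("Closed",        "R-Conn-CER"):
        (["R-Accept", "Process-CER", "R-Snd-CEA"], "R-Open"),
    ("Wait-Conn-Ack", "I-Rcv-Conn-Ack"):
        (["I-Snd-CER"], "Wait-I-CEA"),
    ("Wait-I-CEA",    "I-Rcv-CEA"):
        (["Process-CEA"], "I-Open"),
    ("I-Open",        "Stop"):
        (["I-Snd-DPR"], "Closing"),
    ("Closing",       "I-Rcv-DPA"):
        (["I-Disc"], "Closed"),
}

def step(state, event):
    """Look up the actions to perform and the next state."""
    actions, next_state = TRANSITIONS[(state, event)]
    return actions, next_state
```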
5.6.1. Incoming connections

When a connection request is received from a Diameter peer, it is
not, in the general case, possible to know the identity of that peer
until a CER is received from it. This is because host and port
determine the identity of a Diameter peer, and the source port of an
incoming connection is arbitrary. Upon receipt of CER, the identity
of the connecting peer can be uniquely determined from Origin-Host.

For this reason, a Diameter peer must employ logic separate from the
state machine to receive connection requests, accept them, and await
CER. Once CER arrives on a new connection, the Origin-Host that
identifies the peer is used to locate the state machine associated
with that peer, and the new connection and CER are passed to the
state machine as an R-Conn-CER event.

The logic that handles incoming connections SHOULD close and discard
the connection if any message other than CER arrives, or if an
implementation-defined timeout occurs prior to receipt of CER.

Because handling of incoming connections up to and including receipt
of CER requires logic, separate from that of any individual state
machine associated with a particular peer, it is described separately
in this section rather than in the state machine above.

5.6.2. Events

Transitions and actions in the automaton are caused by events. In
this section, we will ignore the -I and -R prefix, since the actual
event would be identical, but would occur on one of two possible
connections.

   Start            The Diameter application has signaled that a
                    connection should be initiated with the peer.

   R-Conn-CER       An acknowledgement is received stating that the
                    transport connection has been established, and
                    the associated CER has arrived.

   Rcv-Conn-Ack     A positive acknowledgement is received confirming
                    that the transport connection is established.

   Rcv-Conn-Nack    A negative acknowledgement was received stating
                    that the transport connection was not
                    established.

   Timeout          An application-defined timer has expired while
                    waiting for some event.

   Rcv-CER          A CER message from the peer was received.

   Rcv-CEA          A CEA message from the peer was received.

   Rcv-Non-CEA      A message other than CEA from the peer was
                    received.

   Peer-Disc        A disconnection indication from the peer was
                    received.

   Rcv-DPR          A DPR message from the peer was received.

   Rcv-DPA          A DPA message from the peer was received.

   Win-Election     An election was held, and the local node was the
                    winner.

   Send-Message     A message is to be sent.

   Rcv-Message      A message other than CER, CEA, DPR, DPA, DWR or
                    DWA was received.

   Stop             The Diameter application has signaled that a
                    connection should be terminated (e.g., on system
                    shutdown).
5.6.3. Actions

Actions in the automaton are caused by events and typically indicate
the transmission of packets and/or an action to be taken on the
connection. In this section we will ignore the I- and R- prefix,
since the actual action would be identical, but would occur on one of
two possible connections.

   Snd-Conn-Req     A transport connection is initiated with the
                    peer.

   Accept           The incoming connection associated with the
                    R-Conn-CER is accepted as the responder
                    connection.

   Reject           The incoming connection associated with the
                    R-Conn-CER is disconnected.

   Process-CER      The CER associated with the R-Conn-CER is
                    processed.

   Snd-CER          A CER message is sent to the peer.

   Snd-CEA          A CEA message is sent to the peer.

   Cleanup          If necessary, the connection is shutdown, and any
                    local resources are freed.

   Error            The transport layer connection is disconnected,
                    either politely or abortively, in response to an
                    error condition. Local resources are freed.

   Process-CEA      A received CEA is processed.

   Snd-DPR          A DPR message is sent to the peer.

   Snd-DPA          A DPA message is sent to the peer.

   Disc             The transport layer connection is disconnected,
                    and local resources are freed.

   Elect            An election occurs (see Section 5.6.4 for more
                    information).

   Snd-Message      A message is sent.

   Snd-DWR          A DWR message is sent.

   Snd-DWA          A DWA message is sent.

   Process-DWR      The DWR message is serviced.

   Process-DWA      The DWA message is serviced.

   Process          A message is serviced.
5.6.4. The Election Process

The election is performed on the responder. The responder compares
the Origin-Host received in the CER sent by its peer with its own
Origin-Host. If the local Diameter entity's Origin-Host is higher
than the peer's, a Win-Election event is issued locally.

The comparison proceeds by considering the shorter OctetString to be
padded with zeros so that its length is the same as the length of the
longer, then performing an octet-by-octet unsigned comparison with
the first octet being most significant. Any remaining octets are
assumed to have value 0x80.
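The election comparison can be sketched as follows. This is an illustrative sketch with invented function names: the shorter OctetString is treated as zero-padded to the longer length, and the comparison is octet-by-octet, unsigned, most significant octet first.

```python
# Sketch of the Origin-Host comparison used by the election
# (Section 5.6.4).  Function names are invented for this example.

def is_higher(local_host: bytes, peer_host: bytes) -> bool:
    n = max(len(local_host), len(peer_host))
    # Shorter OctetString is considered zero-padded to equal length.
    a = local_host.ljust(n, b"\x00")
    b = peer_host.ljust(n, b"\x00")
    # Python bytes compare octet-by-octet, first octet most
    # significant, unsigned -- exactly the required ordering for
    # equal-length strings.
    return a > b

def responder_wins(local_origin_host: bytes,
                   peer_origin_host: bytes) -> bool:
    # The responder issues Win-Election (its connection survives)
    # when the local Origin-Host is higher than the peer's.
    return is_higher(local_origin_host, peer_origin_host)
```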
6. Diameter message processing

This section describes how Diameter requests and answers are created
and processed.

6.1. Diameter Request Routing Overview

A request is sent towards its final destination using a combination
of the Destination-Realm and Destination-Host AVPs, in one of these
three combinations:

-  a request that is not able to be proxied (such as CER) MUST NOT
   contain either Destination-Realm or Destination-Host AVPs;

-  a request that needs to be sent to a home server serving a
   specific realm, but not to a specific server (such as the first
   request of a series of round-trips), MUST contain a
   Destination-Realm AVP, but MUST NOT contain a Destination-Host
   AVP; and

-  otherwise, a request that needs to be sent to a specific home
   server among those serving a given realm MUST contain both the
   Destination-Realm and Destination-Host AVPs.

The Destination-Host AVP is used as described above when the
destination of the request is fixed, which includes:

-  Authentication requests that span multiple round trips.

-  A Diameter message that uses a security mechanism that makes use
   of a pre-established session key shared between the source and
   the final destination of the message.

-  Server initiated messages that MUST be received by a specific
   Diameter client (e.g., access device), such as the
   Abort-Session-Request message, which is used to request that a
   particular user's session be terminated.

Note that an agent can forward a request to a host described in the
Destination-Host AVP only if the host in question is included in its
peer table (see Section 2.7). Otherwise, the request is routed based
on the Destination-Realm only (see Section 6.1.6).

The Destination-Realm AVP MUST be present if the message is
proxiable. Request messages that may be forwarded by Diameter agents
(proxies, redirects or relays) MUST also contain an
Acct-Application-Id AVP, an Auth-Application-Id AVP or a
Vendor-Specific-Application-Id AVP. A message that MUST NOT be
forwarded by Diameter agents (proxies, redirects or relays) MUST NOT
include the Destination-Realm in its ABNF. The value of the
Destination-Realm AVP MAY be extracted from the User-Name AVP, or
other application-specific methods.

When a message is received, the message is processed in the following
order:

1. If the message is destined for the local host, the procedures
   listed in Section 6.1.4 are followed.

2. If the message is intended for a Diameter peer with whom the
   local host is able to directly communicate, the procedures listed
   in Section 6.1.5 are followed. This is known as Request
   Forwarding.

3. The procedures listed in Section 6.1.6 are followed, which is
   known as Request Routing.

4. If none of the above is successful, an answer is returned with
   the Result-Code set to DIAMETER_UNABLE_TO_DELIVER, with the E-bit
   set.

See Section 7 for more detail on error handling.

For routing of Diameter messages to work within an administrative
domain, all Diameter nodes within the realm MUST be peers.

Note the processing rules contained in this section are intended to
be used as general guidelines to Diameter developers. Certain
implementations MAY use different methods than the ones described
here, and still comply with the protocol specification.

6.1.1. Originating a Request

When creating a request, in addition to any other procedures
described in the application definition for that specific request, an
Acct-Application-Id AVP, an Auth-Application-Id AVP or a
Vendor-Specific-Application-Id AVP must be included if the request is
proxiable.
6.1.2. Sending a Request

When sending a request, originated either locally, or as the result
of a forwarding or routing operation, the following procedures MUST
be followed:

-  the Hop-by-Hop Identifier should be set to a locally unique
   value;

-  the message should be saved in the list of pending requests.

Other actions to perform on the message based on the particular role
the agent is playing are described in the following sections.

6.1.3. Receiving Requests

A relay or proxy agent MUST check for forwarding loops when
receiving requests. A loop is detected if the server finds its own
identity in a Route-Record AVP. When such an event occurs, the agent
MUST answer with the Result-Code AVP set to DIAMETER_LOOP_DETECTED.

6.1.4. Processing Local Requests

A request is known to be for local consumption when one of the
following conditions occurs:

-  the Destination-Host AVP contains the local host's identity;

-  the Destination-Host AVP is not present, the Destination-Realm
   AVP contains a realm the server is configured to process locally,
   and the Diameter application is locally supported; or

-  both the Destination-Host and the Destination-Realm are not
   present.

When a request is locally processed, the rules in Section 6.2 should
be used to generate the corresponding answer.
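The four-step processing order of Section 6.1, together with the local-consumption tests above, can be sketched as a single decision function. This is an illustrative simplification with invented names: messages are plain dictionaries, the application-support check of the second local-consumption condition is folded into the realm lookup, and the numeric Result-Code for DIAMETER_UNABLE_TO_DELIVER is 3002.

```python
# Sketch (invented names) of the received-request processing order
# from Section 6.1 and the local-consumption tests of Section 6.1.4.

DIAMETER_UNABLE_TO_DELIVER = 3002

def process_request(msg, local_identities, local_realms,
                    peer_table, realm_table):
    dest_host = msg.get("Destination-Host")
    dest_realm = msg.get("Destination-Realm")
    # 1. Destined for the local host -> process locally (6.1.4).
    #    (Simplified: local realm membership stands in for the
    #    "application locally supported" check.)
    if (dest_host in local_identities
            or (dest_host is None and dest_realm in local_realms)
            or (dest_host is None and dest_realm is None)):
        return ("local", None)
    # 2. Destination-Host present in the peer table -> forward (6.1.5).
    if dest_host in peer_table:
        return ("forward", dest_host)
    # 3. Route on the Destination-Realm via the realm table (6.1.6).
    if dest_realm in realm_table:
        return ("route", realm_table[dest_realm])
    # 4. Otherwise: answer with the E-bit set and
    #    DIAMETER_UNABLE_TO_DELIVER.
    return ("error", DIAMETER_UNABLE_TO_DELIVER)
```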
6.1.5. Request Forwarding

Request forwarding is done using the Diameter Peer Table. The
Diameter peer table contains all of the peers that the local node is
able to directly communicate with. When a request is received, and
the host encoded in the Destination-Host AVP is one that is present
in the peer table, the message SHOULD be forwarded to the peer.

6.1.6. Request Routing

Diameter request message routing is done via realms and
applications. A Diameter message that may be forwarded by Diameter
agents (proxies, redirects or relays) MUST include the target realm
in the Destination-Realm AVP and one of the application
identification AVPs Auth-Application-Id, Acct-Application-Id or
Vendor-Specific-Application-Id. The realm MAY be retrieved from the
User-Name AVP, which is in the form of a Network Access Identifier
(NAI). The realm portion of the NAI is inserted in the
Destination-Realm AVP.

Diameter agents MAY have a list of locally supported realms and
applications, and MAY have a list of externally supported realms and
applications. When a request is received that includes a realm
and/or application that is not locally supported, the message is
routed to the peer configured in the Realm Routing Table (see
Section 2.7).

6.1.7. Redirecting Requests

When a redirect agent receives a request whose routing entry is set
to REDIRECT, it MUST reply with an answer message with the 'E' bit
set, while maintaining the Hop-by-Hop Identifier in the header, and
include the Result-Code AVP set to DIAMETER_REDIRECT_INDICATION.
Each of the servers associated with the routing entry are added in
separate Redirect-Host AVPs.

6.1.8. Relaying and Proxying Requests

A relay or proxy agent MUST append a Route-Record AVP to all
requests forwarded. The AVP contains the identity of the peer the
request was received from.

The Hop-by-Hop Identifier in the request is saved, and replaced with
a locally unique value. The source of the request is also saved,
which includes the IP address, port and protocol.
A relay or proxy agent MAY include the Proxy-Info AVP in requests if
it requires access to any local state information when the
corresponding response is received. The Proxy-Info AVP has certain
security implications and SHOULD contain an embedded HMAC with a
node-local key. Alternatively, an agent MAY simply use local storage
to store state information.

The message is then forwarded to the next hop, as identified in the
Realm Routing Table.

                      +------------------+
                      |     Diameter     |
                      |  Redirect Agent  |
                      +------------------+
                       ^    |    2. command + 'E' bit
          1. Request   |    |       Result-Code =
          joe@example.com   |       DIAMETER_REDIRECT_INDICATION +
                       |    |       Redirect-Host AVP(s)
                       |    v
         +-------------+  3. Request  +-------------+
         | example.com |------------->| example.net |
         |    Relay    |              |  Diameter   |
         |    Agent    |<-------------|   Server    |
         +-------------+  4. Answer   +-------------+

                 Figure 5: Diameter Redirect Agent

The receiver of an answer message with the 'E' bit set and the
Result-Code AVP set to DIAMETER_REDIRECT_INDICATION uses the
Hop-by-Hop field in the Diameter header to identify the request in
the pending message queue (see Section 5.3) that is to be
redirected.

Multiple Redirect-Host AVPs are allowed. The receiver of the answer
message with the 'E' bit set selects exactly one of these hosts as
the destination of the redirected message. If no transport
connection exists with the new agent, one is created, and the
request is sent directly to it.
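The redirect handling just described can be sketched as follows. This is an illustrative sketch, not a normative algorithm: messages are plain dictionaries, the function and parameter names are invented, selecting the first listed host is one arbitrary way to pick "exactly one", and 3006 is the standard numeric value of DIAMETER_REDIRECT_INDICATION.

```python
# Sketch (invented names) of redirected-request handling
# (Section 6.1.7): look up the original request by Hop-by-Hop
# Identifier, pick exactly one Redirect-Host, and re-send to it.

DIAMETER_REDIRECT_INDICATION = 3006

def handle_redirect(answer, pending, connections, connect):
    if (not answer.get("E-bit")
            or answer.get("Result-Code") != DIAMETER_REDIRECT_INDICATION):
        return None
    # The Hop-by-Hop field identifies the request in the pending queue.
    request = pending.pop(answer.get("Hop-by-Hop"), None)
    if request is None:
        return None                       # no matching queued request
    # Select exactly one of the hosts (here: simply the first).
    host = answer["Redirect-Host"][0]
    if host not in connections:
        # Create a transport connection to the new agent if needed.
        connections[host] = connect(host)
    return (host, request)
```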
Figure 6 provides an example of message routing using the procedures
listed in these sections.

   (Origin-Host=nas.mno.net)        (Origin-Host=nas.mno.net)
   (Origin-Realm=mno.net)           (Origin-Realm=mno.net)
   (Destination-Realm=example.com)  (Destination-Realm=example.com)
                                    (Route-Record=nas.example.net)
    +------+      ------>      +------+      ------>      +------+
    |      |     (Request)     |      |     (Request)     |      |
    | NAS  +-------------------+ DRL  +-------------------+ HMS  |
    |      |                   |      |                   |      |
    +------+      <------      +------+      <------      +------+
   example.net    (Answer)    example.net    (Answer)    example.com
   (Origin-Host=hms.example.com)  (Origin-Host=hms.example.com)
   (Origin-Realm=example.com)     (Origin-Realm=example.com)

                Figure 6: Routing of Diameter messages

6.2. Diameter Answer Processing

When a request is locally processed, the following procedures MUST
be applied to create the associated answer, in addition to any
additional procedures that MAY be discussed in the Diameter
application defining the command:

-  The same Hop-by-Hop Identifier in the request is used in the
   answer.

-  The same End-to-End Identifier in the request is used in the
   answer.

-  The local host's identity is encoded in the Origin-Host AVP.

-  The Destination-Host and Destination-Realm AVPs MUST NOT be
   present in the answer message.

-  The Result-Code AVP is added with its value indicating success or
   failure.

-  If the Session-Id is present in the request, it MUST be included
   in the answer.

-  Any Proxy-Info AVPs in the request MUST be added to the answer
   message, in the same order they were present in the request.

-  The 'P' bit is set to the same value as the one in the request.
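The answer-construction rules above can be sketched as follows. This is an illustrative sketch only: messages are plain dictionaries with invented field names, not wire-format Diameter messages.

```python
# Sketch (invented names) of answer construction per Section 6.2.

def build_answer(request, result_code, local_host):
    answer = {
        # Same Hop-by-Hop and End-to-End Identifiers as the request.
        "Hop-by-Hop": request["Hop-by-Hop"],
        "End-to-End": request["End-to-End"],
        # Result-Code indicates success or failure.
        "Result-Code": result_code,
        # The local host's identity goes in Origin-Host.
        "Origin-Host": local_host,
        # The 'P' bit mirrors the request.
        "P-bit": request.get("P-bit", False),
    }
    # Destination-Host / Destination-Realm MUST NOT be present in
    # answers, so they are simply never copied over.
    if "Session-Id" in request:
        answer["Session-Id"] = request["Session-Id"]
    # Proxy-Info AVPs are echoed in their original order.
    if "Proxy-Info" in request:
        answer["Proxy-Info"] = list(request["Proxy-Info"])
    return answer
```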
Note that the error messages (see Section 7.3) are also subjected to
the above processing rules.

6.2.1. Processing Received Answers

A Diameter client or proxy MUST match the Hop-by-Hop Identifier in
an answer received against the list of pending requests. The
corresponding message should be removed from the list of pending
requests. It SHOULD ignore answers received that do not match a
known Hop-by-Hop Identifier.

6.2.2. Relaying and Proxying Answers

If the answer is for a request which was proxied or relayed, the
agent MUST restore the original value of the Diameter header's
Hop-by-Hop Identifier field.

If the last Proxy-Info AVP in the message is targeted to the local
Diameter server, the AVP MUST be removed before the answer is
forwarded.

If a relay or proxy agent receives an answer with a Result-Code AVP
indicating a failure, it MUST NOT modify the contents of the AVP.
Any additional local errors detected SHOULD be logged, but not
reflected in the Result-Code AVP. If the agent receives an answer
message with a Result-Code AVP indicating success, and it wishes to
modify the AVP to indicate an error, it MUST modify the Result-Code
AVP to contain the appropriate error in the message destined towards
the access device, as well as include the Error-Reporting-Host AVP,
and it MUST issue an STR on behalf of the access device.

The agent MUST then send the answer to the host that it received the
original request from.

6.3. Origin-Host AVP

The Origin-Host AVP (AVP Code 264) is of type DiameterIdentity, and
MUST be present in all Diameter messages. This AVP identifies the
endpoint that originated the Diameter message. Relay agents MUST
NOT modify this AVP. The value of the Origin-Host AVP is guaranteed
to be unique within a single host. Note that the Origin-Host AVP
may resolve to more than one address, as the Diameter peer may
support more than one address. This AVP SHOULD be placed as close
to the Diameter header as possible.
6.7.  Routing AVPs

The AVPs defined in this section are Diameter AVPs used for routing
purposes.  These AVPs change as Diameter messages are processed by
agents, and therefore MUST NOT be protected by end-to-end security.

6.7.1.  Route-Record AVP

The Route-Record AVP (AVP Code 282) is of type DiameterIdentity.  The
identity added in this AVP MUST be the same as the one received in
the Origin-Host of the Capabilities Exchange message.

6.7.2.  Proxy-Info AVP

The Proxy-Info AVP (AVP Code 284) is of type Grouped.  The Grouped
Data field has the following ABNF grammar:

   Proxy-Info ::= < AVP Header: 284 >
                  { Proxy-Host }
                  { Proxy-State }
                * [ AVP ]

6.7.3.  Proxy-Host AVP

The Proxy-Host AVP (AVP Code 280) is of type DiameterIdentity.  This
AVP contains the identity of the host that added the Proxy-Info AVP.

6.7.4.  Proxy-State AVP

The Proxy-State AVP (AVP Code 33) is of type OctetString, contains
state local information, and MUST be treated as opaque data.

6.8.  Auth-Application-Id AVP

The Auth-Application-Id AVP (AVP Code 258) is of type Unsigned32 and
is used in order to advertise support of the Authentication and
Authorization portion of an application (see Section 2.4).  The
Auth-Application-Id MUST also be present in all Authentication and/or
Authorization messages that are defined in a separate Diameter
specification and have an Application ID assigned.

6.9.  Acct-Application-Id AVP

The Acct-Application-Id AVP (AVP Code 259) is of type Unsigned32 and
is used in order to advertise support of the Accounting portion of an
application (see Section 2.4).  The Acct-Application-Id MUST also be
present in all Accounting messages.

6.10.  Inband-Security-Id AVP

The Inband-Security-Id AVP (AVP Code 299) is of type Unsigned32 and
is used in order to advertise support of the Security portion of the
application.

Currently, the following values are supported, but there is ample
room to add new security Ids.

   NO_INBAND_SECURITY 0
      This peer does not support TLS.  This is the default value, if
      the AVP is omitted.

   TLS 1
      This node supports TLS security, as defined by [TLS].

6.11.  Vendor-Specific-Application-Id AVP

The Vendor-Specific-Application-Id AVP (AVP Code 260) is of type
Grouped and is used to advertise support of a vendor-specific
Diameter Application.  Exactly one of the Auth-Application-Id and
Acct-Application-Id AVPs MAY be present.

This AVP MUST also be present as the first AVP in all experimental
commands defined in the vendor-specific application.

This AVP SHOULD be placed as close to the Diameter header as
possible.

AVP Format

   <Vendor-Specific-Application-Id> ::= < AVP Header: 260 >
                                     1* [ Vendor-Id ]
                                    0*1 { Auth-Application-Id }
                                    0*1 { Acct-Application-Id }

6.12.  Redirect-Host AVP

One or more instances of this AVP MUST be present if the answer
message's 'E' bit is set and the Result-Code AVP is set to
DIAMETER_REDIRECT_INDICATION.

Upon receiving the above, the receiving Diameter node SHOULD forward
the request directly to one of the hosts identified in these AVPs.
The server contained in the selected Redirect-Host AVP SHOULD be used
for all messages pertaining to this session.
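The grouped structure of the Proxy-Info AVP (Section 6.7.2) can be modeled in memory as follows.  This is a hypothetical dict-based sketch for illustration, not a wire encoding.

```python
def proxy_info(proxy_host, proxy_state):
    """Illustrative in-memory model of the Proxy-Info grouped AVP
    (AVP Code 284): one Proxy-Host and one Proxy-State, per its ABNF.
    Hypothetical representation, not the actual octet encoding."""
    return {
        "avp_code": 284,                             # Proxy-Info
        "grouped": [
            {"avp_code": 280, "value": proxy_host},  # Proxy-Host (DiameterIdentity)
            {"avp_code": 33, "value": proxy_state},  # Proxy-State (opaque OctetString)
        ],
    }

pi = proxy_info("relay1.example.net", b"\x01\x02")
assert pi["avp_code"] == 284
# Other nodes MUST treat Proxy-State as opaque data: it is carried
# back unchanged in the answer, never interpreted.
assert pi["grouped"][1]["value"] == b"\x01\x02"
```

Keeping Proxy-State opaque is what lets a proxy park per-request state inside the message itself instead of in a local table.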
6.13.  Redirect-Host-Usage AVP

The Redirect-Host-Usage AVP (AVP Code 261) is of type Enumerated.
This AVP MAY be present in answer messages whose 'E' bit is set and
the Result-Code AVP is set to DIAMETER_REDIRECT_INDICATION.

When present, this AVP dictates how the routing entry resulting from
the Redirect-Host is to be used.  The following values are supported:

   DONT_CACHE 0
      The host specified in the Redirect-Host AVP should not be
      cached.  This is the default value.

   ALL_SESSION 1
      All messages within the same session, as defined by the same
      value of the Session-ID AVP, MAY be sent to the host specified
      in the Redirect-Host AVP.

   ALL_REALM 2
      All messages destined for the realm requested MAY be sent to
      the host specified in the Redirect-Host AVP.

   REALM_AND_APPLICATION 3
      All messages for the application requested to the realm
      specified MAY be sent to the host specified in the
      Redirect-Host AVP.

   ALL_APPLICATION 4
      All messages for the application requested MAY be sent to the
      host specified in the Redirect-Host AVP.

   ALL_HOST 5
      All messages that would be sent to the host that generated the
      Redirect-Host MAY be sent to the host specified in the
      Redirect-Host AVP.

   ALL_USER 6
      All messages for the user requested MAY be sent to the host
      specified in the Redirect-Host AVP.

Note that once a host created due to a redirect indication is no
longer reachable, any associated peer and routing table entries MUST
be deleted.

6.14.  Redirect-Max-Cache-Time AVP

The Redirect-Max-Cache-Time AVP (AVP Code 262) is of type Unsigned32.
This AVP contains the maximum number of seconds the peer and route
table entries, created as a result of the Redirect-Host, will be
cached.

This AVP MUST be present in answer messages whose 'E' bit is set, the
Result-Code AVP is set to DIAMETER_REDIRECT_INDICATION, and the
Redirect-Host-Usage AVP is set to a non-zero value.
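The interaction of Redirect-Host-Usage and Redirect-Max-Cache-Time can be sketched as follows.  This is a hypothetical helper, not specification text: entries are cached only for non-DONT_CACHE usage values, and carry an expiry derived from Redirect-Max-Cache-Time.

```python
import time

DONT_CACHE = 0  # Redirect-Host-Usage value 0: do not cache the host

def cache_redirect(table, host, usage, max_cache_time, now=None):
    """Illustrative handling of a redirect indication: cache a routing
    entry for the redirected host unless usage is DONT_CACHE, expiring
    after Redirect-Max-Cache-Time seconds (hypothetical helper)."""
    if usage == DONT_CACHE:
        return
    now = time.time() if now is None else now
    table[host] = {"usage": usage, "expires": now + max_cache_time}

routes = {}
cache_redirect(routes, "hms.example.com", usage=1, max_cache_time=300,
               now=1000.0)
assert routes["hms.example.com"]["expires"] == 1300.0
cache_redirect(routes, "other.example.com", usage=DONT_CACHE,
               max_cache_time=300)
assert "other.example.com" not in routes
```

A real implementation would also delete the entry early if the redirected host becomes unreachable, as the section above requires.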
6.15.  E2E-Sequence AVP

The E2E-Sequence AVP (AVP Code 300) provides anti-replay protection
for end to end messages and is of type Grouped.  It contains a random
value (an OctetString with a nonce) and a counter (an Integer).  For
each end-to-end peer with which a node communicates (or remembers
communicating) a different nonce value MUST be used, and the counter
is initiated at zero and increases by one each time this AVP is
emitted to that peer.  This AVP MUST be included in all messages
which use end-to-end protection (e.g., CMS signing or encryption).

7.  Error Handling

There are two different types of errors in Diameter: protocol errors
and application errors.  A protocol error is one that occurs at the
base protocol level, and MAY require per hop attention (e.g., message
routing error).  Application errors, on the other hand, generally
occur due to a problem with a function specified in a Diameter
application (e.g., user authentication, Missing AVP).

Result-Code AVP values that are used to report protocol errors MUST
only be present in answer messages whose 'E' bit is set.  When a
request message is received that causes a protocol error, an answer
message is returned with the 'E' bit set, and the Result-Code AVP is
set to the appropriate protocol error value.  As the answer is sent
back towards the originator of the request, each proxy or relay agent
MAY take action on the message.

[Figure 7: Example of Protocol Error causing answer message.  The
ASCII diagram was garbled in extraction; it shows (1) a request
forwarded from Relay 1 to Relay 2, whose link to the Home Server is
broken, (2) an answer with the 'E' bit set returned by Relay 2
(unable to forward), and (3) the request then sent through the
alternate Relay 3.]

Figure 7 provides an example of a message forwarded upstream by a
Diameter relay.  When the message is received by Relay 2, and it
detects that it cannot forward the request to the home server, an
answer message is returned with the 'E' bit set and the Result-Code
AVP set to DIAMETER_UNABLE_TO_DELIVER.  Given that this error falls
within the protocol error category, Relay 1 would take special
action, and given the error, attempt to route the message through its
alternate Relay 3.
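The per-hop failover illustrated by Figure 7 (retrying through an alternate peer on errors in the protocol-error class) can be modeled as follows.  This is a hypothetical helper for illustration; the function name and the set of retryable codes reflect this section's narrative, not a normative algorithm.

```python
# Protocol-error Result-Codes for which this sketch retries via an
# alternate peer: DIAMETER_UNABLE_TO_DELIVER (3002), DIAMETER_TOO_BUSY
# (3004) and DIAMETER_LOOP_DETECTED (3005).  Illustrative choice only.
RETRYABLE = {3002, 3004, 3005}

def next_peer(result_code, peers, failed_peer):
    """Return an alternate peer to try, or None when the error is not
    one a per-hop retry can address, or no alternate exists."""
    if result_code not in RETRYABLE:
        return None
    alternates = [p for p in peers if p != failed_peer]
    return alternates[0] if alternates else None

assert next_peer(3002, ["relay2", "relay3"], "relay2") == "relay3"
assert next_peer(3003, ["relay2", "relay3"], "relay2") is None  # REALM_NOT_SERVED
```

This mirrors the figure: Relay 1, on receiving 3002 from Relay 2, retries via Relay 3 rather than passing the failure back to the access device.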
Application errors do not require any proxy or relay agent
involvement, and therefore the message would be forwarded back to the
originator of the request.

   +---------+ 1. Request  +---------+ 2. Request  +---------+
   | Access  |------------>|Diameter |------------>|Diameter |
   |         |             |         |             |  Home   |
   | Device  |<------------|  Relay  |<------------| Server  |
   +---------+ 4. Answer   +---------+ 3. Answer   +---------+
              (Missing AVP)           (Missing AVP)

        Figure 8: Example of Application Error Answer message

Figure 8 provides an example of a Diameter message that caused an
application error.  When application errors occur, the Diameter
entity reporting the error clears the 'R' bit in the Command Flags,
and adds the Result-Code AVP with the proper value.

In case there are multiple errors, the Diameter node MUST report only
the first error it encountered (detected possibly in some
implementation dependent order).

There are certain Result-Code AVP application errors that require
additional AVPs to be present in the answer.  In these cases, the
Diameter node that sets the Result-Code AVP to indicate the error
MUST add the AVPs.  Examples are:

-  An unrecognized AVP received with the 'M' bit (Mandatory bit) set
   causes an answer to be sent with the Result-Code AVP set to
   DIAMETER_AVP_UNSUPPORTED, and the Failed-AVP AVP containing the
   offending AVP.

-  An AVP that is received with an unrecognized value causes an
   answer to be returned with the Result-Code AVP set to
   DIAMETER_INVALID_AVP_VALUE, with the Failed-AVP AVP containing the
   AVP causing the error.

-  A command is received with an AVP that is omitted, yet is
   mandatory according to the command's ABNF.  The receiver issues an
   answer with the Result-Code set to DIAMETER_MISSING_AVP, and
   creates an AVP with the AVP Code and other fields set as expected
   in the missing AVP.  The created AVP is then added to the
   Failed-AVP AVP.

The Result-Code AVP describes the error that the Diameter node
encountered in its processing.  The specific errors that can be
described by this AVP are described in the following section.

7.1.  Result-Code AVP

The Result-Code AVP (AVP Code 268) is of type Unsigned32 and
indicates whether a particular request was completed successfully or
whether an error occurred.  All Diameter answer messages defined in
IETF applications MUST include one Result-Code AVP.  A non-successful
Result-Code AVP (one containing a non 2xxx value other than
DIAMETER_REDIRECT_INDICATION) MUST include the Error-Reporting-Host
AVP if the host setting the Result-Code AVP is different from the
identity encoded in the Origin-Host AVP.

The Result-Code data field contains an IANA-managed 32-bit address
space representing errors (see Section 11.4).  Diameter provides the
following classes of errors, all identified by the thousands digit in
the decimal notation:

   1xxx (Informational)
   2xxx (Success)
   3xxx (Protocol Errors)
   4xxx (Transient Failures)
   5xxx (Permanent Failure)

A non-recognized class (one whose first digit is not defined in this
section) MUST be handled as a permanent failure.

7.1.1.  Informational

Errors that fall within this category are used to inform the
requester that a request could not be satisfied, and additional
action is required on its part before access is granted.

DIAMETER_MULTI_ROUND_AUTH 1001
   This informational error is returned by a Diameter server to
   inform the access device that the authentication mechanism being
   used requires multiple round trips, and a subsequent request needs
   to be issued in order for access to be granted.

7.1.2.  Success

Errors that fall within the Success category are used to inform a
peer that a request has been successfully completed.

DIAMETER_SUCCESS 2001
   The Request was successfully completed.

DIAMETER_LIMITED_SUCCESS 2002
   When returned, the request was successfully completed, but
   additional processing is required by the application in order to
   provide service to the user.
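The thousands-digit classification above can be expressed directly in code.  This is an illustrative helper (the function name is invented), including the rule that a non-recognized class MUST be handled as a permanent failure.

```python
def result_code_class(code):
    """Classify a Result-Code AVP value by its thousands digit
    (Section 7.1).  An unrecognized class MUST be handled as a
    permanent failure.  Hypothetical helper, not specification text."""
    classes = {
        1: "Informational",
        2: "Success",
        3: "Protocol Errors",
        4: "Transient Failures",
        5: "Permanent Failure",
    }
    return classes.get(code // 1000, "Permanent Failure")

assert result_code_class(2001) == "Success"            # DIAMETER_SUCCESS
assert result_code_class(3002) == "Protocol Errors"    # DIAMETER_UNABLE_TO_DELIVER
assert result_code_class(4002) == "Transient Failures" # DIAMETER_OUT_OF_SPACE
assert result_code_class(9999) == "Permanent Failure"  # non-recognized class
```

Treating unknown classes as permanent failures keeps older nodes safe when newer applications mint result codes outside the five defined ranges.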
7.1.3.  Protocol Errors

Errors that fall within the Protocol Error category SHOULD be treated
on a per-hop basis, and Diameter proxies MAY attempt to correct the
error, if it is possible.  Note that these, and only these, errors
MUST only be used in answer messages whose 'E' bit is set.

DIAMETER_COMMAND_UNSUPPORTED 3001
   The Request contained a Command-Code that the receiver did not
   recognize or support.  This MUST be used when a Diameter node
   receives an experimental command that it does not understand.

DIAMETER_UNABLE_TO_DELIVER 3002
   This error is given when Diameter can not deliver the message to
   the destination, either because no host within the realm
   supporting the required application was available to process the
   request, or because the Destination-Host AVP was given without the
   associated Destination-Realm AVP.

DIAMETER_REALM_NOT_SERVED 3003
   The intended realm of the request is not recognized.

DIAMETER_TOO_BUSY 3004
   When returned, a Diameter node SHOULD attempt to send the message
   to an alternate peer.  This error MUST only be used when a
   specific server is requested, and it cannot provide the requested
   service.

DIAMETER_LOOP_DETECTED 3005
   An agent detected a loop while trying to get the message to the
   intended recipient.  The message MAY be sent to an alternate peer,
   if one is available, but the peer reporting the error has
   identified a configuration problem.

DIAMETER_REDIRECT_INDICATION 3006
   A redirect agent has determined that the request could not be
   satisfied locally, and the initiator of the request should direct
   the request directly to the server, whose contact information has
   been added to the response.  When set, the Redirect-Host AVP MUST
   be present.

DIAMETER_APPLICATION_UNSUPPORTED 3007
   A request was sent for an application that is not supported.

DIAMETER_INVALID_HDR_BITS 3008
   A request was received whose bits in the Diameter header were
   either set to an invalid combination, or to a value that is
   inconsistent with the command code's definition.

DIAMETER_INVALID_AVP_BITS 3009
   A request was received that included an AVP whose flag bits are
   set to an unrecognized value, or that is inconsistent with the
   AVP's definition.

DIAMETER_UNKNOWN_PEER 3010
   A CER was received from an unknown peer.

7.1.4.  Transient Failures

Errors that fall within the transient failures category are used to
inform a peer that the request could not be satisfied at the time it
was received, but MAY be able to satisfy the request in the future.

DIAMETER_AUTHENTICATION_REJECTED 4001
   The authentication process for the user failed, most likely due to
   an invalid password used by the user.  Further attempts MUST only
   be tried after prompting the user for a new password.

DIAMETER_OUT_OF_SPACE 4002
   A Diameter node received the accounting request but was unable to
   commit it to stable storage due to a temporary lack of space.

7.1.5.  Permanent Failures

Errors that fall within the permanent failures category are used to
inform the peer that the request failed, and should not be attempted
again.

DIAMETER_AVP_UNSUPPORTED 5001
   The peer received a message that contained an AVP that is not
   recognized or supported and was marked with the Mandatory bit.  A
   Diameter message with this error MUST contain one or more
   Failed-AVP AVPs containing the AVPs that caused the failure.

DIAMETER_UNKNOWN_SESSION_ID 5002
   The request contained an unknown Session-Id.
7.5.  Failed-AVP AVP

A Diameter message MAY contain one Failed-AVP AVP, containing the
entire AVP that could not be processed successfully.  If the failure
reason is omission of a required AVP, an AVP with the missing AVP
code, the missing vendor id, and a zero filled payload of the minimum
required length for the omitted AVP will be added.

AVP Format

   <Failed-AVP> ::= < AVP Header: 279 >
                 1* {AVP}

7.6.  Experimental-Result AVP

The Experimental-Result AVP (AVP Code 297) is of type Grouped, and
indicates whether a particular vendor-specific request was completed
successfully or whether an error occurred.  Its Data field has the
following ABNF grammar:

   Experimental-Result ::= < AVP Header: 297 >
                           { Vendor-Id }
                           { Experimental-Result-Code }

The Vendor-Id AVP (see Section 5.3) in this grouped AVP identifies
the vendor responsible for the assignment of the result code which
follows.  All Diameter answer messages defined in vendor-specific
applications MUST include either one Result-Code AVP or one
Experimental-Result AVP.

7.7.  Experimental-Result-Code AVP

The Experimental-Result-Code AVP (AVP Code 298) is of type Unsigned32
and contains a vendor-assigned value representing the result of
processing the request.  It is recommended that vendor-specific
result codes follow the same conventions given for the Result-Code
AVP regarding the different types of result codes and the handling of
errors (for non 2xxx values).
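The zero-filled placeholder that Section 7.5 requires for an omitted mandatory AVP can be sketched as below.  The encoding is simplified for illustration (AVP header with 'V' flag, 24-bit length, optional Vendor-Id), and the helper name is hypothetical.

```python
import struct

def missing_avp_placeholder(avp_code, min_length, vendor_id=None):
    """Illustrative construction of the AVP placed inside Failed-AVP
    when a mandatory AVP is omitted: the missing AVP code, the missing
    vendor id (if any), and a zero-filled payload of the minimum
    required length (Section 7.5).  Simplified, not a full codec."""
    flags = 0x80 if vendor_id is not None else 0x00   # 'V' (vendor) bit
    length = 8 + (4 if vendor_id is not None else 0) + min_length
    avp = struct.pack("!I", avp_code)                 # 32-bit AVP Code
    avp += bytes([flags])
    avp += struct.pack("!I", length)[1:]              # 24-bit AVP Length
    if vendor_id is not None:
        avp += struct.pack("!I", vendor_id)
    return avp + b"\x00" * min_length                 # zero-filled payload

avp = missing_avp_placeholder(avp_code=268, min_length=4)  # e.g. Result-Code
assert len(avp) == 12 and avp[-4:] == b"\x00\x00\x00\x00"
```

The zero payload carries no data; it only tells the sender *which* AVP, at *what* minimum size, was expected.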
8.  Diameter User Sessions

Diameter can provide two different types of services to applications.
The first involves authentication and authorization, and can
optionally make use of accounting.  The second only makes use of
accounting.

When a service makes use of the authentication and/or authorization
portion of an application, and a user requests access to the network,
the Diameter client issues an auth request to its local server.

When a Diameter server authorizes a user to use network resources for
a finite amount of time, and it is willing to extend the
authorization via a future request, it MUST add the
Authorization-Lifetime AVP to the answer message.  The
Authorization-Lifetime AVP defines the maximum number of seconds a
user MAY make use of the resources before another authorization
request is expected by the server.  The Auth-Grace-Period AVP
contains the number of seconds following the expiration of the
Authorization-Lifetime, after which the server will release all state
information related to the user's session.  Services provided past
the expiration of the Authorization-Lifetime and Auth-Grace-Period
AVPs are the responsibility of the access device.

Note that if payment for services is expected by the serving realm
from the user's home realm, the Authorization-Lifetime AVP, combined
with the Auth-Grace-Period AVP, implies the maximum length of the
session the home realm is willing to be fiscally responsible for.  Of
course, the actual cost of services rendered is clearly outside the
scope of the protocol.

The base protocol does not include any authorization request
messages, since these are largely application-specific and are
defined in a Diameter application document.  However, the base
protocol does define a set of messages that is used to terminate user
sessions.  These are used to allow servers that maintain state
information to free resources.
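The timing relationship between Authorization-Lifetime and Auth-Grace-Period described above can be sketched numerically.  A hypothetical helper for illustration only:

```python
def reauth_deadlines(auth_time, authorization_lifetime, auth_grace_period):
    """Illustrative timing from Section 8: another authorization
    request is expected within Authorization-Lifetime seconds of the
    authorization, and the server releases all state for the session
    Auth-Grace-Period seconds after that deadline.  Hypothetical
    helper; all times in seconds."""
    reauth_by = auth_time + authorization_lifetime
    state_released_at = reauth_by + auth_grace_period
    return reauth_by, state_released_at

reauth_by, released = reauth_deadlines(auth_time=0,
                                       authorization_lifetime=3600,
                                       auth_grace_period=60)
assert reauth_by == 3600 and released == 3660
```

The sum of the two AVPs is also, per the note above, the longest session the home realm is implicitly agreeing to pay for.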
The auth request is defined in a service specific Diameter
application (e.g., NASREQ).  The request contains a Session-Id AVP,
which is used in subsequent messages (e.g., subsequent authorization,
accounting, etc.) relating to the user's session.  The Session-Id AVP
is a means for the client and servers to correlate a Diameter message
with a user session.

An access device that does not expect to send a re-authorization or a
session termination request to the server MAY include the
Auth-Session-State AVP with the value set to NO_STATE_MAINTAINED as a
hint to the server.  If the server accepts the hint, it agrees that
since no session termination message will be received once service to
the user is terminated, it cannot maintain state for the session.  If
the answer message from the server contains a different value in the
Auth-Session-State AVP (or the default value if the AVP is absent),
the access device MUST follow the server's directives.  Note that the
value NO_STATE_MAINTAINED MUST NOT be set in subsequent
re-authorization requests and answers.

When a service only makes use of the Accounting portion of the
Diameter protocol, even in combination with an application, the
session termination messages are not used, since a session is
signaled as being terminated by issuing an accounting stop message.
However, the Session-Id is still used to identify user sessions.

8.1.  Authorization Session State Machine

This section contains a set of finite state machines, representing
the life cycle of Diameter sessions,
and which MUST be observed by all Diameter implementations that make
use of the authentication and/or authorization portion of a Diameter
application.  The term Service-Specific below refers to a message
defined in a Diameter application (e.g., Mobile IPv4, NASREQ).

There are four different authorization session state machines
supported in the Diameter base protocol.  The first two describe a
session in which the server is maintaining session state, indicated
by the value of the Auth-Session-State AVP (or its absence).  One
describes the session from a client perspective, the other from a
server perspective.  The second two state machines are used when the
server does not maintain session state.  Here again, one describes
the session from a client perspective, the other from a server
perspective.

When a session is moved to the Idle state, any resources that were
allocated for the particular session must be released.  Any event not
listed in the state machines MUST be considered as an error
condition, and an answer, if applicable, MUST be returned to the
originator of the message.

In the state table, the event 'Failure to send X' means that the
Diameter agent is unable to send command X to the desired
destination.  This could be due to the peer being down, or due to the
peer sending back a transient failure or temporary protocol error
notification DIAMETER_TOO_BUSY or DIAMETER_LOOP_DETECTED in the
Result-Code AVP of the corresponding Answer command.  The event 'X
successfully sent' is the complement of 'Failure to send X'.

The following state machine is observed by a client when state is
maintained on the server:

   CLIENT, STATEFUL
   State     Event                              Action        New State
   --------------------------------------------------------------------
   Open      Session-Timeout expires on         Send STR      Discon
             access device

   Open      ASR Received, client will comply   Send ASA      Discon
             with request to end the session    with Result-
                                                Code =
                                                SUCCESS,
                                                Send STR

   Open      ASR Received, client will not      Send ASA      Open
             comply with request to end the     with Result-
             session                            Code !=
                                                SUCCESS

   Open      Authorization-Lifetime +           Send STR      Discon
             Auth-Grace-Period expires on
             access device

   Discon    ASR Received                       Send ASA      Discon

   Discon    STA Received                       Discon.       Idle
                                                user/device

   [Other rows of this table were lost in extraction.]

The following state machine is observed by a server when it is
maintaining state for the session:

   SERVER, STATEFUL
   State     Event                              Action        New State
   --------------------------------------------------------------------
   Idle      Service-specific authorization     Send          Open
             request received, and user is      successful
             authorized                         serv.
                                                specific
                                                answer

   Idle      Service-specific authorization     Send failed   Idle
             request received, and user is      serv.
             not authorized                     specific
                                                answer

   Open      Service-specific authorization     Send          Open
             request received, and user is      successful
             authorized                         serv.
                                                specific
                                                answer

   Open      Service-specific authorization     Send failed   Idle
             request received, and user is      serv.
             not authorized                     specific
                                                answer,
                                                Cleanup

   Open      Home server wants to terminate     Send ASR      Discon
             the service

   Open      Authorization-Lifetime (and        Cleanup       Idle
             Auth-Grace-Period) expires on
             home server

   Open      Session-Timeout expires on         Cleanup       Idle
             home server

   Discon    Failure to send ASR                Wait,         Discon
                                                resend ASR

   Discon    ASR successfully sent and ASA      Cleanup       Idle
             Received with Result-Code

   Not       ASA Received                       None          No Change
   Discon

   Any       STR Received                       Send STA,     Idle
                                                Cleanup
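A few of the CLIENT, STATEFUL transitions above can be rendered executably.  This is a hypothetical sketch: event names are shorthand invented here, only a subset of rows is modeled, and unlisted events are treated as the error condition the section requires.

```python
# Illustrative subset of the CLIENT, STATEFUL table (Section 8.1):
# (state, event) -> (action, new state).  Names are shorthand.
TRANSITIONS = {
    ("Open", "asr_received_will_comply"):
        ("send ASA with Result-Code = SUCCESS, send STR", "Discon"),
    ("Open", "asr_received_will_not_comply"):
        ("send ASA with Result-Code != SUCCESS", "Open"),
    ("Open", "session_timeout_expires"): ("send STR", "Discon"),
    ("Open", "auth_lifetime_plus_grace_expires"): ("send STR", "Discon"),
    ("Discon", "asr_received"): ("send ASA", "Discon"),
    ("Discon", "sta_received"): ("disconnect user/device", "Idle"),
}

def step(state, event):
    """Apply one transition.  Any event not listed MUST be considered
    an error condition (modeled here as an exception)."""
    if (state, event) not in TRANSITIONS:
        raise ValueError("error condition: event %r in state %r"
                         % (event, state))
    return TRANSITIONS[(state, event)]

action, state = step("Open", "session_timeout_expires")
assert state == "Discon"
action, state = step(state, "sta_received")
assert state == "Idle"
```

Driving the table as data rather than nested conditionals makes it easy to audit the implementation against the specification row by row.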
and hence accounting connectivity problems are required to cause the serviced user to be disconnected. Accounting Session State Machine The following state machines MUST be supported for applications that have an accounting portion or that require only accounting services. the Diameter base protocol defines one optional server side state machine that MAY be followed by applications that require keeping track of the session state at the accounting server. Idle request received. Note that such tracking is incompatible with the ability to sustain long duration connectivity problems. and other tasks based on these records. Applications MAY define requirements on when to accept accounting records based on the used value of Accounting-Realtime-Required AVP. Standards Track [Page 96] . records produced by the client Calhoun. correlation. The server side in the accounting state machine depends in some cases on the particular application. and does not place any standards requirement on the processing of these records. The tasks can happen either immediately after record reception or in a post-processing phase.8 for Accounting AVPs. The default server side state machine requires the reception of accounting records in any order and at any time. or due to the peer sending back a transient failure or temporary protocol error notification DIAMETER_OUT_OF_SPACE.g. the event ’Failure to send’ means that the Diameter client is unable to communicate with the desired destination. In the state table. Ts MAY be set to two times the value of the Acct_Interim_Interval so as to avoid the accounting session in the Diameter server to change to Idle state in case of short transient network failure. PendingE and PendingB stand for pending states to wait for an answer to an accounting request related to a Start. or DIAMETER_LOOP_DETECTED in the Result-Code AVP of the Accounting Answer command. respectively. 
ACCOUNTING State Event Action New State ------------------------------------------------------------Idle Client or device requests Send PendingS access accounting start req. Stop. which the value should be reasonably higher than the Acct_Interim_Interval value. This state machine is the third state machine in this section. The state machine is supervised by a supervision session timer Ts. DIAMETER_TOO_BUSY. Any event not listed in the state machines MUST be considered as an error condition.RFC 3588 Diameter Based Protocol September 2003 may be lost by the server which no longer accepts them after the connectivity is re-established. This could be due to the peer being down. CLIENT. Note that the action ’Disconnect user/dev’ MUST have an effect also to the authorization session state table. The states PendingS. if applicable. Interim. e. MUST be returned to the originator of the message. PendingI. et al. if the given application has both authentication/authorization and accounting portions. PendingL. cause the STR message to be sent. Idle Client or device requests a one-time service Send PendingE accounting event req Send record PendingB Idle Records in storage Calhoun. Standards Track [Page 97] .. The event ’Failed answer’ means that the Diameter client received a non-transient failure notification in the Accounting Answer command. and a corresponding answer. Event or buffered record.. Standards Track [Page 98] .. Send Idle accounting start answer Send Idle accounting event answer Send Idle accounting interim answer Send Idle accounting stop answer Send Idle accounting answer. STATELESS ACCOUNTING State Event Action New State ------------------------------------------------------------Idle Accounting start request received. Idle Interim record received. et al. STATEFUL ACCOUNTING State Event Action New State ------------------------------------------------------------Idle Accounting start request received. Calhoun. 
   SERVER, STATELESS ACCOUNTING
   State     Event                              Action        New State
   --------------------------------------------------------------------
   Idle      Accounting start request           Send          Idle
             received, and successfully         accounting
             processed                          start answer

   Idle      Accounting event request           Send          Idle
             received, and successfully         accounting
             processed                          event answer

   Idle      Interim record received, and       Send          Idle
             successfully processed             accounting
                                                interim
                                                answer

   Idle      Accounting stop request            Send          Idle
             received, and successfully         accounting
             processed                          stop answer

   Idle      Accounting request received,       Send          Idle
             no space left to store             accounting
             records                            answer,
                                                Result-Code =
                                                OUT_OF_SPACE

   SERVER, STATEFUL ACCOUNTING
   State     Event                              Action        New State
   --------------------------------------------------------------------
   Idle      Accounting start request           Send          Open
             received, and successfully         accounting
             processed                          start answer,
                                                Start Ts

   Idle      Accounting event request           Send          Idle
             received, and successfully         accounting
             processed                          event answer

   Idle      Accounting request received,       Send          Idle
             no space left to store             accounting
             records                            answer,
                                                Result-Code =
                                                OUT_OF_SPACE

   Open      Interim record received, and       Send          Open
             successfully processed             accounting
                                                interim
                                                answer,
                                                Restart Ts

   Open      Accounting stop request            Send          Idle
             received, and successfully         accounting
             processed                          stop answer,
                                                Stop Ts

   Open      Accounting request received,       Send          Idle
             no space left to store             accounting
             records                            answer,
                                                Result-Code =
                                                OUT_OF_SPACE,
                                                Stop Ts

   Open      Session supervision timer Ts       Stop Ts       Idle
             expired

8.3.  Server-Initiated Re-Auth

A Diameter server may initiate a re-authentication and/or
re-authorization service for a particular session by issuing a
Re-Auth-Request (RAR).  For example, for pre-paid services, the
Diameter server that originally authorized a session may need some
confirmation that the user is still using the services.

An access device that receives a RAR message with Session-Id equal to
a currently active session MUST initiate a re-auth towards the user,
if the service supports this particular feature.  Each Diameter
application MUST state whether service-initiated re-auth is
supported, since some applications do not allow access devices to
prompt the user for re-auth.
8.3.1.  Re-Auth-Request

The Re-Auth-Request (RAR), indicated by the Command-Code set to 258
and the message flags' 'R' bit set, may be sent by any server to the
access device that is providing session service, to request that the
user be re-authenticated and/or re-authorized.

Message Format

   <RAR> ::= < Diameter Header: 258, REQ, PXY >
             < Session-Id >
             { Origin-Host }
             { Origin-Realm }
             { Destination-Realm }
             { Destination-Host }
             { Auth-Application-Id }
             { Re-Auth-Request-Type }
             [ User-Name ]
             [ Origin-State-Id ]
           * [ Proxy-Info ]
           * [ Route-Record ]
           * [ AVP ]
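The RAR/RAA pairing illustrates a general Diameter convention: request and answer share a Command-Code (258 here) and are distinguished by the 'R' bit in the Command Flags.  A hypothetical sketch of just the header fields involved:

```python
# Illustrative Diameter command-flag handling (Sections 8.3.1/8.3.2):
# RAR and RAA both use Command-Code 258; the request has the 'R' bit
# set and the answer has it clear.  Simplified header model only.
R_BIT = 0x80  # 'R' (Request) flag in the Command Flags octet
P_BIT = 0x40  # 'P' (Proxiable) flag

def make_header(command_code, request, proxiable=True):
    """Hypothetical helper building a minimal header description."""
    flags = (R_BIT if request else 0) | (P_BIT if proxiable else 0)
    return {"command_code": command_code, "flags": flags}

rar = make_header(258, request=True)    # Re-Auth-Request
raa = make_header(258, request=False)   # Re-Auth-Answer
assert rar["flags"] & R_BIT and not (raa["flags"] & R_BIT)
assert rar["command_code"] == raa["command_code"] == 258
```

The same pattern holds for STR/STA below, which share Command-Code 275.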
PXY > Session-Id > Result-Code } Origin-Host } Origin-Realm } User-Name ] Origin-State-Id ] Error-Message ] Error-Reporting-Host ] Failed-AVP ] Redirect-Host ] Redirect-Host-Usage ] Redirect-Host-Cache-Time ] Proxy-Info ] AVP ] 8. administrative action. or because the access device is unwilling to provide the type of service requested in the authorization. to notify it that the session is no longer active. to be notified when that session is no longer active. due to a sudden resource shortage in the access device. converting the result from success to failure. For example. prior to forwarding the message to the access device. The access device also MUST issue an STR for a session that was authorized but never actually started. termination upon receipt of an Abort-SessionRequest (see below). including user logoff. When a user session that required Diameter authorization terminates. Standards Track [Page 103] . etc. a proxy may modify an authorization answer. both for tracking purposes as well as to allow stateful agents to release any resources that they may have provided for the user’s session. session state) associated with the Session-Id specified in the STR. PXY > Session-Id > Origin-Host } Origin-Realm } Destination-Realm } Auth-Application-Id } Termination-Cause } User-Name ] Destination-Host ] Class ] Origin-State-Id ] Proxy-Info ] Route-Record ] AVP ] Calhoun. Message Format <STR> ::= < < { { { { { [ [ * [ [ * [ * [ * [ Diameter Header: 275. REQ.. Standards Track [Page 104] . or when the Authorization-Lifetime and the AuthGrace-Period AVPs expires without receipt of a re-authorization request. and return a Session-Termination-Answer. expiration of either of these timers implies that the access device may have unexpectedly shut down. regardless of whether an STR for that session is received. The access device is not expected to provide service beyond the expiration of these timers. 
A Diameter server also MUST clean up resources when the SessionTimeout expires.g. indicated by the Command-Code set to 275 and the Command Flags’ ’R’ bit set. A Diameter server that receives an STR message MUST clean up resources (e. et al. 8.RFC 3588 Diameter Based Protocol September 2003 NO_STATE_MAINTAINED.1. a proxy that causes an authorized session not to be started MUST issue an STR to the Diameter server that authorized the session. is sent by the access device to inform the Diameter Server that an authenticated and/or authorized session is being terminated.4. Session-Termination-Request The Session-Termination-Request (STR). thus. since the access device has no way of knowing that the session had been authorized. the Diameter Server MUST release all resources for the session indicated by the Session-Id AVP. Whether the access Calhoun.4. indicated by the Command-Code set to 275 and the message flags’ ’R’ bit clear. et al. The Result-Code AVP MUST be present.. Session-Termination-Answer The Session-Termination-Answer (STA). Upon sending or receipt of the STA. and MAY contain an indication that an error occurred while servicing the STR. an operator may maintain a management server for the purpose of issuing ASRs to administratively remove users from the network. An access device that receives an ASR with Session-ID equal to a currently active session MAY stop the session. Any intermediate server in the Proxy-Chain MAY also release any resources. if necessary.RFC 3588 Diameter Based Protocol September 2003 8. On the other hand.2. For example. Standards Track [Page 105] . PXY > Session-Id > Result-Code } Origin-Host } Origin-Realm } User-Name ] Class ] Error-Message ] Error-Reporting-Host ] Failed-AVP ] Origin-State-Id ] Redirect-Host ] Redirect-Host-Usage ] ^ [ Redirect-Max-Cache-Time ] * [ Proxy-Info ] * [ AVP ] 8.5. Message Format <STA> ::= < < { { { [ * [ [ [ * [ [ * [ [ Diameter Header: 275. 
is sent by the Diameter Server to acknowledge the notification that the session has been terminated. indicated by the Command-Code set to 274 and the message flags’ ’R’ bit clear.1. an access device may honor ASRs from certain agents only. Result-Code is set to DIAMETER_UNKNOWN_SESSION_ID. If the access device does not stop the session for any other reason. Note that if the access device does stop the session upon receipt of an ASR. Result-Code is set to DIAMETER_SUCCESS. et al.5.2. REQ. the access device MUST respond with an Abort-Session-Answer. Message Format <ASR> ::= < < { { { { { [ [ * [ * [ * [ Diameter Header: 274. Abort-Session-Answer The Abort-Session-Answer (ASA). 8.RFC 3588 Diameter Based Protocol September 2003 device stops the session or not is implementation. The 274 the the Abort-Session-Request Abort-Session-Request (ASR). indicated by the Command-Code set to and the message flags’ ’R’ bit set. If the session is not currently active. For example. PXY > Session-Id > Origin-Host } Origin-Realm } Destination-Realm } Destination-Host } Auth-Application-Id } User-Name ] Origin-State-Id ] Proxy-Info ] Route-Record ] AVP ] 8. Calhoun.and/or configuration-dependent. may be sent by any server to access device that is providing session service. Result-Code is set to DIAMETER_UNABLE_TO_COMPLY. including a Result-Code AVP to indicate what action it took. If the session identified by Session-Id in the ASR was successfully terminated. and indicates the disposition of the request.5. In any case. is sent in response to the ASR. it issues an STR to the authorizing server (which may or may not be the agent issuing the ASR) just as it would if the session were terminated for any other reason. The Result-Code AVP MUST be present. Standards Track [Page 106] . to request that session identified by the Session-Id be stopped. to allow session state to be cleaned up globally. due to unanticipated shutdown of an access device. Standards Track [Page 107] . However. 
use of this mechanism across proxies is opportunistic rather than reliable. By including Origin-State-Id in CER/CEA messages.6. Calhoun. it may assume that the issuer has lost state since the previous message and that all sessions that were active under the lower Origin-StateId have been terminated. an access device also allows a server with which it communicates via proxy to make such a determination.. Thus. When a Diameter server receives an Origin-State-Id that is greater than the Origin-State-Id previously received from the same issuer. The Diameter server MAY clean up all session state associated with such lost sessions. and MAY also issues STRs for all such lost sessions that were authorized on upstream servers. but useful nonetheless. PXY > Session-Id > Result-Code } Origin-Host } Origin-Realm } User-Name ] Origin-State-Id ] Error-Message ] Error-Reporting-Host ] Failed-AVP ] Redirect-Host ] Redirect-Host-Usage ] Redirect-Max-Cache-Time ] Proxy-Info ] AVP ] 8. a server that is not directly connected with the access device will not discover that the access device has been restarted unless and until it receives a new request from the access device. By including Origin-State-Id in request messages.RFC 3588 Diameter Based Protocol September 2003 Message Format <ASA> ::= < < { { { [ [ [ [ * [ * [ [ [ * [ * [ Diameter Header: 274. et al. " character. and MUST contain the relevant application specific authentication AVPs that are needed by the Diameter server to authenticate the user. The request MUST include both the relevant application specific authentication information. authorized only or both. and authorization information necessary to identify the service being requested/offered.7. The Session-Id MUST begin with the sender’s identity encoded in the DiameterIdentity type (see Section 4. and MUST contain the application specific authorization AVPs that are necessary to identify the service being requested/offered. 
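The inference described in Section 8.6 is a simple monotonic comparison per issuer. The following Python sketch shows the bookkeeping a server might keep; the class and method names are illustrative, not defined by the RFC:

```python
# Sketch of Origin-State-Id tracking (RFC 3588 Section 8.6).
# A higher Origin-State-Id from the same issuer implies the issuer
# restarted, and all sessions under the lower value are gone.
# All names below are illustrative.

class OriginStateTracker:
    def __init__(self):
        self.last_seen = {}   # Origin-Host -> last Origin-State-Id seen
        self.sessions = {}    # Origin-Host -> {session_id: state_id}

    def register_session(self, origin_host, session_id, state_id):
        """Remember which Origin-State-Id a session was created under."""
        self.sessions.setdefault(origin_host, {})[session_id] = state_id

    def on_message(self, origin_host, state_id):
        """Return session ids inferred to be terminated by a restart."""
        prev = self.last_seen.get(origin_host)
        lost = []
        if prev is not None and state_id > prev:
            active = self.sessions.get(origin_host, {})
            lost = [sid for sid, s in active.items() if s < state_id]
            for sid in lost:
                del active[sid]   # clean up local session state
        self.last_seen[origin_host] = max(state_id, prev or 0)
        return lost
```

A server could issue STRs upstream for each session id returned by `on_message`, mirroring the cleanup the section permits.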
8.7.  Auth-Request-Type AVP

   The Auth-Request-Type AVP (AVP Code 274) is of type Enumerated and
   is included in application-specific auth requests to inform the
   peers whether a user is to be authenticated only, authorized only or
   both.  Note any value other than both MAY cause RADIUS
   interoperability issues.  The following values are defined:

   AUTHENTICATE_ONLY          1
      The request being sent is for authentication only, and MUST
      contain the relevant application specific authentication AVPs
      that are needed by the Diameter server to authenticate the user.

   AUTHORIZE_ONLY             2
      The request being sent is for authorization only, and MUST
      contain the application specific authorization AVPs that are
      necessary to identify the service being requested/offered.

   AUTHORIZE_AUTHENTICATE     3
      The request contains a request for both authentication and
      authorization.  The request MUST include both the relevant
      application specific authentication information, and
      authorization information necessary to identify the service being
      requested/offered.

8.8.  Session-Id AVP

   The Session-Id AVP (AVP Code 263) is of type UTF8String and is used
   to identify a specific session (see Section 8).  All messages
   pertaining to a specific session MUST include only one Session-Id
   AVP and the same value MUST be used throughout the life of a
   session.  When present, the Session-Id SHOULD appear immediately
   following the Diameter Header (see Section 3).

   The Session-Id MUST be globally and eternally unique, as it is meant
   to uniquely identify a user session without reference to any other
   information, and may be needed to correlate historical
   authentication information with accounting information.  The
   Session-Id includes a mandatory portion and an implementation-
   defined portion; a recommended format for the implementation-defined
   portion is outlined below.

   The Session-Id MUST begin with the sender's identity encoded in the
   DiameterIdentity type (see Section 4.4).  The remainder of the
   Session-Id is delimited by a ";" character, and MAY be any sequence
   that the client can guarantee to be eternally unique; however, the
   following format is recommended, (square brackets [] indicate an
   optional element):

      <DiameterIdentity>;<high 32 bits>;<low 32 bits>[;<optional value>]

   <high 32 bits> and <low 32 bits> are decimal representations of the
   high and low 32 bits of a monotonically increasing 64-bit value.
   The 64-bit value is rendered in two parts to simplify formatting by
   32-bit processors.  At startup, the high 32 bits of the 64-bit value
   MAY be initialized to the time, and the low 32 bits MAY be
   initialized to zero.  This will for practical purposes eliminate the
   possibility of overlapping Session-Ids after a reboot, assuming the
   reboot process takes longer than a second.  Alternatively, an
   implementation MAY keep track of the increasing value in
   non-volatile memory.

   <optional value> is implementation specific but may include a
   modem's device Id, a layer 2 address, timestamp, etc.

   Example, in which there is no optional value:
      accesspoint7.acme.com;1876543210;523

   Example, in which there is an optional value:
      accesspoint7.acme.com;1876543210;523;mobile@200.1.1.88

   The Session-Id is created by the Diameter application initiating the
   session, which in most cases is done by the client.  Note that a
   Session-Id MAY be used for both the authorization and accounting
   commands of a given application.

8.9.  Authorization-Lifetime AVP

   The Authorization-Lifetime AVP (AVP Code 291) is of type Unsigned32
   and contains the maximum number of seconds of service to be provided
   to the user before the user is to be re-authenticated and/or
   re-authorized.  Great care should be taken when the Authorization-
   Lifetime value is determined, since a low, non-zero, value could
   create significant Diameter traffic, which could congest both the
   network and the agents.

   A value of zero (0) means that immediate re-auth is necessary by the
   access device.  This is typically used in cases where multiple
   authentication methods are used, and a successful auth response with
   this AVP set to zero is used to signal that the next authentication
   method is to be immediately initiated.  The absence of this AVP, or
   a value of all ones (meaning all bits in the 32 bit field are set to
   one) means no re-auth is expected.

   If both this AVP and the Session-Timeout AVP are present in a
   message, the value of the latter MUST NOT be smaller than the
   Authorization-Lifetime AVP.

   An Authorization-Lifetime AVP MAY be present in re-authorization
   messages, and contains the number of seconds the user is authorized
   to receive service from the time the re-auth answer message is
   received by the access device.

   This AVP MAY be provided by the client as a hint of the maximum
   lifetime that it is willing to accept.  However, the server MAY
   return a value that is equal to, or smaller, than the one provided
   by the client.

8.10.  Auth-Grace-Period AVP

   The Auth-Grace-Period AVP (AVP Code 276) is of type Unsigned32 and
   contains the number of seconds the Diameter server will wait
   following the expiration of the Authorization-Lifetime AVP before
   cleaning up resources for the session.

8.11.  Auth-Session-State AVP

   The Auth-Session-State AVP (AVP Code 277) is of type Enumerated and
   specifies whether state is maintained for a particular session.  The
   client MAY include this AVP in requests as a hint to the server, but
   the value in the server's answer message is binding.  The following
   values are supported:

   STATE_MAINTAINED           0
      This value is used to specify that session state is being
      maintained, and the access device MUST issue a session
      termination message when service to the user is terminated.  This
      is the default value.

   NO_STATE_MAINTAINED        1
      This value is used to specify that no session termination
      messages will be sent by the access device upon expiration of the
      Authorization-Lifetime.

8.12.  Re-Auth-Request-Type AVP

   The Re-Auth-Request-Type AVP (AVP Code 285) is of type Enumerated
   and is included in application-specific auth answers to inform the
   client of the action expected upon expiration of the Authorization-
   Lifetime.  If the answer message contains an Authorization-Lifetime
   AVP with a positive value, the Re-Auth-Request-Type AVP MUST be
   present in an answer message.  The following values are defined:

   AUTHORIZE_ONLY             0
      An authorization only re-auth is expected upon expiration of the
      Authorization-Lifetime.  This is the default value if the AVP is
      not present in answer messages that include the Authorization-
      Lifetime.

   AUTHORIZE_AUTHENTICATE     1
      An authentication and authorization re-auth is expected upon
      expiration of the Authorization-Lifetime.

8.13.  Session-Timeout AVP

   The Session-Timeout AVP (AVP Code 27) [RADIUS] is of type Unsigned32
   and contains the maximum number of seconds of service to be provided
   to the user before termination of the session.  When both the
   Session-Timeout and the Authorization-Lifetime AVPs are present in
   an answer message, the former MUST be equal to or greater than the
   value of the latter.

   A session that terminates on an access device due to the expiration
   of the Session-Timeout MUST cause an STR to be issued, unless both
   the access device and the home server had previously agreed that no
   session termination messages would be sent (see Section 8.11).

   A Session-Timeout AVP MAY be present in a re-authorization answer
   message, and contains the remaining number of seconds from the
   beginning of the re-auth.

   A value of zero, or the absence of this AVP, means that this session
   has an unlimited number of seconds before termination.

   This AVP MAY be provided by the client as a hint of the maximum
   timeout that it is willing to accept.  However, the server MAY
   return a value that is equal to, or smaller, than the one provided
   by the client.

8.14.  User-Name AVP

   The User-Name AVP (AVP Code 1) [RADIUS] is of type UTF8String, which
   contains the User-Name, in a format consistent with the NAI
   specification [NAI].
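The recommended Session-Id layout in Section 8.8 is easy to implement with a 64-bit counter whose high half is seeded from the clock. The following Python sketch is illustrative; the class name and structure are assumptions, not mandated by the RFC:

```python
import time

# Sketch of the recommended Session-Id format (RFC 3588 Section 8.8):
#   <DiameterIdentity>;<high 32 bits>;<low 32 bits>[;<optional value>]
# Class and method names are illustrative.

class SessionIdGenerator:
    def __init__(self, diameter_identity, now=None):
        # High 32 bits seeded from startup time, low 32 bits from zero,
        # so Session-Ids do not overlap across reboots (assuming the
        # reboot process takes longer than a second).
        start = int(now if now is not None else time.time()) & 0xFFFFFFFF
        self.identity = diameter_identity
        self.counter = start << 32    # monotonically increasing 64-bit value

    def next_id(self, optional_value=None):
        high = (self.counter >> 32) & 0xFFFFFFFF
        low = self.counter & 0xFFFFFFFF
        self.counter += 1
        parts = [self.identity, str(high), str(low)]
        if optional_value is not None:
            parts.append(optional_value)
        return ";".join(parts)
```

Keeping the counter in non-volatile memory, as the section also permits, would replace the time-based seeding with a value restored at startup.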
8.15.  Termination-Cause AVP

   The Termination-Cause AVP (AVP Code 295) is of type Enumerated, and
   is used to indicate the reason why a session was terminated on the
   access device.  The following values are defined:

   DIAMETER_LOGOUT            1
      The user initiated a disconnect.

   DIAMETER_SERVICE_NOT_PROVIDED  2
      This value is used when the user disconnected prior to the
      receipt of the authorization answer message.

   DIAMETER_BAD_ANSWER        3
      This value indicates that the authorization answer received by
      the access device was not processed successfully.

   DIAMETER_ADMINISTRATIVE    4
      The user was not granted access, or was disconnected, due to
      administrative reasons, such as the receipt of a Abort-Session-
      Request message.

   DIAMETER_LINK_BROKEN       5
      The communication to the user was abruptly disconnected.

   DIAMETER_AUTH_EXPIRED      6
      The user's access was terminated since its authorized session
      time has expired.

   DIAMETER_USER_MOVED        7
      The user is receiving services from another access device.

   DIAMETER_SESSION_TIMEOUT   8
      The user's session has timed out, and service has been
      terminated.

8.16.  Origin-State-Id AVP

   The Origin-State-Id AVP (AVP Code 278), of type Unsigned32, is a
   monotonically increasing value that is advanced whenever a Diameter
   entity restarts with loss of previous state, for example upon
   reboot.  Origin-State-Id MAY be included in any Diameter message,
   including CER.

   A Diameter entity issuing this AVP MUST create a higher value for
   this AVP each time its state is reset.  A Diameter entity MAY set
   Origin-State-Id to the time of startup, or it MAY use an
   incrementing counter retained in non-volatile memory across
   restarts.

   The Origin-State-Id, if present, MUST reflect the state of the
   entity indicated by Origin-Host.  If a proxy modifies Origin-Host,
   it MUST either remove Origin-State-Id or modify it appropriately as
   well.

   Typically, Origin-State-Id is used by an access device that always
   starts up with no active sessions; that is, any session active prior
   to restart will have been lost.  By including Origin-State-Id in a
   message, it allows other Diameter entities to infer that sessions
   associated with a lower Origin-State-Id are no longer active.  If an
   access device does not intend for such inferences to be made, it
   MUST either not include Origin-State-Id in any message, or set its
   value to 0.

8.17.  Session-Binding AVP

   The Session-Binding AVP (AVP Code 270) is of type Unsigned32, and
   MAY be present in application-specific authorization answer
   messages.  If present, this AVP MAY inform the Diameter client that
   all future application-specific re-auth messages for this session
   MUST be sent to the same authorization server.  This AVP MAY also
   specify that a Session-Termination-Request message for this session
   MUST be sent to the same authorizing server.

   This field is a bit mask, and the following bits have been defined:

   RE_AUTH                    1
      When set, future re-auth messages for this session MUST NOT
      include the Destination-Host AVP.  When cleared, the default
      value, the Destination-Host AVP MUST be present in all re-auth
      messages for this session.

   STR                        2
      When set, the STR message for this session MUST NOT include the
      Destination-Host AVP.  When cleared, the default value, the
      Destination-Host AVP MUST be present in the STR message for this
      session.

   ACCOUNTING                 4
      When set, all accounting messages for this session MUST NOT
      include the Destination-Host AVP.  When cleared, the default
      value, the Destination-Host AVP, if known, MUST be present in all
      accounting messages for this session.

8.18.  Session-Server-Failover AVP

   The Session-Server-Failover AVP (AVP Code 271) is of type
   Enumerated, and MAY be present in application-specific authorization
   answer messages that either do not include the Session-Binding AVP
   or include the Session-Binding AVP with any of the bits set to a
   zero value.  If present, this AVP MAY inform the Diameter client
   that if a re-auth or STR message fails due to a delivery problem,
   the Diameter client SHOULD issue a subsequent message without the
   Destination-Host AVP.  When absent, the default value is
   REFUSE_SERVICE.

   The following values are supported:

   REFUSE_SERVICE             0
      If either the re-auth or the STR message delivery fails,
      terminate service with the user, and do not attempt any
      subsequent attempts.

   TRY_AGAIN                  1
      If either the re-auth or the STR message delivery fails, resend
      the failed message without the Destination-Host AVP present.  If
      the second delivery fails for re-auth, terminate the session.  If
      the second delivery fails for STR, terminate the session.

   ALLOW_SERVICE              2
      If re-auth message delivery fails, assume that re-authorization
      succeeded.  If STR message delivery fails, terminate the session.

   TRY_AGAIN_ALLOW_SERVICE    3
      If either the re-auth or the STR message delivery fails, resend
      the failed message without the Destination-Host AVP present.  If
      the second delivery fails for re-auth, assume re-authorization
      succeeded.  If the second delivery fails for STR, terminate the
      session.

8.19.  Multi-Round-Time-Out AVP

   The Multi-Round-Time-Out AVP (AVP Code 272) is of type Unsigned32,
   and SHOULD be present in application-specific authorization answer
   messages whose Result-Code AVP is set to DIAMETER_MULTI_ROUND_AUTH.
   This AVP contains the maximum number of seconds that the access
   device MUST provide the user in responding to an authentication
   request.

8.20.  Class AVP

   The Class AVP (AVP Code 25) is of type OctetString and is used by
   Diameter servers to return state information to the access device.
   When one or more Class AVPs are present in application-specific
   authorization answer messages, they MUST be present in subsequent
   re-authorization, session termination and accounting messages.
   Class AVPs found in a re-authorization answer message override the
   ones found in any previous authorization answer message.  Diameter
   server implementations SHOULD NOT return Class AVPs that require
   more than 4096 bytes of storage on the Diameter client.  A Diameter
   client that receives Class AVPs whose size exceeds local available
   storage MUST terminate the session.

8.21.  Event-Timestamp AVP

   The Event-Timestamp (AVP Code 55) is of type Time, and MAY be
   included in an Accounting-Request and Accounting-Answer messages to
   record the time that the reported event occurred, in seconds since
   January 1, 1900 00:00 UTC.

9.1.  Server Directed Model

   The server directed model means that the device generating the
   accounting data gets information from either the authorization
   server (if contacted) or the accounting server regarding the way
   accounting data shall be forwarded.  This information includes
   accounting record timeliness requirements.

   As discussed in [ACCMGMT], real-time transfer of accounting records
   is a requirement, such as the need to perform credit limit checks
   and fraud detection.  Note that batch accounting is not a
   requirement, and is therefore not supported by Diameter.  Should
   batched accounting be required in the future, a new Diameter
   application will need to be created, or it could be handled using
   another protocol.  Note, however, that even if at the Diameter layer
   accounting requests are processed one by one, transport protocols
   used under Diameter typically batch several requests in the same
   packet under heavy traffic conditions.  This may be sufficient for
   many applications.

   The authorization server (chain) directs the selection of proper
   transfer strategy, based on its knowledge of the user and
   relationships of roaming partnerships.  The server (or agents) uses
   the Acct-Interim-Interval and Accounting-Realtime-Required AVPs to
   control the operation of the Diameter peer operating as a client.
   The Acct-Interim-Interval AVP, when present, instructs the Diameter
   node acting as a client to produce accounting records continuously
   even during a session.  The Accounting-Realtime-Required AVP is used
   to control the behavior of the client when the transfer of
   accounting records from the Diameter client is delayed or
   unsuccessful.

   The Diameter accounting server MAY override the interim interval or
   the realtime requirements by including the Acct-Interim-Interval or
   Accounting-Realtime-Required AVP in the Accounting-Answer message.
   When one of these AVPs is present, the latest value received SHOULD
   be used in further accounting activities for the same session.

9.2.  Protocol Messages

   A Diameter node that receives a successful authentication and/or
   authorization message from the Home AAA server MUST collect
   accounting information for the session.  The Accounting-Request
   message is used to transmit the accounting information to the Home
   AAA server, which MUST reply with the Accounting-Answer message to
   confirm reception.  The Accounting-Answer message includes the
   Result-Code AVP, which MAY indicate that an error was present in the
   accounting message.  A rejected Accounting-Request message MAY cause
   the user's session to be terminated, depending on the value of the
   Accounting-Realtime-Required AVP received earlier for the session in
   question.

   Each Diameter Accounting protocol message MAY be compressed, in
   order to reduce network bandwidth usage.  If IPsec and IKE are used
   to secure the Diameter session, then IP compression [IPComp] MAY be
   used and IKE [IKE] MAY be used to negotiate the compression
   parameters.  If TLS is used to secure the Diameter session, then TLS
   compression [TLS] MAY be used.

9.3.  Application Document Requirements

   Each Diameter application (e.g., NASREQ, MobileIP) MUST define the
   Service-Specific AVPs that MUST be present in the Accounting-Request
   message in a section entitled "Accounting AVPs".  The application
   MUST assume that the AVPs described in this document will be present
   in all Accounting messages, so only their respective service-
   specific AVPs need to be defined in this section.

9.4.  Fault Resilience

   Diameter Base protocol mechanisms are used to overcome small message
   loss and network faults of temporary nature.

   Diameter peers acting as clients MUST implement the use of failover
   to guard against server failures and certain network failures.
   Diameter peers acting as agents or related off-line processing
   systems MUST detect duplicate accounting records caused by the
   sending of the same record to several servers and duplication of
   messages in transit.  This detection MUST be based on the inspection
   of the Session-Id and Accounting-Record-Number AVP pairs.  Appendix
   C discusses duplicate detection needs and implementation issues.

   Diameter clients MAY have non-volatile memory for the safe storage
   of accounting records over reboots or extended network failures,
   network partitions, and server failures.  If such memory is
   available, the client SHOULD store new accounting records there as
   soon as the records are created and until a positive acknowledgement
   of their reception from the Diameter Server has been received.  Upon
   a reboot, the client MUST start sending the records in the
   non-volatile memory to the accounting server with appropriate
   modifications in termination cause, session length, and other
   relevant information in the records.

   The client MAY remove the oldest, undelivered or yet unacknowledged
   accounting data if it runs out of resources such as memory.  It is
   an implementation dependent matter for the client to accept new
   sessions under this condition.  The client SHOULD NOT remove the
   accounting data from any of its memory areas before the correct
   Accounting-Answer has been received.

   A further application of this protocol may include AVPs to control
   how many accounting records may at most be stored in the Diameter
   client without committing them to the non-volatile memory or
   transferring them to the Diameter server.
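Duplicate detection in Section 9.4 keys on the (Session-Id, Accounting-Record-Number) pair. A minimal Python sketch of that inspection, with illustrative names:

```python
# Sketch of accounting duplicate detection (RFC 3588 Section 9.4):
# agents MUST detect duplicates by inspecting the Session-Id and
# Accounting-Record-Number AVP pair.  Names here are illustrative.

class DuplicateDetector:
    def __init__(self):
        self.seen = set()   # {(session_id, record_number)}

    def is_duplicate(self, session_id, record_number):
        """Record the pair; return True if it was already seen."""
        key = (session_id, record_number)
        if key in self.seen:
            return True
        self.seen.add(key)
        return False
```

A production agent would bound this set (e.g., by expiring old sessions), but the matching logic is just this pair lookup.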
9.5.  Accounting Records

   In all accounting records, the Session-Id AVP MUST be present; the
   User-Name AVP MUST be present if it is available to the Diameter
   client.  If strong authentication across agents is required,
   end-to-end security may be used for authentication purposes.

   Different types of accounting records are sent depending on the
   actual type of accounted service and the authorization server's
   directions for interim accounting.  If the accounted service is a
   one-time event, meaning that the start and stop of the event are
   simultaneous, then the Accounting-Record-Type AVP MUST be present
   and set to the value EVENT_RECORD.

   If the accounted service is of a measurable length, then the AVP
   MUST use the values START_RECORD, STOP_RECORD, and possibly,
   INTERIM_RECORD.  If the authorization server has not directed
   interim accounting to be enabled for the session, two accounting
   records MUST be generated for each service of type session.  When
   the initial Accounting-Request for a given session is sent, the
   Accounting-Record-Type AVP MUST be set to the value START_RECORD.
   When the last Accounting-Request is sent, the value MUST be
   STOP_RECORD.

   If the authorization server has directed interim accounting to be
   enabled, the Diameter client MUST produce additional records between
   the START_RECORD and STOP_RECORD, marked INTERIM_RECORD.  The
   production of these records is directed by Acct-Interim-Interval as
   well as any re-authentication or re-authorization of the session.
   The Diameter client MUST overwrite any previous interim accounting
   records that are locally stored for delivery, if a new record is
   being generated for the same session.  This ensures that only one
   pending interim record can exist on an access device for any given
   session.

   A particular value of Accounting-Sub-Session-Id MUST appear only in
   one sequence of accounting records from a Diameter client, except
   for the purposes of retransmission.  The one sequence that is sent
   MUST be either one record with Accounting-Record-Type AVP set to the
   value EVENT_RECORD, or several records starting with one having the
   value START_RECORD, followed by zero or more INTERIM_RECORD and a
   single STOP_RECORD.  A particular Diameter application specification
   MUST define the type of sequences that MUST be used.

9.6.  Correlation of Accounting Records

   The Diameter protocol's Session-Id AVP, which is globally unique
   (see Section 8.8), is used during the authorization phase to
   identify a particular session.  Services that do not require any
   authorization still use the Session-Id AVP to identify sessions.
   Accounting messages MAY use a different Session-Id from that sent in
   authorization messages.  Specific applications MAY require a
   different Session-Id for accounting messages.

   However, there are certain applications that require multiple
   accounting sub-sessions.  Such applications would send messages with
   a constant Session-Id AVP, but a different Accounting-Sub-Session-Id
   AVP.  In these cases, correlation is performed using the Session-Id.
   It is important to note that receiving a STOP_RECORD with no
   Accounting-Sub-Session-Id AVP when sub-sessions were originally used
   in the START_RECORD messages implies that all sub-sessions are
   terminated.

   Furthermore, there are certain applications where a user receives
   service from different access devices (e.g., Mobile IPv4), each with
   their own unique Session-Id.  In such cases, the Acct-Multi-
   Session-Id AVP is used for correlation.
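The allowed record sequences in Section 9.5 (a single EVENT_RECORD, or a START_RECORD followed by zero or more INTERIM_RECORDs and one STOP_RECORD) can be checked mechanically. An illustrative Python sketch; the function name is not from the RFC:

```python
# Sketch validating an accounting record sequence per RFC 3588
# Section 9.5: either one EVENT_RECORD, or START_RECORD followed by
# zero or more INTERIM_RECORDs and a single STOP_RECORD.

def is_valid_sequence(records):
    if records == ["EVENT_RECORD"]:
        return True
    if len(records) < 2:
        return False
    if records[0] != "START_RECORD" or records[-1] != "STOP_RECORD":
        return False
    # Everything between start and stop must be interim records.
    return all(r == "INTERIM_RECORD" for r in records[1:-1])
```

An off-line processing system could apply such a check per (Session-Id, Accounting-Sub-Session-Id) sequence before billing.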
Calhoun. The selection of whether to use INTERIM_RECORD records is done by the Acct-Interim-Interval AVP. The client uses information in this AVP to decide how and when to produce accounting records. The inclusion of the AVP with Value field set to a non-zero value means that INTERIM_RECORD records MUST be produced between the START_RECORD and STOP_RECORD records. known as the client. STOP_RECORD 4 An Accounting Stop Record is sent to terminate an accounting session and contains cumulative accounting information relevant to the existing session. Further. 2. The Value field of this AVP is the nominal interval between these records in seconds. and STOP_RECORD are produced. 9. START_RECORD. based on the needs of the home-organization. With different values in this AVP. service sessions can result in one. The Diameter node that originates the accounting information. as appropriate for the service. two. MUST produce the first INTERIM_RECORD record roughly at the time when this nominal interval has elapsed from the START_RECORD. The omission of the Acct-Interim-Interval AVP or its inclusion with Value field set to 0 means that EVENT_RECORD. or two+N accounting records. additional interim record triggers MAY be defined by application-specific Diameter applications. Interim Accounting Records SHOULD be sent every time a re-authentication or re-authorization occurs. Acct-Interim-Interval The Acct-Interim-Interval AVP (AVP Code 85) is of type Unsigned32 and is sent from the Diameter home authorization server to the Diameter client.2.RFC 3588 Diameter Based Protocol September 2003 INTERIM_RECORD 3 An Interim Accounting Record contains cumulative accounting information for an existing accounting session. The following accounting record production behavior is directed by the inclusion of this AVP: 1. the next one again as the interval has elapsed once more. Standards Track [Page 122] .8. 
The client MUST ensure that the interim record production times are randomized so that large accounting message storms are not created either among records or around a common service start time.

9.8.3. Accounting-Record-Number AVP

The Accounting-Record-Number AVP (AVP Code 485) is of type Unsigned32 and identifies this record within one session. As Session-Id AVPs are globally unique, the combination of Session-Id and Accounting-Record-Number AVPs is also globally unique, and can be used in matching accounting records with confirmations. An easy way to produce unique numbers is to set the value to 0 for records of type EVENT_RECORD and START_RECORD, and set the value to 1 for the first INTERIM_RECORD, 2 for the second, and so on until the value for STOP_RECORD is one more than for the last INTERIM_RECORD.

9.8.4. Acct-Session-Id AVP

The Acct-Session-Id AVP (AVP Code 44) is of type OctetString and is only used when RADIUS/Diameter translation occurs.

9.8.5. Acct-Multi-Session-Id AVP

The Acct-Multi-Session-Id AVP (AVP Code 50) is of type UTF8String, following the format specified in Section 8.8.

9.8.6. Accounting-Sub-Session-Id AVP

The Accounting-Sub-Session-Id AVP (AVP Code 287) is of type Unsigned64 and contains the accounting sub-session identifier. The combination of the Session-Id and this AVP MUST be unique per sub-session, and the value of this AVP MUST be monotonically increased by one for all new sub-sessions. The absence of this AVP implies no sub-sessions are in use.

9.8.7. Accounting-Realtime-Required AVP

The Accounting-Realtime-Required AVP (AVP Code 483) is of type Enumerated and is sent from the Diameter home authorization server to the Diameter client, or in the Accounting-Answer from the accounting server. This AVP MAY be returned by the Diameter server in an authorization answer.
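The "easy way to produce unique numbers" suggested for Accounting-Record-Number above can be sketched as follows (an illustrative sketch only; the function name is invented):

```python
def record_numbers(record_types):
    """Assign Accounting-Record-Number values per the scheme suggested
    in Section 9.8.3: 0 for EVENT_RECORD and START_RECORD, 1..N for
    the INTERIM_RECORDs, and last interim + 1 for the STOP_RECORD."""
    numbers = []
    interim_count = 0
    for rtype in record_types:
        if rtype in ("EVENT_RECORD", "START_RECORD"):
            numbers.append(0)
        elif rtype == "INTERIM_RECORD":
            interim_count += 1
            numbers.append(interim_count)
        elif rtype == "STOP_RECORD":
            numbers.append(interim_count + 1)
    return numbers
```

Combined with the globally unique Session-Id, each (Session-Id, Accounting-Record-Number) pair is then globally unique, which is what makes matching records with confirmations possible.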
This AVP contains the contents of the RADIUS Acct-Session-Id attribute. The Acct-Multi-Session-Id AVP is used to link together multiple related accounting sessions, where each session would have a unique Session-Id, but the same Acct-Multi-Session-Id AVP.

The absence of the Accounting-Sub-Session-Id AVP implies no sub-sessions are in use, with the exception of an Accounting-Request whose Accounting-Record-Type is set to STOP_RECORD. A STOP_RECORD message with no Accounting-Sub-Session-Id AVP present will signal the termination of all sub-sessions for a given Session-Id.

The client uses information in the Accounting-Realtime-Required AVP to decide what to do if the sending of accounting records to the accounting server has been temporarily prevented due to, for instance, a network problem. The following values are defined:

DELIVER_AND_GRANT          1
   The AVP with Value field set to DELIVER_AND_GRANT means that the
   service MUST only be granted as long as there is a connection to an
   accounting server. Note that the set of alternative accounting
   servers are treated as one server in this sense. Having to move the
   accounting record stream to a backup server is not a reason to
   discontinue the service to the user.

GRANT_AND_STORE            2
   The AVP with Value field set to GRANT_AND_STORE means that service
   SHOULD be granted if there is a connection, or as long as records
   can still be stored as described in Section 9.4. This is the
   default behavior if the AVP isn't included in the reply from the
   authorization server.

GRANT_AND_LOSE             3
   The AVP with Value field set to GRANT_AND_LOSE means that service
   SHOULD be granted even if the records can not be delivered or
   stored.
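The three Accounting-Realtime-Required policies can be modeled as a small decision function. This is an illustrative sketch only: the function name and the boolean inputs (`server_reachable`, `can_store`) are simplifications invented for the example, not protocol elements.

```python
def service_decision(realtime_required, server_reachable, can_store):
    """Decide whether to grant service when accounting delivery is
    impaired, following the Accounting-Realtime-Required values."""
    if realtime_required == "DELIVER_AND_GRANT":     # value 1
        # Service only while an accounting server is reachable.
        return server_reachable
    if realtime_required == "GRANT_AND_STORE":       # value 2
        # Service while records can be delivered or locally stored.
        return server_reachable or can_store
    if realtime_required == "GRANT_AND_LOSE":        # value 3
        # Service even if records are lost.
        return True
    # AVP absent: GRANT_AND_STORE is the default behavior.
    return server_reachable or can_store
```

Note that under DELIVER_AND_GRANT the set of alternative accounting servers counts as one server, so failover to a backup alone would not make `server_reachable` false.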
10. AVP Occurrence Table

The following tables present the AVPs defined in this document, and specify in which Diameter messages they MAY, or MAY NOT, be present. Note that AVPs that can only be present within a Grouped AVP are not represented in these tables. The tables use the following symbols:

0     The AVP MUST NOT be present in the message.
0+    Zero or more instances of the AVP MAY be present in the message.
0-1   Zero or one instance of the AVP MAY be present in the message.
      It is considered an error if there are more than one instance of
      the AVP.
1     One instance of the AVP MUST be present in the message.
1+    At least one instance of the AVP MUST be present in the message.

10.1. Base Protocol Command AVP Table

The table in this section is limited to the non-accounting Command Codes defined in this specification. Only the final row of the table survives in this extract:

--------------------+---+---+---+---+---+---+---+---+---+---+---+---+
Vendor-Specific     |0+ |0+ |0  |0  |0  |0  |0  |0  |0  |0  |0  |0  |
 Application-Id     |   |   |   |   |   |   |   |   |   |   |   |   |
--------------------+---+---+---+---+---+---+---+---+---+---+---+---+

10.2. Accounting AVP Table

The table in this section is used to represent which AVPs defined in this document are to be present in the Accounting messages. These AVP occurrence requirements are guidelines, which may be expanded and/or overridden by application-specific requirements in the Diameter applications documents.

For registration requests requiring review, the request is posted for comment and review to the AAA WG mailing list (or, if it has been disbanded, a successor designated by the Area Director). Before a period of 30 days has passed, the Designated Expert will either approve or deny the registration request and publish a notice of the decision to the AAA WG mailing list or its successor. A denial notice must be justified by an explanation and, in the cases where it is possible, concrete suggestions on how the request can be modified so as to become acceptable. For Designated Expert with Specification Required, the request MUST include a pointer to a public specification. The AVP Codes, and sometimes also possible values in an AVP, are controlled and maintained by IANA.
11. IANA Considerations

This section provides guidance to the Internet Assigned Numbers Authority (IANA) regarding registration of values related to the Diameter protocol, in accordance with BCP 26 [IANA]. It explains the criteria to be used by the IANA for assignment of numbers within namespaces defined within this document. The following policies are used here with the meanings defined in BCP 26: "Private Use", "First Come First Served", "Expert Review", "Specification Required", "IETF Consensus", "Standards Action". Diameter is not intended as a general purpose protocol, and allocations SHOULD NOT be made for purposes unrelated to authentication, authorization or accounting.

For registration requests where a Designated Expert should be consulted, the responsible IESG area director should appoint the Designated Expert.

11.1. AVP Header

As defined in Section 4, the AVP header contains three fields that require IANA namespace management: the AVP Code, Vendor-ID and Flags field.

11.1.1. AVP Codes

The AVP Code namespace is used to identify attributes. There are multiple namespaces. Vendors can have their own AVP Codes namespace, which will be identified by their Vendor-ID (also known as Enterprise-Number), and they control the assignments of their vendor-specific AVP codes within their own namespace. The absence of a Vendor-ID, or a Vendor-ID value of zero (0), identifies the IETF IANA controlled AVP Codes namespace.

11.1.2. AVP Flags

There are 8 bits in the AVP Flags field of the AVP header, defined in Section 4. This document assigns bit 0 ('V'endor Specific), bit 1 ('M'andatory) and bit 2 ('P'rotected). The remaining bits should only be assigned via a Standards Action [IANA].
AVP Code 0 is not used. AVP Codes 1-255 are managed separately as RADIUS Attribute Types [RADTYPE]. This document defines the AVP Codes 257-274, 276-285, 287, 291-300, 480, 483 and 485-486. See Section 4.5 for the assignment of the namespace in this specification. AVPs may be allocated following Designated Expert with Specification Required [IANA]. Release of blocks of AVPs (more than 3 at a time for a given purpose) should require IETF Consensus.

Note that Diameter defines a mechanism for Vendor-Specific AVPs, where the Vendor-Id field in the AVP header is set to a non-zero value. Vendor-Specific AVP codes are for Private Use and should be encouraged instead of allocation of global attribute types, for functions specific only to one vendor's implementation of Diameter, where no interoperability is deemed useful. Where a Vendor-Specific AVP is implemented by more than one vendor, allocation of global AVPs should be encouraged instead.

11.2. Diameter Header

As defined in Section 3, the Diameter header contains two fields that require IANA namespace management: the Command Code and Command Flags.

11.2.1. Command Codes

The Command Code namespace is used to identify Diameter commands. The values 0-255 are reserved for RADIUS backward compatibility, and are defined as "RADIUS Packet Type Codes" in [RADTYPE]. Values 256-16,777,213 are for permanent, standard commands, allocated by IETF Consensus [IANA]. This document defines the Command Codes 257, 258, 271, 274-275, 280 and 282. See Section 3.1 for the assignment of the namespace in this specification.

The values 16,777,214 and 16,777,215 (hexadecimal values 0xfffffe - 0xffffff) are reserved for experimental commands. As these codes are only for experimental and testing purposes, no guarantee is made for interoperability between Diameter peers using experimental commands, as outlined in [IANA-EXP].
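The Command Code carve-up above can be summarized with a small classifier. This is an illustrative sketch, not part of the specification; the function name and return strings are invented:

```python
def classify_command_code(code):
    """Classify a Diameter Command Code per the namespace split
    described in Section 11.2.1."""
    if 0 <= code <= 255:
        # Reserved for RADIUS backward compatibility ("RADIUS Packet
        # Type Codes").
        return "RADIUS backward compatibility"
    if 256 <= code <= 16_777_213:
        # Permanent, standard commands allocated by IETF Consensus.
        return "permanent, standard commands (IETF Consensus)"
    if code in (16_777_214, 16_777_215):  # 0xfffffe - 0xffffff
        # Experimental/testing only; no interoperability guarantee.
        return "experimental/testing"
    return "invalid"
```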
11.2.2. Command Flags

There are eight bits in the Command Flags field of the Diameter header. This document assigns bit 0 ('R'equest), bit 1 ('P'roxy), bit 2 ('E'rror) and bit 3 ('T'). Bits 4 through 7 MUST only be assigned via a Standards Action [IANA].

11.3. Application Identifiers

As defined in Section 2.4, the Application Identifier is used to identify a specific Diameter Application. There are standards-track application ids and vendor specific application ids. IANA [IANA] has assigned the range 0x00000001 to 0x00ffffff for standards-track applications, and 0x01000000 - 0xfffffffe for vendor specific applications, on a first-come, first-served basis. The following values are allocated:

   Diameter Common Messages     0
   NASREQ                       1 [NASREQ]
   Mobile-IP                    2 [DIAMMIP]
   Diameter Base Accounting     3
   Relay                        0xffffffff

Assignment of standards-track application IDs are by Designated Expert with Specification Required [IANA]. Both Application-Id and Acct-Application-Id AVPs use the same Application Identifier space. Vendor-Specific Application Identifiers are assigned on a First Come, First Served basis by IANA.

11.4. AVP Values

Certain AVPs in Diameter define a list of values with various meanings. For attributes other than those specified in this section, adding additional values to the list can be done on a First Come, First Served basis by IANA.

11.4.1. Result-Code AVP Values

As defined in Section 7.1, the Result-Code AVP (AVP Code 268) defines the values 1001, 2001-2002, 3001-3010, 4001-4002 and 5001-5017. All remaining values are available for assignment via IETF Consensus [IANA].
11.4.2. Accounting-Record-Type AVP Values

As defined in Section 9.8.1, the Accounting-Record-Type AVP (AVP Code 480) defines the values 1-4. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.3. Termination-Cause AVP Values

As defined in Section 8.15, the Termination-Cause AVP (AVP Code 295) defines the values 1-8. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.4. Redirect-Host-Usage AVP Values

As defined in Section 6.13, the Redirect-Host-Usage AVP (AVP Code 261) defines the values 0-5. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.5. Session-Server-Failover AVP Values

As defined in Section 8.18, the Session-Server-Failover AVP (AVP Code 271) defines the values 0-3. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.6. Session-Binding AVP Values

As defined in Section 8.17, the Session-Binding AVP (AVP Code 270) defines the bits 1-4. All remaining bits are available for assignment via IETF Consensus [IANA].

11.4.7. Disconnect-Cause AVP Values

As defined in Section 5.4.3, the Disconnect-Cause AVP (AVP Code 273) defines the values 0-2. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.8. Auth-Request-Type AVP Values

As defined in Section 8.7, the Auth-Request-Type AVP (AVP Code 274) defines the values 1-3. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.9. Auth-Session-State AVP Values

As defined in Section 8.11, the Auth-Session-State AVP (AVP Code 277) defines the values 0-1. All remaining values are available for assignment via IETF Consensus [IANA].
11.4.10. Re-Auth-Request-Type AVP Values

As defined in Section 8.12, the Re-Auth-Request-Type AVP (AVP Code 285) defines the values 0-1. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.11. Accounting-Realtime-Required AVP Values

As defined in Section 9.8.7, the Accounting-Realtime-Required AVP (AVP Code 483) defines the values 1-3. All remaining values are available for assignment via IETF Consensus [IANA].

11.4.12. Inband-Security-Id AVP Values

As defined in Section 6.10, the Inband-Security-Id AVP (AVP Code 299) defines the values 0-1. All remaining values are available for assignment via IETF Consensus [IANA].

11.5. Diameter TCP/SCTP Port Numbers

The IANA has assigned TCP and SCTP port number 3868 to Diameter.

11.6. NAPTR Service Fields

The registration in the RFC MUST include the following information:

Service Field: The service field being registered. An example for a new fictitious transport protocol called NCTP might be "AAA+D2N".

Protocol: The specific transport protocol associated with that service field. This MUST include the name and acronym for the protocol, along with reference to a document that describes the transport protocol. For example: "New Connectionless Transport Protocol (NCTP), RFC 5766".

Name and Contact Information: The name, address, email address and telephone number for the person performing the registration.

The following values have been placed into the registry:

   Services Field     Protocol
   AAA+D2T            TCP
   AAA+D2S            SCTP

12. Diameter protocol related configurable parameters

This section contains the configurable parameters that are found throughout this document:
A statically configured Diameter peer would require that either the IP address or the fully qualified domain name (FQDN) be supplied. Diameter implementations MUST use transmission-level security of some kind (IPsec or TLS) on each connection.2 for more details on IPsec and TLS usage. end-to-end security is needed. TLS is recommended. This security mechanism is acceptable in environments where there is no untrusted third party agent.1 and 13. all further messages will be sent via TLS. See Sections 13. Diameter clients. In other situations. The routing table MAY also include a "default route". If the TLS handshake is successful. both ends move to the closed state. which would then be used to resolve through DNS. Tc timer The Tc timer controls the frequency that transport connection attempts are done to a peer with whom no active transport connection exists. Standards Track [Page 132] . The recommended value is 30 seconds. such as Network Access Servers (NASes) and Mobility Agents MUST support IP Security [SECARCH] and MAY support TLS [TLS]. Security Considerations The Diameter base protocol assumes that messages are secured by using either IPSec or TLS. For TLS usage. For NAS devices without certificate support. integrity protection and confidentiality. to bring up another IKE Phase 2 SA to protect it. The receipt of an IKE Phase 2 delete message SHOULD NOT be interpreted as a reason for tearing down a Diameter connection. This avoids the potential for continually bringing connections up and down. and IKE Main Mode SHOULD NOT be used.1. it is preferable to leave the connection up. When digital signatures are used for authentication. This allows the Phase 2 security association to correspond to specific TCP and SCTP connections. when used in conformant implementations. and key management. either IKE Main Mode or IKE Aggressive Mode MAY be used. Conformant implementations MUST support both IKE Main Mode and Aggressive Mode. 
13.1. IPsec Usage

All Diameter implementations MUST support IPsec ESP [IPsec] in transport mode with non-null encryption and authentication algorithms to provide per-packet authentication, integrity protection and confidentiality, and MUST support the replay protection mechanisms of IPsec.

Diameter implementations MUST support IKE for peer authentication, negotiation of security associations, and key management, using the IPsec DOI [IPSECDOI]. Diameter implementations MUST support peer authentication using a pre-shared key, and MAY support certificate-based peer authentication using digital signatures. Peer authentication using the public key encryption methods outlined in IKE's Sections 5.2 and 5.3 [IKE] SHOULD NOT be used.

Conformant implementations MUST support both IKE Main Mode and Aggressive Mode. When pre-shared keys are used for authentication, IKE Aggressive Mode SHOULD be used, and IKE Main Mode SHOULD NOT be used. When digital signatures are used for authentication, either IKE Main Mode or IKE Aggressive Mode MAY be used.

When digital signatures are used to achieve authentication, an IKE negotiator SHOULD use IKE Certificate Request Payload(s) to specify the certificate authority (or authorities) that are trusted in accordance with its local policy. IKE negotiators SHOULD use pertinent certificate revocation checks before accepting a PKI certificate for use in IKE's authentication procedures.

The Phase 2 Quick Mode exchanges used to negotiate protection for Diameter connections MUST explicitly carry the Identity Payload fields (IDci and IDcr). The DOI provides for several types of identification data. However, when used in conformant implementations, each ID Payload MUST carry a single IP address and a single non-zero port number, and MUST NOT use the IP Subnet or IP Address Range formats. This allows the Phase 2 security association to correspond to specific TCP and SCTP connections.

Since IPsec acceleration hardware may only be able to handle a limited number of active IKE Phase 2 SAs, Phase 2 delete messages may be sent for idle SAs, as a means of keeping the number of active Phase 2 SAs to a minimum. The receipt of an IKE Phase 2 delete message SHOULD NOT be interpreted as a reason for tearing down a Diameter connection. Rather, it is preferable to leave the connection up, and if additional traffic is sent on it, to bring up another IKE Phase 2 SA to protect it. This avoids the potential for continually bringing connections up and down.
13.2. TLS Usage

A Diameter node that initiates a connection to another Diameter node acts as a TLS client according to [TLS], and a Diameter node that accepts a connection acts as a TLS server. Diameter nodes implementing TLS for security MUST mutually authenticate as part of TLS session establishment. In order to ensure mutual authentication, the Diameter node acting as TLS server must request a certificate from the Diameter node acting as TLS client, and the Diameter node acting as TLS client MUST be prepared to supply a certificate on request.

13.3. Peer-to-Peer Considerations

As with any peer-to-peer protocol, proper configuration of the trust model within a Diameter peer is essential to security. When certificates are used, it is necessary to configure the root certificate authorities trusted by the Diameter peer. These root CAs are likely to be unique to Diameter usage and distinct from the root CAs that might be trusted for other purposes such as Web browsing. In general, it is expected that those root CAs will be configured so as to reflect the business relationships between the organization hosting the Diameter peer and other organizations. As a result, a Diameter peer will typically not be configured to allow connectivity with any arbitrary peer. When certificate authentication is used, Diameter peers may not be known beforehand, and therefore peer discovery may be required.

Note that IPsec is considerably less flexible than TLS when it comes to configuring root CAs. Since use of Port identifiers is prohibited within IKE Phase 1, within IPsec it is not possible to uniquely configure trusted root CAs for each application individually; the same policy must be used for all applications. This implies, for example, that a root CA trusted for use with Diameter must also be trusted to protect SNMP.
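The mutual-authentication requirement of Section 13.2 can be illustrated with Python's standard `ssl` module: a node accepting connections builds a server-side context that demands a client certificate. This is an illustrative sketch only; the file names are placeholders, and the certificate-loading calls are shown commented out since the files are hypothetical.

```python
import ssl

def make_server_context(ca_file="diameter-root-ca.pem",
                        cert_file="server.pem",
                        key_file="server.key"):
    """Build a TLS server context for a Diameter node that accepts a
    connection: it must request (and verify) the client's certificate
    so the session is mutually authenticated."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # demand a client certificate
    # The node's own identity and its trusted Diameter root CAs would
    # be loaded here (placeholder file names):
    # ctx.load_cert_chain(cert_file, key_file)
    # ctx.load_verify_locations(ca_file)
    return ctx
```

The initiating node would symmetrically use a client context carrying its own certificate, since it MUST be prepared to supply one on request.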
When pre-shared key authentication is used with IPsec to protect Diameter, unique pre-shared keys are configured with Diameter peers. As a result, it is necessary for the set of Diameter peers to be known beforehand, and peer discovery is typically not necessary.

When IPsec is used with Diameter, a typical security policy for outbound traffic is "Initiate IPsec, from me to any, destination port Diameter"; for inbound traffic, the policy would be "Require IPsec, from any to me, destination port Diameter". This policy causes IPsec to be used whenever a Diameter peer initiates a connection to another Diameter peer, and to be required whenever an inbound Diameter connection occurs. This policy is attractive, since it does not require policy to be set for each peer or dynamically modified each time a new Diameter connection is created; an IPsec SA is automatically created based on a simple static policy. Since IPsec extensions are typically not available to the sockets API on most platforms, and IPsec policy functionality is implementation dependent, use of a simple static policy is often the simplest route to IPsec-enabling a Diameter implementation.

One implication of the recommended policy is that if a node is using both TLS and IPsec, a TLS-protected connection will match the IPsec policy, and both IPsec and TLS will be used to protect the Diameter connection. To avoid this, it would be necessary to plumb peer-specific policies either statically or dynamically. These restrictions can be awkward at best.

Since TLS supports application-level granularity in certificate policy, TLS SHOULD be used to protect Diameter connections between administrative domains. IPsec is most appropriate for intra-domain usage when pre-shared keys are used as a security mechanism. Note that since Diameter uses the same port for TLS and non-TLS usage, without reserving an additional port for TLS usage, where the recommended IPsec policy is put in place,
there is not a convenient way in which to use either TLS or IPsec, but not both.

It is recommended that a Diameter peer implement the same security mechanism (IPsec or TLS) across all its peer-to-peer connections. Inconsistent use of security mechanisms can result in redundant security mechanisms being used (e.g., TLS over IPsec) or, worse, potential security vulnerabilities. If IPsec is used to secure Diameter peer-to-peer connections, IPsec policy SHOULD be set so as to require IPsec protection for inbound connections, and to initiate IPsec protection for outbound connections. This can be accomplished via use of inbound and outbound filter policy. When pre-shared keys are used, peers are identified by their IP address (Main Mode), or possibly their FQDN (Aggressive Mode). The following is intended to provide some guidance on the issue.

14. References

14.1. Normative References

[AAATRANS]    Aboba, B. and J. Wood, "Authentication, Authorization
              and Accounting (AAA) Transport Profile", RFC 3539, June
              2003.

[ABNF]        Crocker, D. and P. Overell, "Augmented BNF for Syntax
              Specifications: ABNF", RFC 2234, November 1997.

[ASSIGNNO]    Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by
              an On-line Database", RFC 3232, January 2002.

[DIFFSERV]    Nichols, K., Blake, S., Baker, F. and D. Black,
              "Definition of the Differentiated Services Field (DS
              Field) in the IPv4 and IPv6 Headers", RFC 2474, December
              1998.

[DIFFSERVAF]  Heinanen, J., Baker, F., Weiss, W. and J. Wroclawski,
              "Assured Forwarding PHB Group", RFC 2597, June 1999.

[DIFFSERVEF]  Davie, B., Charny, A., Bennet, J., Benson, K., Le
              Boudec, J., Courtney, W., Davari, S., Firoiu, V. and D.
              Stiliadis, "An Expedited Forwarding PHB", RFC 3246,
              March 2002.

[DNSSRV]      Gulbrandsen, A., Vixie, P. and L. Esibov, "A DNS RR for
              specifying the location of services (DNS SRV)", RFC
              2782, February 2000.

[EAP]         Blunk, L. and J. Vollbrecht, "PPP Extensible
              Authentication Protocol (EAP)", RFC 2284, March 1998.

[FLOATPOINT]  Institute of Electrical and Electronics Engineers, "IEEE
              Standard for Binary Floating-Point Arithmetic",
              ANSI/IEEE Standard 754-1985, August 1985.

[IANA]        Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 2434,
              October 1998.
[IANAADFAM]   IANA, "Address Family Numbers",
              http://www.iana.org/assignments/address-family-numbers

[IANAWEB]     IANA, "Number assignment", http://www.iana.org

[IKE]         Harkins, D. and D. Carrel, "The Internet Key Exchange
              (IKE)", RFC 2409, November 1998.

[IPComp]      Shacham, A., Monsour, R., Pereira, R. and M. Thomas, "IP
              Payload Compression Protocol (IPComp)", RFC 3173,
              September 2001.

[IPSECDOI]    Piper, D., "The Internet IP Security Domain of
              Interpretation for ISAKMP", RFC 2407, November 1998.

[IPV4]        Postel, J., "Internet Protocol", STD 5, RFC 791,
              September 1981.

[IPV6]        Hinden, R. and S. Deering, "IP Version 6 Addressing
              Architecture", RFC 2373, July 1998.

[KEYWORDS]    Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

[NAI]         Aboba, B. and M. Beadles, "The Network Access
              Identifier", RFC 2486, January 1999.

[NAPTR]       Mealling, M. and R. Daniel, "The naming authority
              pointer (NAPTR) DNS resource record", RFC 2915,
              September 2000.

[RADTYPE]     IANA, "RADIUS Types",
              http://www.iana.org/assignments/radius-types

[SCTP]        Stewart, R., Xie, Q., Morneault, K., Sharp, C.,
              Schwarzbauer, H., Taylor, T., Rytina, I., Kalla, M.,
              Zhang, L. and V. Paxson, "Stream Control Transmission
              Protocol", RFC 2960, October 2000.

[SLP]         Guttman, E., Perkins, C., Veizades, J. and M. Day,
              "Service Location Protocol, Version 2", RFC 2608, June
              1999.

[SNTP]        Mills, D., "Simple Network Time Protocol (SNTP) Version
              4 for IPv4, IPv6 and OSI", RFC 2030, October 1996.
[TCP]         Postel, J., "Transmission Control Protocol", STD 7, RFC
              793, January 1981.

[TEMPLATE]    Guttman, E., Perkins, C. and J. Kempf, "Service
              Templates and Service: Schemes", RFC 2609, June 1999.

[TLS]         Dierks, T. and C. Allen, "The TLS Protocol Version 1.0",
              RFC 2246, January 1999.

[TLSSCTP]     Jungmaier, A., Rescorla, E. and M. Tuexen, "Transport
              Layer Security over Stream Control Transmission
              Protocol", RFC 3436, December 2002.

[URI]         Berners-Lee, T., Fielding, R. and L. Masinter, "Uniform
              Resource Identifiers (URI): Generic Syntax", RFC 2396,
              August 1998.

[UTF8]        Yergeau, F., "UTF-8, a transformation format of ISO
              10646", RFC 2279, January 1998.

14.2. Informative References

[AAACMS]      Calhoun, P., Bulley, W. and S. Farrell, "Diameter CMS
              Security Application", Work in Progress.

[AAAREQ]      Aboba, B., Calhoun, P., Glass, S., Hiller, T., McCann,
              P., Shiino, H., Zorn, G., Dommety, G., Perkins, C.,
              Patil, B., Mitton, D., Manning, S., Beadles, M., Walsh,
              P., Chen, X., Sivalingham, S., Hameed, A., Munson, M.,
              Jacobs, S., Lim, B., Hirschman, B., Hsu, R., Xu, Y.,
              Campbell, E., Baba, S. and E. Jaques, "Criteria for
              Evaluating AAA Protocols for Network Access", RFC 2989,
              November 2000.

[ACCMGMT]     Aboba, B., Arkko, J. and D. Harrington, "Introduction to
              Accounting Management", RFC 2975, October 2000.

[CDMA2000]    Hiller, T., Walsh, P., Chen, X., Munson, M., Dommety,
              G., Sivalingham, S., Lim, B., McCann, P., Shiino, H.,
              Hirschman, B., Manning, S., Hsu, R., Koo, H., Lipford,
              M., Calhoun, P., Lo, C., Jaques, E., Campbell, E., Xu,
              Y., Baba, S., Ayaki, T., Seki, T. and A. Hameed,
              "CDMA2000 Wireless Data Requirements for AAA", RFC 3141,
              June 2001.

[DIAMMIP]     Calhoun, P., Johansson, T. and C. Perkins, "Diameter
              Mobile IP Application", Work in Progress.
"Mobile IP Authentication.RFC 3588 Diameter Based Protocol September 2003 [DYNAUTH] Chiba. RFC 2869.. M. Alsop. Zorn. September 1997.. October 2000.. "Dynamic Authorization Extensions to Remote Authentication Dial In User Service (RADIUS)". Authorization. Mitton. Beadles. January 1999. Jacobs. W. S. J. and W. D. RFC 2607. Glass. W. and D. RFC 3344. Vollbrecht. [IANA-EXP] [MIPV4] [MIPREQ] [NASNG] [NASREQ] [NASCRIT] [PPP] [PROXYCHAIN] [RADACCT] [RADEXT] [RADIUS] [ROAMREV] [ROAMCRIT] Calhoun. and B. August 2002. Wang. Work in Progress. W. "RADIUS Accounting". "Assigning Experimental and Testing Numbers Considered Useful". Dommety. Perkins. Willens. RFC 3169. and Accounting Requirements". S. "IP Mobility Support for IPv4". D. Mitton. and J. RFC 2866. "RADIUS Extensions".. A. June 2000. Haag. B. Willats. Simpson. G. "Criteria for Evaluating Network Access Server Protocols".. J. "Network Access Server Requirements Next Generation (NASREQNG) NAS Model". M... Aboba. Aboba. Eklund. Aboba. RFC 2477.. RFC 2881. September 2001. Aboba. Rubens.. Simpson. J. Beadles. "Criteria for Evaluating Roaming Protocols". "Review of Roaming Implementations". and G. and M. Rigney. and W.. July 2000. S. STD 51. Rubens. "Diameter NASREQ Application". June 2000. Bulley.. "Remote Authentication Dial In User Service (RADIUS)". et al.. M.. Calhoun. C. July 1994. Mitton. Rigney.. July 2003. C. C. "The Point-to-Point Protocol (PPP)". B. J. Rigney. A. Lu. June 2000. Fox. Paul Funk and David Mitton were instrumental in getting the Peer State Machine correct. Mark Jones. RFC 1492. Kenneth Peirce. Sometimes Called TACACS". Lol Grant. et al. and our deep thanks go to them for their time. Bob Kopacz. Martin Julien. Haseeb Akhtar. RFC 2401. [TACACS] 15. William Bulley. Sumit Vakil. John R. and R. Jonathan Wood and Bernard Aboba provided invaluable assistance in working out transport issues. Victor Muslin. Finseth. C. Ryan Moats. Calhoun. "An Access Control Protocol. Atkinson. 
[SECARCH]     Kent, S. and R. Atkinson, "Security Architecture for the
              Internet Protocol", RFC 2401, November 1998.

[TACACS]      Finseth, C., "An Access Control Protocol, Sometimes
              Called TACACS", RFC 1492, July 1993.

15. Acknowledgements

The authors would like to thank Nenad Trifunovic, Tony Johansson and Pankaj Patel for their participation in the pre-IETF Document Reading Party. Allison Mankin, Jonathan Wood and Bernard Aboba provided invaluable assistance in working out transport issues, and similarly with Steven Bellovin in the security area.

Paul Funk and David Mitton were instrumental in getting the Peer State Machine correct, and our deep thanks go to them for their time. Text in this document was also provided by Paul Funk, Mark Eklund, Mark Jones and Dave Spence. Jacques Caron provided many great comments as a result of a thorough review of the spec.

The authors would also like to acknowledge the following people for their contribution in the development of the Diameter protocol: Allan C. Rubens, Haseeb Akhtar, William Bulley, Stephen Farrell, David Frascone, Daniel C. Fox, Lol Grant, Ignacio Goyret, Nancy Greene, Peter Heitman, Fredrik Johansson, Mark Jones, Martin Julien, Bob Kopacz, Paul Krumviede, Fergal Ladley, Ryan Moats, Victor Muslin, Kenneth Peirce, John Schnizlein, Sumit Vakil, John R. Vollbrecht and Jeff Weisberg.

Finally, Pat Calhoun would like to thank Sun Microsystems, since most of the effort put into this document was done while he was in their employ.
as Diameter services are vital for network operation it is important to use SLPv2 authentication to prevent an attacker from modifying or eliminating service advertisements for legitimate Diameter servers. Example: ’aaa://aaa.9. . Additional applications may be defined in the future.Guttman@sun. an AAA server to use for accounting.RFC 3588 Diameter Based Protocol September 2003 Appendix A.0 template-description= The Diameter protocol is defined by RFC 3588. Diameter Service Template The following service template describes the attributes used by Diameter servers to advertise themselves. Still. # An updated service template will be created at that time. This simplifies the process of selecting an appropriate server to communicate with.com> Language of service template: en Security Considerations: Diameter clients and servers use various cryptographic mechanisms to protect communication integrity. The Diameter URL format is described in Section 2. et al. # . Diameter implementations support one or more applications. The applications currently defined are: # Application Name Defined by # -------------------------------------------------# NASREQ Diameter Network Access Server Application # MobileIP Diameter Mobile IP Application # # Notes: # . MobileIP supported-acct-applications= string L M # This attribute lists the Diameter applications supported by the # AAA implementation. Priority Weight Port Target IN SRV 0 1 5060 server1.example.. The applications currently defined are: # Application Name Defined by # -------------------------------------------------# NASREQ Diameter Network Access Server Application # MobileIP Diameter Mobile IP Application # # Notes: # . Standards Track [Page 142] .example._sctp. # An updated service template will be created at that time. though it MAY support other # transports._tcp.. and the following NAPTR records are returned: . Note that a compliant Diameter # implementation MUST support SCTP. SCTP will be used.ex.example.com. 
in that order. SCTP. The client performs a NAPTR query for that domain. order pref flags service IN NAPTR 50 50 "s" "AAA+D2S" _diameter. # . If the client supports over SCTP. Additional applications may be defined in the future.com regexp replacement "" 50 "s" "AAA+D2T" This indicates that the server supports SCTP. That lookup would return: .com 0 Calhoun.example.RFC 3588 Diameter Based Protocol September 2003 # NASREQ._sctp. consider a client that wishes to resolve aaa:ex.com. NAPTR Example As an example. et al. targeted to a host determined by an SRV lookup of _diameter. # NASREQ.com IN NAPTR 100 "" _aaa. Diameter implementations support one or more applications. too. and TCP.TCP -------------------------template ends here----------------------Appendix B.MobileIP supported-transports= string L M SCTP # This attribute lists the supported transports that the Diameter # implementation accepts.com IN SRV 2 5060 server2.. - - - The T flag is used as an indication of an application layer retransmission event.. Duplicate Detection As described in Section 9. the likelihood of duplication will vary according to the implementation.4. record to be sent. e. However.. (e. Duplicates can appear for various reasons: Failover to an alternate server. a client may not know whether it has already tried to send the accounting records in its nonvolatile memory before the reboot occurred. In other situations it may be necessary to perform real-time duplicate detection. accounting record duplicate detection is based on session identifiers. At that time records are likely to be sorted according to the included User-Name and duplicate elimination is easy in this case. Where close to real-time performance is required. due to a failover to an alternate peer. such as when credit limits are imposed or real-time fraud detection is desired. et al. Diameter servers MAY use the T flag as an aid when processing requests and detecting duplicate messages. Standards Track [Page 143] .g. 
after a reboot. Implementation problems and misconfiguration. Failover can occur at the client or within Diameter agents. due to failover to an alternate server. It is defined only for request messages sent by Diameter clients or agents. For instance. Since the retransmission behavior of RADIUS is not defined within [RFC2865].RFC 3588 Diameter Based Protocol September 2003 Appendix C. but prior to receipt of an application layer ACK and deletion of the record. Duplicates received from RADIUS gateways.g. Failure of a client or agent after sending of a record from nonvolatile memory. This will result in retransmission of the record soon after the client or agent has rebooted.. Calhoun. failover thresholds need to be kept low and this may lead to an increased likelihood of duplicates. the Diameter server can delay processing records with the T flag set until a time period TIME_WAIT + RECORD_PROCESSING_TIME has elapsed after the closing of the original transport connection. In a well run network. hashing techniques or other schemes. The likelihood of this occurring increases as the failover interval is decreased. A Diameter server MAY check the T flag of the received message to determine if the record is a possible duplicate. if sent. So only records within this time window need to be looked at in the backward direction. For example. After this time period has expired. Since the T flag does not affect interoperability. If the T flag is set in the request message. perhaps a day. the server searches for a duplicate within a configurable duplication time window backward and forward. have been received and recorded. in order to optimize duplicate detection. Since the Diameter server is responsible for duplicate detection. In order to be able to detect out of order duplicates. only generation of duplicates due to failover or resending of records in non-volatile storage can be reliably detected by Diameter clients or agents. 
in order to allow time for the original record to exit the network and be recorded by the accounting server.RFC 3588 Diameter Based Protocol September 2003 In general. et al. network partitions and device faults will presumably be rare events. then it may check the T flag marked records against the database with relative assurance that the original records. and may not be needed by some servers. The following is an example of how the T flag may be used by the server to detect duplicate requests. it can choose to make use of the T flag or not. generation of the T flag is REQUIRED for Diameter clients and agents. so this approach represents a substantial optimization of the duplicate detection process. it is possible for the original record to be received after the T flag marked record. may be used to eliminate the need to do a full search even in this set except for rare cases. it can be usually be assumed that duplicates appear within a time window of longest recorded network partition or device fault. Secondly. In such cases the Diameter client or agents can mark the message as possible duplicate by setting the T flag. but MAY be implemented by Diameter servers. such as the use of the T flag in the received messages. the Diameter server should use backward and forward time windows when performing duplicate checking for the T flag marked request. due to differences in network delays experienced along the path by the original and duplicate transmissions. As an example. This limits database searching to those records where the T flag is set. During failover. Standards Track [Page 144] . Calhoun. Please address the information to the IETF Executive Director. Information on the IETF’s procedures with respect to rights in standards-track and standards-related documentation can be found in BCP-11. patents or patent applications. et al. 
neither does it represent that it has made any effort to identify any such rights..RFC 3588 Diameter Based Protocol September 2003 Appendix D. or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF Secretariat. Calhoun. Standards Track [Page 145] . The IETF invites any interested party to bring to its attention any copyrights. or other proprietary rights which may cover technology that may be required to practice this standard. Copies of claims of rights made available for publication and any assurances of licenses to be made available. com Jari Arkko Ericsson 02420 Jorvas Finland Phone: +358 40 5079256 EMail: Jari. 110 Nortech Parkway San Jose. Suite 500 Bellevue.RFC 3588 Diameter Based Protocol September 2003 Authors’ Addresses Pat R. Calhoun Airespace. WA 98004 USA Phone: +1 425 438 8218 Calhoun. 7 74915 Waibstadt Germany Phone: EMail: +49 7263 911 701 erik.guttman@sun.. Eichhoelzelstr. California. 95134 USA Phone: +1 408-635-2023 Fax: +1 408-635-2020 EMail: pcalhoun@airespace.Loughney@nokia. Inc. Inc. et al. Standards Track [Page 146] .com John Loughney Nokia Research Center Itamerenkatu 11-13 00180 Helsinki Finland Phone: EMail: +358 50 483 6242 john.E.com Erik Guttman Sun Microsystems.Arkko@ericsson.com Glen Zorn Cisco Systems. 500 108th Avenue N. Inc. RFC 3588 Diameter Based Protocol September 2003 Full Copyright Statement Copyright (C) The Internet Society (2003). EXPRESS OR IMPLIED. published and distributed. this document itself may not be modified in any way. such as by removing the copyright notice or references to the Internet Society or other Internet organizations. Calhoun. The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns. provided that the above copyright notice and this paragraph are included on all such copies and derivative works. 
without restriction of any kind. This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES. INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. All Rights Reserved. in whole or in part. et al. or as required to translate it into languages other than English. copied. except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed. and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared. Standards Track [Page 147] . However. This document and translations of it may be copied and furnished to others. Acknowledgement Funding for the RFC Editor function is currently provided by the Internet Society. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue listening from where you left off, or restart the preview.
https://www.scribd.com/doc/123205175/RFC-3588-Diameter-Base-Protocol
I have some problems with a class in my program. I have a header file and a .cxx file which I can change that actually contain my class, and a .cxx file that has my main which I cannot touch. Here is my code, help if you can.

Code:
#include <iostream>
#include "money.h"
using namespace std;

// this file should not change just because you change the money class
int main()
{
    money mine, yours(123.45);
    cout << "mine is " << mine << "\nYours is " << yours;
    cout << "\nEnter a double ";
    cin >> mine;
    cout << "\nEnter a double ";
    cin >> yours;
    cout << "mine is " << mine << "\nYours is " << yours;
    if ( mine < yours )
        cout << "\n\nmine " << mine << " is less than " << yours << " yours\n";
    else
        cout << "\n\nmine " << mine << " is >= to " << yours << " yours\n";
    return (0);
}

Code:
#include "money.h"

money::money() : Dollars(0), Cents(0) {}

money::money(long in)
{
    Dollars = (long) (in * 100);
}

money::money(int in1)
{
    Cents = (int) (in1*1);
}

ostream& operator <<(ostream& p1, const money& p2)
{
    p1 << "$" << (p2.Dollars / 100) << "." << (p2.Cents / 100) << "."
       << p2.Dollars % 100 << "." << p2.Cents;
    return p1;
} // overloaded output

istream& operator >>(istream& p1, money &p2)
{
    long in;
    int in1;
    p1 >> in;
    p1 >> in1;
    p2.Dollars = (long) (in * 100);
    p2.Cents = (long) (in1*.1);
    return p1;
} // overloaded input

bool operator <(money p1, money p2)
{
    return( (p1.Dollars + p1.Cents) < (p2.Dollars + p2.Cents) );
}

Code:
#ifndef MONEYH
#define MONEYH
#include <iostream>
using namespace std;

class money
{
public:
    money();
    money(long in, int in1);
    friend ostream& operator <<(ostream &p1, const money& p2);
    friend istream& operator >>(istream& p1, money &p2);
    friend bool operator <(money p1, money p2); // true p1 < p2
private:
    long Dollars;
    int Cents;
}; // class money
#endif // MONEYH
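A likely culprit: the header declares a single two-argument constructor money(long, int), but the .cxx file defines two one-argument constructors money(long) and money(int), so the declarations and definitions do not match. Also, main's yours(123.45) passes a double, which none of the declared constructors accepts cleanly, and operator<< mixes Dollars and Cents into a garbled string. One possible shape for the class (illustrative only; the member names match the post, but the double-based constructor and the rounding policy are my assumptions) is to construct from a double and compare on total cents:

```cpp
#include <cassert>
#include <cmath>
#include <iostream>

// Sketch of a money class that satisfies the untouchable main():
// default-construct, construct from a double (yours(123.45)),
// stream in/out, and compare with operator<.
class money
{
public:
    money() : Dollars(0), Cents(0) {}
    money(double amount)
        : Dollars(static_cast<long>(amount)),
          Cents(static_cast<int>(
              std::lround((amount - static_cast<long>(amount)) * 100))) {}

    friend std::ostream& operator<<(std::ostream& os, const money& m)
    {
        // print as $D.CC, zero-padding single-digit cents
        os << "$" << m.Dollars << "." << (m.Cents < 10 ? "0" : "") << m.Cents;
        return os;
    }

    friend std::istream& operator>>(std::istream& is, money& m)
    {
        double amount;       // main() prompts "Enter a double"
        is >> amount;
        m = money(amount);
        return is;
    }

    friend bool operator<(const money& a, const money& b)
    {
        // compare total cents, not Dollars + Cents
        return a.Dollars * 100 + a.Cents < b.Dollars * 100 + b.Cents;
    }

private:
    long Dollars;
    int Cents;
};
```

Negative amounts and rounding edge cases would need more care; the point is just to get the header, the definitions, and main() agreeing on one set of signatures.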
http://cboard.cprogramming.com/cplusplus-programming/49577-class-help.html
Error Handling Strategies

Learn about the four main error handling strategies: try/catch, explicit returns, either, and supervising crashes, and how they work in various languages.

Introduction

When programs crash, log/print/trace statements and errors aren't always helpful for quickly knowing what went wrong. Logs, if present at all, tell a story you have to decipher. Errors, if present, sometimes give you a long stack trace, which often isn't long enough. Asynchronous code can make this harder. Worse, both are often completely unrelated to the root cause, or lie and point you in the completely wrong debugging direction. Overall, most crashes aren't very helpful in debugging why your code broke. Various error handling strategies can prevent these problems.

The original way to implement an error handling strategy is to throw your own errors:

// a type example
const validNumber = n => _.isNumber(n) && _.isNaN(n) === false;

const add = (a, b) => {
  if (validNumber(a) === false) {
    throw new Error(`a is an invalid number, you sent: ${a}`);
  }
  if (validNumber(b) === false) {
    throw new Error(`b is an invalid number, you sent: ${b}`);
  }
  return a + b;
};

add('cow', new Date()); // throws

Throws have helpful and negative effects that take pages of text to explain. The reason you shouldn't use them is that they can crash your program. While this is often intentional by the developer, it can negatively affect things outside your code base, like a user's data and logs, and that often trickles down to user experience. It also makes it more difficult for the developer debugging to pinpoint exactly where it failed and why. I jokingly call them explosions because they can accidentally affect completely unrelated parts of the code when they go off.
Compilers and runtime errors in sync and async code still haven't gotten good enough (except maybe Elm) to help us immediately diagnose what we, or someone else, did wrong. We don't want to crash a piece of code or entire programs. We want to correctly identify what went wrong, give enough information for the developer to debug and/or react, and ensure that the error is testable. Intentional developer throws attempt to tell you what went wrong and where, but they don't play nice together, often mask the real issue in cascading failures, and while sometimes testable in isolation, they are harder to test in larger composed programs.

The second option is what Go, Lua, and sometimes Elixir do, where you handle the possible error on a per-function basis. They return information about whether the function worked or not along with the regular return value; basically, they return 2 values instead of 1. These differ for asynchronous calls per language, so let's focus on synchronous code for now.

Various Language Examples of Explicit Returns

Lua functions will throw errors just like Python and JavaScript. However, a function called protected call, pcall, captures the exception as part of a 2nd return value:

function datBoom()
  error({ reason = 'kapow' })
end

ok, err = pcall(datBoom)
print("did it work?", ok, "error reason:", err.reason)
-- did it work?  false  error reason: kapow

Go has this functionality natively built in (the error body here is filled in minimally, since the article's snippet was truncated):

func datBoom() (err error) {
    return errors.New("kapow")
}

err := datBoom()
if err != nil {
    log.Fatal(err)
}

... and so does Elixir (with the ability to opt out using a ! at the end of a function invocation):

def datBoom do
  {:error, "kapow"}
end

{:error, reason} = datBoom()
IO.puts "Error: #{reason}"
## kapow

While Python and JavaScript do not have these capabilities built into the language, you can easily add them.
Python can do the same using tuples:

def datBoom():
    return (False, 'kapow')

ok, error = datBoom()
print("ok:", ok, "error:", error)
# ('ok:', False, 'error:', 'kapow')

JavaScript can do the same using Object destructuring:

const datBoom = () => ({ ok: false, error: 'kapow' });
const { ok, error } = datBoom();
console.log("ok:", ok, "error:", error);
// ok: false error: kapow

Effects on Coding

This causes a couple of interesting things to happen. First, developers are forced to handle errors when and where they occur. In the throw scenario, you run a lot of code and sprinkle throws where you think it'll break. Here, even if the functions aren't pure, every single one could possibly fail. There is no point continuing to the next line of code because you already failed at the point of running the function; seeing it failed (ok is false), an error was returned telling you why. You start to really think about how to architect things differently.

Second, you know WHERE the error occurred (mostly). The "why" is still always up for debate.

Third, and most important, if the functions are pure, they become easier to unit test. Instead of "I get my data, else it possibly blows up", it immediately tells you: "I worked, and here's your data", or "I broke, and here's what could be why".

Fourth, these errors DO NOT (in most cases, if your functions are pure) negatively affect the rest of the application. Instead of a throw, which could take other functions down with it, you're not throwing; you're simply returning a different value from a function call. The function "worked" and reported its "results". You're not crashing applications just because a function didn't work.

Cons on Explicit Returns

Excluding language specifics (i.e. Go panics, JavaScript's async/await), you have to look in 2 to 3 places to see what went wrong. It's one of the arguments against Node callbacks. People say not to use throw for control flow, yet all you've done is create a dependable ok variable.
A positive step for sure, but still not a hugely helpful leap. Errors, if detected to be there, mold your code's flow. For example, let's attempt to parse some JSON in JavaScript. You'll see the absence of a try/catch replaced with an if (ok === false):

const parseJSON = string => {
  try {
    const data = JSON.parse(string);
    return { ok: true, data };
  } catch (error) {
    return { ok: false, error };
  }
};

const { ok, error, data } = parseJSON(new Date());
if (ok === false) {
  console.log("failed:", error);
} else {
  console.log("worked:", data);
}

The Either Type

Functions that can return 2 types of values are handled in functional programming by the Either type, aka a disjoint union. TypeScript (a strongly typed language and compiler for JavaScript) supports a pseudo-Either as an algebraic data type (ADT). For example, this TypeScript getPerson function will return Error or Person, and your compiler helps you with that:

// Notice TypeScript allows you to say 2 possible return values
function getPerson(): Error | Person

The getPerson will return either Error or Person, but never both. However, we'll assume, regardless of language, you're concerned with runtime, not compile time. You could be an API developer dealing with JSON from some unknown source, or a front-end engineer dealing with user input.

In functional programming, the Either type has the concept of a "left or right" (or an object, depending on your language of choice). The convention is "Right is Correct" and "Left is Incorrect" (Right is right, Left is wrong). Many languages already support this in one form or another: JavaScript through Promises as values (.then is right, .catch is left), and Python via deferred values in the Twisted networking engine (addCallback is right, addErrback is left).

Either Examples

You can do this using a class or object in Python and JavaScript. We've already shown you the Object version above, using {ok: true, data} for the right and {ok: false, error} for the left.
Here's a JavaScript Object Oriented example:

class Either {
  constructor(right = undefined, left = undefined) {
    this._right = right;
    this._left = left;
  }
  isLeft() { return this.left !== undefined; }
  isRight() { return this.right !== undefined; }
  get left() { return this._left; }
  get right() { return this._right; }
}

const datBoom = () => new Either(undefined, new Error('kapow'));
const result = datBoom();
if (result.isLeft()) {
  console.log("error:", result.left);
} else {
  console.log("data:", result.right);
}

... but you can probably already see how a Promise is a much better data type (despite it implying async). It's an immutable value, and the methods then and catch are already natively there for you. Also, no matter how many then's or "rights", 1 left can mess up the whole bunch, and it allllll flows down to the single catch function for you. This is where composing Eithers (Promises in this case) is so powerful and helpful:

const datBoom = () => Promise.reject('kapow');
const result = datBoom();
result
  .then(data => console.log("data:", data))
  .catch(error => console.log("error:", error));

Pattern Matching

Whether synchronous or not, though, there's a more powerful way to match the Either-esque types: pattern matching. If you're an OOP developer, think of replacing your:

if ( thingA instanceof ClassA ) {

with:

ClassA: () => "it's ClassA",
ClassB: () => "it's ClassB"

It's like a switch and case for types. Elixir does it with almost all of its functions (the _ being the traditional default keyword):

case datBoom() do
  {:ok, data} -> IO.puts "Success: #{data}"
  {:error, reason} -> IO.puts "Error: #{reason}"
  _ -> IO.puts "No clue, brah..."
end

In JavaScript, you can use the Folktale library:

const datBoom = () => Result.Error('kapow');
const result = datBoom();
const weGood = result.matchWith({
  Error: ({ value }) => "negative...",
  Ok: ({ value }) => "OH YEAH!"
});
console.log("weGood:", weGood); // negative...
Python has pattern matching with Hask (although it's a dead project; Coconut is an alternative):

def datBoom():
    return Left('kapow')

def weGood(value):
    return ~(caseof(value)
        | m(Left(m.n)) >> "negative..."
        | m(Right(m.n)) >> "OH YEAH!")

result = datBoom()
print("weGood:", weGood(result))
# negative...

Scala does it as well, looking more like a traditional switch statement:

def weGood(value: Either[String, String]): String = value match {
  case Left(_) => "negative..."
  case Right(_) => "OH YEAH!"
  case _ => "no clue, brah..."
}

weGood(Left("kapow")) // negative...

Let It Crash

The mathematicians came up with Either. Three cool cats at Ericsson in 1986 came up with a different strategy in Erlang: let it crash. Later, in 2009, Akka brought the same idea to the Java Virtual Machine in Scala and Java. This flies in the face of the overall narrative of this article: don't intentionally cause crashes. Technically, it's a supervised crash. The Erlang/Akka developers know errors are a part of life, so they embrace that errors will happen and give you a safe environment to react to them without bringing down the rest of your application. It only becomes relatable if you do the kind of work where uptime with lots of traffic is the number one goal.

Erlang (or Elixir) creates processes to manage your code. If you know Redux or Elm, and the concept of a store that keeps your (mostly) immutable data, then you'll understand the concept of a Process in Elixir and an Actor in Akka. You create a process, and it runs your code. The framework developers knew that when you find a bug, you'll fix it and upload new code to the server. If the server needs to keep running to serve a lot of customers, then it needs to immediately restart anything that crashes, and if you upload new code, it needs to start your new code as the older processes shut down when they are done (or crash). So, they created supervisors (in both Elixir and Akka/Scala).
Instead of creating 1 process that runs your code, it creates 2: one to run your code, and another to supervise it and restart a new one if it crashes. These processes are extremely lightweight (about 0.5 KB of memory in Elixir, 0.3 KB in Akka). While Elixir has support for try, catch, and raise, defensive error handling in Erlang/Elixir is considered a code smell. Let it crash: the supervisor will restart the process, you can debug the code, upload new code to a running server, and the processes spawned from that point forward will use your new code. This is similar to the immutable infrastructure movement around Docker in Amazon's EC2 Container Service and Kubernetes.

Conclusion

Intentionally crashing programs is a bad programming practice. Using throw is not the most effective way to isolate program problems; throws aren't easy to test and can break other unrelated things. Next time you think of using throw, instead try an explicit return or an Either. Then unit test it. Make it return an error in a larger program and see if it's easier for you to find it, given that you are the one who caused it. I think you'll find explicit returns and Eithers are easier to debug, easier to unit test, and can lead to better thought out applications.

Published at DZone with permission of Jesse Warden, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
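The restart idea behind the supervisors described above can be caricatured in plain JavaScript. This is only a sketch of the concept (a real Erlang/Akka supervisor manages isolated processes or actors with their own mailboxes and restart strategies, not synchronous function calls), but it shows the shape: the worker is allowed to crash, and the supervisor decides whether to restart it:

```javascript
// Minimal, illustrative "supervisor": restart a failing task function.
// This only mimics the restart idea; real supervisors manage isolated
// processes/actors so one crash can't corrupt the rest of the system.
const supervise = (task, maxRestarts = 3) => {
  let restarts = 0;
  while (true) {
    try {
      return { ok: true, value: task() };   // task finished normally
    } catch (error) {
      restarts += 1;                        // task "crashed"
      if (restarts > maxRestarts) {
        return { ok: false, error };        // give up, report the crash
      }
      // a real supervisor would log here and re-spawn a fresh process
    }
  }
};

// a task that crashes twice, then recovers
let attempts = 0;
const flaky = () => {
  attempts += 1;
  if (attempts < 3) throw new Error('kapow');
  return 'recovered';
};

console.log(supervise(flaky)); // { ok: true, value: 'recovered' }
```

Note that supervise itself returns an explicit { ok, ... } result, which ties the crash-supervision strategy back to the explicit-return style from earlier in the article.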
https://dzone.com/articles/error-handling-strategies
Super Event Links

Introduction

The Super-Event is an option implemented in the frontend code in order to reduce the amount of data to be transferred to the back-end computer(s) by removing the bank header for each event constructed. It is not applicable to FIXED format events.

In other words, when an equipment readout in MIDAS format is complete, the event is composed of the bank header followed by the data section. The overhead in bytes of the bank structure is 16 bytes for bk_init(), 20 bytes for bk_init32(). If the data section size is close to the numbers above, the data transfer as well as the data storage has a non-negligible overhead.

To address this problem, the equipment can be set up to generate a so-called Super-Event, which is an event composed of the initial standard bank header for the first event of the super-event, followed by up to the maximum number of sub-events of successive data sections before the closing of the bank.

Example frontend code

To demonstrate the use of the super-event, consider the following example of the Equipment declaration in the frontend code:

// Define equipment to be able to generate the Super-Event
{
  "GE",                                 // equipment name
  2, 0x0002,                            // event ID, trigger mask
  "SYSTEM",                             // event buffer
#ifdef USE_INT
  EQ_INTERRUPT,                         // equipment type
#else
  EQ_POLLED,                            // equipment type
#endif
  LAM_SOURCE(GE_C, LAM_STATION(GE_N)),  // interrupt source
  "MIDAS",                              // format
  TRUE,                                 // enabled
  RO_RUNNING,                           // read only when running
  200,                                  // poll for 200 ms
  0,                                    // stop run after this event limit
  1000,                                 // number of sub events > 0 ... enables Super-event
  0,                                    // don't log history
  "", "", "",
  read_ge_event,                        // readout routine
  ...

Example Readout code

Set up the readout function for Super-Event collection, e.g.
//-- Event readout
// Global and fixed -- expect NWORDS 16-bit data words per sub-event
#define NWORDS 3

INT read_ge_event(char *pevent, INT offset)
{
  static WORD *pdata;

  //
  // Super-event structure
  if (offset == 0) {
    // FIRST event of the Super-event
    bk_init(pevent);
    bk_create(pevent, "GERM", TID_WORD, &pdata);
  } else if (offset == -1) {
    // Close the Super-event if offset is -1
    bk_close(pevent, pdata);
    //
    // End of Super-event
    return bk_size(pevent);
  }

  //
  // Read GE sub-event (ADC)
  cam16i(GE_C, GE_N, 0, GE_READ, pdata++);
  cam16i(GE_C, GE_N, 1, GE_READ, pdata++);
  cam16i(GE_C, GE_N, 2, GE_READ, pdata++);

  //
  // Clear hardware
  re_arm_ge();

  if (offset == 0) {
    // Compute the proper event length on the FIRST event in the Super-event.
    // NWORDS corresponds to the NWORDS WORD reads above.
    // sizeof(BANK_HEADER) + sizeof(BANK) make up the 16-byte header.
    // sizeof(WORD) is defined by the TID_WORD in bk_create().
    return NWORDS * sizeof(WORD) + sizeof(BANK_HEADER) + sizeof(BANK);
  } else {
    // Return the data section size only.
    // sizeof(WORD) is defined by the TID_WORD in bk_create().
    return NWORDS * sizeof(WORD);
  }
}

Discussion

As shown in the example above:
- For the first event, the correct size of the event, including the header, must be calculated and returned.
- Subsequent events return the size of the data only, excluding the header.

The input parameter "offset" is used to indicate whether the event is the first, last or intermediate. After the last event, the bank is closed. The encoding of the data section is left to the user. If the number of words per sub-event is fixed (i.e. NWORDS in the above example), the sub-event extraction by an analyzer is simple. In the case of variable sub-event length, it is necessary to tag the first or the last word of each sub-event. The contents of the sub-event are the choice of the user.
Note:
- Since no particular tagging is applied to the Super-Event by the MIDAS transfer mechanism, the user must provide code in the backend analyzer to interpret the contents of the Super-Event bank(s).
- If the Super-Event is composed by an equipment on a remote processor running a different endian mode than the backend processor, it is necessary to ensure data type consistency throughout the Super-Event in order to guarantee proper byte swapping of the data content. Byte Swap Macros are available for this purpose.
- It may be convenient to change the time stamp of the super-event using the TIME_STAMP Macro.
- The ODB key /Equipment/<equipment name>/Statistics/Event rate will indicate the rate of sub-events.
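The size arithmetic in the readout example can be sanity-checked outside MIDAS with a small stand-alone C sketch. The WORD type and the two 8-byte constants below are illustrative stand-ins for the real midas.h definitions, chosen only to match the 16-byte bk_init() overhead quoted in the introduction:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t WORD;            /* 2 bytes, matching TID_WORD        */
enum { NWORDS = 3 };              /* 16-bit words per sub-event        */
enum { BANK_HEADER_BYTES = 8,     /* stand-ins: 8 + 8 = 16 bytes,      */
       BANK_BYTES = 8 };          /* the bk_init() overhead            */

/* Size returned for the first sub-event: data plus the bank header. */
static size_t first_event_size(void)
{
    return NWORDS * sizeof(WORD) + BANK_HEADER_BYTES + BANK_BYTES;
}

/* Size returned for every later sub-event: data section only. */
static size_t next_event_size(void)
{
    return NWORDS * sizeof(WORD);
}

/* Total size of a super-event holding n sub-events (n >= 1). */
static size_t super_event_size(unsigned n)
{
    return first_event_size() + (n - 1) * next_event_size();
}
```

With 1000 sub-events of 3 WORDs each, the super-event costs 16 + 1000 * 6 = 6016 bytes, while paying the 16-byte header for every event would cost 1000 * 22 = 22000 bytes, which is exactly the saving the Super-Event option is after.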
https://daq00.triumf.ca/MidasWiki/index.php/Super_Event
Created on 2007-03-19 03:32 by blakeross, last changed 2009-04-20 10:35 by KayEss.

I'll try to explain why I did it this way. I was considering the single-inheritance case implementing an Immutable object, which overrides __new__ but has no need to override __init__ (since it's too late to do anything in __init__ for an Immutable object). Since __init__ still gets called, it would be annoying to have to override it just to make the error go away if there were a check in __init__. The other case is overriding __init__ without overriding __new__, which is the most common way of doing Mutable objects; here you wouldn't want __new__ to complain about extra args. So the only time you'd want complaints is if both __new__ and __init__ are the defaults, in which case it doesn't really matter whether you implement this in __init__ or in __new__, so I arbitrarily chose __new__. I wasn't thinking of your use case at the time though (cooperative super calls to __init__, which still isn't something I engage in on a day-to-day basis). I wonder if the right thing to do wouldn't be to implement the same check both in __init__ and in __new__. Am I making sense?

Makes sense. I don't think we can ever be completely correct here, since we're inferring intent from the presence of __init__/__new__, which is liable to be wrong in some cases, but it's likely correct often enough that it's worth doing. If I understand correctly, we want to be more forgiving iff exactly one of the two methods is used, so it seems like we should be complaining if both are used *or* if neither is used. After all, I could add a __new__ to my coop use case and I'd still want object to complain. If that's the case, both object_new and object_init should be complaining if ((tp->tp_new == object_new && tp->tp_init == object_init) || (tp->tp_new != object_new && tp->tp_init != object_init)).
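The "Immutable overrides __new__ only" case described above can be seen with a small sketch (the class is illustrative, not from the patch):

```python
# Sketch of an immutable object that overrides __new__ only: the value
# is fixed in __new__, so there is nothing left for __init__ to do,
# and none is defined. object.__init__ still gets called with the
# original arguments, which is why it must stay lenient in this case.
class Point(tuple):
    """An immutable 2-D point."""
    def __new__(cls, x, y):
        return super(Point, cls).__new__(cls, (x, y))

p = Point(1, 2)
print(p)          # (1, 2)
print(p + (3,))   # (1, 2, 3) -- still behaves like a tuple
```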
Of course, for the paranoid, there's always the risk that __new__ will modify these class functions and change the outcome :) For instance, if a class had a __new__ and no __init__, and its __new__ changed __new__ back to object.__new__, object_init on that run would be fooled into thinking it's using the defaults for both and would complain. I think this could only be fixed in type_call, which is rather ugly... but then, this *is* a special case of the "call __init__ after __new__" behavior, and we're trying to solve it within the methods themselves. Perhaps this last point is academic enough to be ignored... I don't know why anyone would do this, although the language makes it possible.

Attached is a patch that implements this proposal, adding copious commentary. It doesn't seem to break anything in the test suite. I wonder if we should even make the check more rigid: check the argument list if either the current method *is* overridden or the other one *is not* overridden. This would make super calls check the arguments even if the other method is overridden. What do you think?

File Added: new_init.patch

This smells enough like a new feature that it couldn't go into 2.5. I think making the check more rigid is a good idea, since this should throw:

    class a(object):
        def __init__(self, foo):
            super(a, self).__init__(foo)
        def __new__(cls, foo):
            return object.__new__(cls)

    a(1)

(minor typo in the patch: "solution it" -> "solution is")

Here's a stricter version. Unfortunately it breaks a couple of standard modules; this is a confirmation of my doubts whether the style of cooperative super calling of __init__ that you use is really the most common or "best practice". So far I have only fixed string.py (which would otherwise prevent extensions from being built); I haven't looked into why the other tests fail: test_array, test_cpickle, test_descr, test_pickle (and maybe more?). My conclusion: this would probably break too much code to be worth it.
So I'll have to revert to the previous version. But anyway, here it is for your perusal.

File Added: new_init_strict.patch

I should mention that if we can't get the strict version of this in 2.6, we should be able to get it into 3.0.

Looks good. I skimmed briefly the tests you mentioned. The issue with test_array appears to be exactly the kind of bug this is intended to identify: it calls array.__init__(...), but array doesn't have its own initializer, so object's is used. I'd guess that the others are failing due to whatever the problem with pickling is (test_descr uses pickling). I haven't looked into that yet. I'm sure cooperative super calling of __init__ isn't all that common (it seems like the mechanism itself isn't used much yet, and may not be until it's done via keyword), but it doesn't seem like such a bad practice, especially when mixins are in the picture. There doesn't seem to be a great alternative.

Well, but since it's been like this for a long time, I don't want to gratuitously break code. At least not in 2.6. So I'm rejecting the stricter patch for 2.6. (However, if you want to submit patches that would fix these breakages anyway, be my guest.)

Holding the strict version for 3 makes sense to me. Let me know if you need anything more on my end... thanks for the fast turnaround.

I ask myself, what should I expect from the documentation...

    >>> object.__init__.__doc__
    'x.__init__(...) initializes x; see x.__class__.__doc__ for signature'
    >>> object.__class__
    <type 'type'>
    >>> type.__doc__
    "type(object) -> the object's type\ntype(name, bases, dict) -> a new type"

and I still don't know ;-).

I think the avoidance of super() is largely *because* of this kind of leniency. Consider this snippet on python 2.3:

    class Base(object):
        def __init__(self, foo=None, *args, **kwargs):
            super(Base, self).__init__(foo, *args, **kwargs)

    Base(foo='bar')
    Base('bar', 42)
    Base('bar', 42, x=7)

All pass silently, no error checking.
Now consider a python 3.0 version, with a strict object.__init__:

    class Base:
        def __init__(self, foo=None, *, y='hi', **kwargs):
            super(Base, self).__init__(**kwargs)

    Base(foo='bar')    # Valid, accepted
    Base('bar', 42)    # Raises exception
    Base('bar', x=7)   # Raises exception

The error checking added by this bug/patch and the error checking added by PEP 3102 (keyword-only arguments) make super a lot more sane.

I think it would also help if calling a method via super() didn't allow positional arguments. If the base class's arguments can't be given as keyword args then you probably should call it explicitly, rather than relying on super()'s MRO. I was also going to suggest super() should automagically create empty methods your parent classes don't have, but then I realized you really should have a base class that asserts the lack of such an automagic method:

    class Base(object):
        def mymethod(self, myonlyarg='hello world'):
            assert not hasattr(super(Base, self), 'mymethod')

By the time you reach this base class you will have stripped off any extra arguments that your subclasses added, leaving you with nothing to pass up (and nothing to pass to). Having two mixins with the same method name and without a common parent class is just not sane.

The vast majority of "positional" arguments can also be given via name. The rare exceptions (primarily C functions) may not cooperate well anyway, so you're trading a relatively obscure limitation for better error detection. Perhaps not that important though, since it could be taught as bad style unless absolutely needed.

While I understand the desire for it to be an exception, I fail to see how it actually is one. The namespace/signature conflicts exist just the same. The only way I can see to handle incompatible signatures is to add a flag that says "I am the *ONLY* class allowed to subclass X" (triggering an error if violated), have super() entirely bypass it, and then call X.__init__() directly.
Even that doesn't handle X's superclasses being subclassed more than once, and it looks pretty complicated/obscure anyway.

Committed revision 54539. The committed version issues warnings rather than errors when both methods are overridden, to avoid too much breakage. The string.py change was necessary to avoid spurious warnings (with no module/lineno!) and breakage of test_subprocess.py. Something fishy's going on -- is string.Template() used by the warnings module or by site.py??? I'm leaving this bug open but changing the category to Py3k, to remind me it needs to be merged and then changed there.

FWIW, this change will be somewhat pervasive and will affect anything inheriting object.__init__, including immutable builtins (like tuple, float, and frozenset) as well as user-defined new-style classes that do not define their own __init__ method (perhaps using __new__ instead). Here are the warnings being thrown off by the current test suite:

    /py26/Lib/test/test_array.py:731: DeprecationWarning: object.__init__() takes no parameters
      array.array.__init__(self, 'c', s)
    /py26/Lib/copy_reg.py:51: DeprecationWarning: object.__init__() takes no parameters
      base.__init__(obj, state)
    /py26/Lib/test/test_descr.py:2308: DeprecationWarning: object.__init__() takes no parameters
      float.__init__(self, value)

That's one way of looking at it. You could also say that it found two legitimate problems:

- since array doesn't define __init__(), there's no point in calling it
- similarly, float doesn't define __init__()

The copy_reg warning is more subtle, and needs a work-around. I've checked in all three fixes.

Can this be closed?
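The compromise described above (lenient when exactly one of __new__/__init__ is overridden, strict when neither is) survives into modern Python 3, where the strict case raises a TypeError rather than a DeprecationWarning. A small demonstration:

```python
class OverridesNew:
    # Overrides __new__ only: extra constructor args are tolerated,
    # because object.__init__ ignores them when __new__ is overridden.
    def __new__(cls, *args, **kwargs):
        return super().__new__(cls)

class OverridesInit:
    # Overrides __init__ only: object.__new__ ignores the extra args.
    def __init__(self, x):
        self.x = x

class Neither:
    # Overrides neither: any constructor argument is an error.
    pass

OverridesNew(1, 2)       # accepted
obj = OverridesInit(5)   # accepted
try:
    Neither(1)           # both defaults -> TypeError
    raised = False
except TypeError:
    raised = True
```

This matches the intent hashed out in the thread: leniency is granted only when the class signals, via exactly one override, that it knows what it's doing with the extra arguments.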
http://bugs.python.org/issue1683368
5 Design Overview

5.1 Modeling

A typical interaction with MOSEK using the Optimizer API for .NET consists of creating an environment and, within it, an optimization task. The most basic example follows.

Task.cs

    // The most basic example of how to get started with MOSEK.
    using mosek;
    using System;

    public class helloworld
    {
        public static void Main()
        {
            double[] x = new double[1];

            using (Env env = new Env())                  // Create Environment
            {
                using (Task task = new Task(env, 0, 1))  // Create Task
                {
                    task.appendvars(1);                           // 1 variable x
                    task.putcj(0, 1.0);                           // c_0 = 1.0
                    task.putvarbound(0, boundkey.ra, 2.0, 3.0);   // 2.0 <= x <= 3.0
                    task.putobjsense(objsense.minimize);          // minimize

                    task.optimize();                              // Optimize

                    task.getxx(soltype.itr, x);                   // Get solution
                    Console.WriteLine("Solution x = " + x[0]);    // Print solution
                }
            }
        }
    }
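The task above encodes the one-variable LP: minimize x subject to 2 ≤ x ≤ 3. Since a linear objective over an interval attains its optimum at a bound, the expected answer can be checked without MOSEK at all (a plain-Python sketch, not MOSEK API code):

```python
def solve_single_var_lp(c, lower, upper):
    """Minimize c*x subject to lower <= x <= upper.
    A linear objective over an interval attains its optimum at a bound:
    the lower bound when c >= 0, the upper bound when c < 0."""
    if lower > upper:
        raise ValueError("infeasible bounds")
    return lower if c >= 0 else upper

# The MOSEK task above: minimize 1.0 * x with 2.0 <= x <= 3.0
x_opt = solve_single_var_lp(1.0, 2.0, 3.0)
```

Running the C# example should therefore print a solution of 2.0 (up to solver tolerance).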
https://docs.mosek.com/10.0/dotnetapi/design.html
This is just an initial version of the plugin. There are still some limitations:
- synchronized to 2.1.0
- Exception instead of Error, as per type in Dart 2

example/README.md

Demonstrates how to use the flutter_billing plugin. For help getting started with Flutter, view our online documentation.

Add this to your package's pubspec.yaml file:

    dependencies:
      flutter_billing: ^0.4.0

You can install packages from the command line with Flutter:

    $ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

    import 'package:flutter_billing/flutter_billing.dart';

We analyzed this package on Jun 12, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed.

Detected platforms: Flutter. References Flutter, and has no conflicting libraries.

Format lib/flutter_billing.dart. Run flutter format to format lib/flutter_billing.dart.
https://pub.dev/packages/flutter_billing
#include <aflibAudioFile.h>

Inheritance diagram for aflibAudioFile:

This class is the object wrapper that makes an aflibFile object usable as an audio object, so that it can be used in a chain. For using file and device objects in a chain, this is the API one should use. The constructors are the same as the base class. The function compute_segment is implemented so that the base aflibAudio class can process data through this class.

There are three constructors in this class: one with no aflibAudio parent and two with parents. When starting a chain with an audio source, the constructor without a parent should be used. When at the end of a chain, one of the two constructors that require an aflibAudio object should be used.
http://osalp.sourceforge.net/doc/html/class_aflibAudioFile.html
Hi, I've already read data from a file into an array, which is now in string format. What I want to do is convert the array into integers so that I can add the elements of that array. Code given below (corrected so that it actually reads the file, parses each token as an int, and sums them):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    /*
     * @author kagisoboikanyo
     */
    public class ReadInput {

        public static void main(String[] args) {
            int sum = 0;
            // try-with-resources closes the reader automatically
            try (BufferedReader br = new BufferedReader(new FileReader("superIncrease.txt"))) {
                String strLine;
                while ((strLine = br.readLine()) != null) {
                    // Each line may hold several whitespace-separated numbers
                    for (String token : strLine.trim().split("\\s+")) {
                        if (!token.isEmpty()) {
                            sum += Integer.parseInt(token); // string -> int
                        }
                    }
                }
            } catch (IOException | NumberFormatException e) {
                System.err.println("Error: " + e.getMessage());
            }
            //System.out.println("Test: " + knapsack.toString());
            System.out.println("Sum: " + sum);
        }
    }

As I said earlier, I am able to view the elements of the array, but in string format; what I needed was to convert each element to an integer and then add the elements together to find the sum. Thanks in advance.
https://www.daniweb.com/programming/software-development/threads/455325/changing-a-string-to-an-integer
We have a requirement to send desktop alerts to various users (compliance, production) across a network when other users have submitted content online for a report. At present we are using NET SEND, but this has no guarantee of delivery and has proved unreliable from both the client and server perspective (and I gather it will be unsupported in later versions of Windows; we are currently running XP). We are considering a Jabber-based solution, but has anyone used a Jabber client to pop up alert messages on the screen like NET SEND does, as opposed to just bringing a chat window to the front or displaying a temporary 'toast' message near the system tray? We need the alert message to be persistent and only dismissed by the user, indicating they have seen it. Toast-style pop-ups would be fine as long as they were not shown for only a limited time and again had to be dismissed by the user. Any solutions?

We have something similar in house. We use the Miranda IM client with the notifyanything and popup plugins. Notifyanything allows the client to receive UDP messages on a specified port. Popup does just that: displays the message in the window at the top of the user's screen. In our case everything is on the internal network, so loss of UDP packets is not a concern. Here is an example of the python script we run to send the UDP messages from the servers to the users:

    #!/usr/bin/python
    import socket, sys

    hosts = (
        ('10.0.0.1', 15000),
        ('10.0.0.2', 15000),
        ('10.0.0.3', 15000),
    )

    def send(txt):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for h, p in hosts:
            s.sendto(txt, 0, (h, p))
        del s

    if len(sys.argv) > 1:
        s = "\n".join(sys.argv[-2:])
        send(s)

I came across this exact issue. The goal was to deliver each alert along an escalation pathway - sending the alert to the next person in the list if it was not acknowledged in a given timeframe. We determined that Jabber was the best solution, but that to do it right we had to extend the protocol or investigate more clients.
(The protocol lends itself very well to extension and there are countless clients available). This catch was because it was frequently desirable to acknowledge some alerts but not others. For example. An alert's ultimate path: Send to admin A via Jabber. No acknowledgement after 5 minutes, sent to admin B via Jabber. No acknowledgement after 5 minutes, sent to admin A via SMS. No acknowledgement after 5 minutes, sent to admin B via SMS. No acknowledgement after 5 minutes, sent to admin A and B's manager via Jabber. No acknowledgement after 5 minutes, sent to admin A and B's manager via SMS. Manager evaluates the alert, acknowledge it or phones admin A or B. The catch is that if a second alert is generated in the middle of this process, admin A or B may wish to acknowledge it, but not acknowledge the first alert. For example, if they're busy with a separate issue that generated the other alert, or if they're not near a computer, know that the second alert is not serious, but that the first alert needs to be handled by someone near a computer and the escalation mechanism is the most efficient way to find the right person. There were two types of message delivery in Jabber. (I believe called normal vs chat) It's possible that one of the two types allowed a differentiation in which message was responded to. Unfortunately, the messaging type that might have allowed for this caused extreme inconvenience with the clients we tested if a large flood of messages were received. (Also I'm not sure whether the people testing determined whether it was possible to indeed differentiate what was being responded to, due to this issue overwhelming the testing). As it was exploratory and we didn't really have time to implement a full solution, we didn't determine whether the problem was just choosing a better client or whether extensions to the protocol were necessary. I still think Jabber is the best method to deliver alerts. 
For any alert delivery/escalation system, a person who acknowledges an alert should take ownership of it, and there should be repercussions for everyone failing to acknowledge an alert. This has to work with the system understanding the best way to reach a person, an on-call rotation, the risk of alert floods, the issue of alerts created by a person who is currently out of the rotation, and any political considerations caused by an alerting system that accidentally creates accountability if the existing system has none.

I'm sorry this isn't an exact answer to your question (about Jabber), but you might want to check out ReachAlert. This would stop people from corrupting your Jabber implementation, since they could otherwise decide to use it for something else (chatting, sending messages to other users). I also agree about NET SEND: it's going away, and it is common practice to disable it since it was abused for spamming. Let me know what you think and how it goes ;-)

This may be a better question to ask over at Stack Overflow, but it does look like there are Python and Perl libraries for Jabber, so this should be possible.

I have used the PSI Jabber client. It has popup notifications along with sound notifications. The same goes for the JAJC Jabber client.

I think this can be achieved through a monitoring solution like Zabbix: you can trigger an action upon any remote event by remote scripting, and once the action has been triggered, Zabbix can do escalations. Actions can run a remote script, or send email or alerts via Jabber, until the trigger is acknowledged.
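The escalation pathway described in the second answer (every recipient tried on one channel before anyone is tried on the next channel) can be sketched independently of Jabber; the recipient and channel names below are hypothetical:

```python
def escalation_chain(recipients, channels):
    """Yield (recipient, channel) pairs in escalation order:
    all recipients are tried on the first channel before any
    recipient is tried on the next channel."""
    for channel in channels:
        for recipient in recipients:
            yield recipient, channel

def deliver(alert, recipients, channels, acknowledged):
    """Walk the chain until someone acknowledges; return the path taken.
    `acknowledged` stands in for 'did this person ack within the timeout'."""
    path = []
    for recipient, channel in escalation_chain(recipients, channels):
        path.append((recipient, channel))
        if acknowledged(recipient, channel):
            return path
    return path

# Simulate: admin B only reacts once the alert reaches them via SMS.
path = deliver("disk full", ["admin_a", "admin_b"], ["jabber", "sms"],
               lambda r, c: (r, c) == ("admin_b", "sms"))
```

The per-alert acknowledgement problem the answer raises is exactly why `deliver` must track one path per alert, not one global "acknowledged" flag.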
http://serverfault.com/questions/19586/using-jabber-to-send-network-messages/21854
The great Angular component experiment.

So I got this idea. What if I did an experiment? What if I found a way to make Angular use components heavily inspired by React JS and, to boot, a strong FLUX pattern? Would the typical Angular developer see the benefits of React JS and FLUX? And even more interesting, could React JS and FLUX learn something from Angular?

Enforcing the experiment

If you are an Angular developer you probably already know that the Angular team is encouraging you to use directives as a component concept, preparing you for the next versions of Angular, where they will indeed use components. If not, read this article. You can also see an introduction to components in Angular 2 here.

React JS will have a huge impact on the JavaScript community; it already has, really. It is not just a View layer for writing DOM. It is a generic JavaScript component concept that can be used to render anything: Canvas, WebGL, Gibbons (the Netflix TV platform), iOS native, Android native etc. FLUX, on the other hand, is a pattern for handling state. In my opinion it is currently evolving in the direction of being a single state store you "inject" into your main application component. This single injected state store and React JS allow you to render the whole application server side in an initial state and deliver it over the wire, where React JS takes over. It does not matter what UI technology is underneath, as what you send over the wire are the calculated operations needed to put your app in the correct state, be that an HTML string or a data structure for a native layer.

So to sum up: I want to bring React JS and FLUX concepts into Angular so that you can see the benefits in an environment an Angular developer can more easily relate to. It is also interesting to see how concepts in Angular actually merge quite well with FLUX. Maybe FLUX could take some inspiration from Angular?
Last but not least, I have been working on flux-angular, which could benefit from using the evolved FLUX pattern introduced here. Follow that discussion on this issue.

Nailing the concepts

First we have to settle on the concepts. At no surprise we will of course have a component concept. It will look a lot like a React JS component. But we also need the FLUX parts. Since Facebook released information on how they implement FLUX it has evolved quite a bit. I wrote an article about Baobab, which is a single event emitting and semi-immutable state tree. I will take those concepts and bring them into the experiment. This specifically results in an actions concept and a store concept. You can play around with the experiment, try out the syntax and build something by going to this repo. I encourage you to try it out and get into the mindset of components and FLUX. I promise it will at least give you inspiration on how you think about building applications. The demo application is the todomvc.com todo application.

The requirements

There are a couple of things that Angular fundamentally does differently, which we have to find a way to handle.

Immutability

There is one very important part we have to handle to make this experiment work: completely immutable state in the store. So what does that mean? And why do we need it? Immutability, in this context, means that Angular, or you for that matter, can not change objects and arrays located in the stores directly. This solves two things. First of all it prevents Angular's two-way databinding from interfering with our application state. With a FLUX pattern we need full control of state flow. Secondly it prevents Angular from changing application state with e.g. hashes in ng-repeat, and prevents you from making changes that would affect other parts of your application using the same state. The before mentioned Baobab is only semi-immutable, meaning that a change in the tree will indeed change references, but the state grabbed from the store can still be mutated and affected by other parts of your code. So what to do?
The before mentioned Baobab is only semi-immutable, meaning that a change in the tree will indeed change references, but the state grabbed from the store can still be mutated and affected by other parts of your code. So what to do? I started looking at Freezer, which is a great project. Freezer is the same concept as Baobab, a state tree, but at the same time is completely immutable. Whenever you do a change to the state tree, you get a completely new state tree. Though the concepts are great I met a couple of issues, so I decided to create my own immutable-store. It is running under the hood of this implementation and gives a very simpe API to change the state of your application, without allowing Angular or components interfere with that state. In other words, your app and UI state gets very predictable. Monkeypatching For this experiement to work it has to look like something the Angular team could have come up with themselves, if they had a split Google/Facebook personality :-) This means we will add three new methods to Angular modules, in addition to the existing controller, directive, factory, service etc. Those methods are: component, actions and store. Lets implement! So first of all we need our component. Let me just throw the code down there and then we will go through some concepts: angular.module('app', ['experiment']) .component('myComponent', function () { return { message: 'I am a component', componentWillMount: function () { // Runs in the pre link lifecycle (before children are rendered) }, componentDidMount: function () { // Runs in the post link lifecycle (after children are rendered) }, render: function () { // Using the ES6 multiline syntax return ` <h1>Hello world - {{message}}</h1> `; } }; }); Using this component would look like this: <my-component></my-component> What we have basically done here is wrap the generic and powerful directive concept into a component specific concept, inspired by React JS. 
Let us explore this component a bit more.

Changing internal state

To change the state of the component you can define a method, call it and just change the state. When changing the state of the component you do want it to render again; two-way databinding is powerful in that sense. With React JS you would have to specifically tell the component to update with a method. Take a look at this example:

    angular.module('app', ['experiment'])
      .component('myComponent', function () {
        return {
          message: 'I am a component',
          color: 'red',
          flipColor: function () {
            this.color = this.color === 'red' ? 'blue' : 'red';
          },
          render: function () {
            return `
              <div>
                <h1 style="color: {{color}}">Hello world - {{message}}</h1>
                <button ng-click="flipColor()">Flip color</button>
              </div>
            `;
          }
        };
      });

And with React JS:

    var MyComponent = React.createClass({
      getInitialState: function () {
        return {
          message: 'I am a component',
          color: 'red'
        };
      },
      flipColor: function () {
        this.setState({
          color: this.state.color === 'red' ? 'blue' : 'red'
        });
      },
      render: function () {
        return (
          <div>
            <h1 style={'color:' + this.state.color}>Hello world - {this.state.message}</h1>
            <button onClick={this.flipColor}>Flip color</button>
          </div>
        );
      }
    });

Props (attributes)

    <my-component show-message="true" message="'Passing props to a component'"></my-component>

And our component:

    angular.module('app', ['experiment'])
      .component('myComponent', function () {
        return {
          render: function () {
            return `
              <h1>
                Hello world
                <span ng-if="props.showMessage" ng-click="logMessage(props.message)">
                  - {{props.message}}
                </span>
              </h1>
            `;
          }
        };
      });

So we have the well-known attributes concept here, but it works a bit differently. In React JS we call these attributes props, and they are normal JavaScript expressions passed to the component. Angular requires you to hardwire this relationship with a "scope" definition in a directive. It is very confusing that some attributes are evaluated as JavaScript and some are not. In my opinion it is better to define this when passing the attribute, not when receiving it. Now you clearly see what is JavaScript and what is just a string.
But where is logMessage() defined? This is where things start to become a bit interesting. The properties you add to your components in this experiment are attached to the scope of the component. The component pretty much IS what you earlier thought of as $scope. Now, scope in Angular is actually a really cool concept. It creates a decoupled relationship between the components. What this means in practice is that if some component used myComponent defined above, that component would only need to define a logMessage method and it could be used. Let me show you:

    angular.module('app', ['experiment'])
      .component('myParent', function () {
        return {
          logMessage: function (message) {
            console.log(message); // Will log: "whatup"
          },
          render: function () {
            return `
              <my-component show-message="true" message="'whatup'"></my-component>
            `;
          }
        };
      })
      .component('myComponent', function () {
        return {
          render: function () {
            return `
              <h1>
                Hello world
                <span ng-if="props.showMessage" ng-click="logMessage(props.message)">
                  - {{props.message}}
                </span>
              </h1>
            `;
          }
        };
      });

What is not so good about this is that it is harder to reason about how the logMessage method is being called. I would prefer passing the function as a prop just like React JS, but that is not possible with Angular without the reverted hard wiring we want to avoid. React JS has pretty much the exact same syntax, though it is transpiled into normal JavaScript. No parsing and evaluating, but you do have to pass the callback:

    <my-component showMessage={true} message="Passing props to a component" logMessage={this.logMessage}></my-component>

And you would build your component something like this, allowing for more of a traditional JavaScript mindset:

    var MyComponent = React.createClass({
      renderMessage: function () {
        return (
          <span onClick={this.props.logMessage.bind(null, this.props.message)}>
            - {this.props.message}
          </span>
        );
      },
      render: function () {
        var message = this.props.showMessage ? this.renderMessage() : null;
        return (
          <h1>Hello world {message}</h1>
        );
      }
    });

So this shows you clearly that React JS is "JavaScript first".
You solve dynamic behavior of your UI using traditional JavaScript, not HTML attributes. This is also partly the reason why React JS is so fast.

Building a small app

So now we have begun to look into how a component works. Now let us see why components are great! Let us create, yes, a todo application. What we need first is our main app component. I will be writing the HTML inside the components, which you would do with React JS. I hope this will show you why it is a good idea to have your tightly connected HTML together with the component logic. I will be using ES6 multiline strings to write the HTML. This is to give you an impression of how it would look using React JS, where you actually would write HTML (JSX).

    angular.module('TodoMVC', ['experiment'])
      .component('todoMvc', function () {
        return {
          render: function () {
            return `
              <div>
                <h1>My awesome Todo app</h1>
                <todo-creator></todo-creator>
                <todo-list></todo-list>
              </div>
            `;
          }
        };
      });

Our first component is our wrapper for the application. It holds the title and fires up two other components.

    angular.module('TodoMVC', ['experiment'])
      .component('todoMvc', function () { ... })
      .component('todoCreator', function () {
        return {
          title: '',
          createTodo: function () {
            // Add todo coming soon...
            this.title = '';
          },
          render: function () {
            return `
              <form ng-submit="createTodo()">
                <input ng-model="title">
              </form>
            `;
          }
        };
      });

So now we have created a completely isolated component. It does not depend on anything and you can use it as many times as you want throughout your application. Or just move it somewhere else, if you wanted to. Now you start to see how small these components are. The fear of putting lots of HTML inside your JavaScript is not really valid when you think components. They are small, pure and specific. Moving on to the list:

    angular.module('TodoMVC', ['experiment'])
      .component('todoMvc', function () { ... })
      .component('todoCreator', function () { ... })
      .component('todoList', function () {
        return {
          render: function () {
            return `
              <ul>
                // Todos coming soon...
              </ul>
            `;
          }
        };
      });

And now lets create a component for each todo. Again, we see how the scope helps us create a relationship between parent and child components. When we insert the todoItem below with ng-repeat on our todos, Angular implicitly creates a scope for each todoItem component. Think of it as Angular pre-attaching todo and $index. So we can just start using it:

    angular.module('TodoMVC', ['experiment'])
      .component('todoMvc', function () { ... })
      .component('todoCreator', function () { ... })
      .component('todoList', function () { ... })
      .component('todoItem', function () {
        return {
          remove: function () {
            // Removing a todo coming
          },
          toggle: function () {
            // Toggling a todo coming
          },
          render: function () {
            return `
              <li>
                <input type="checkbox" ng-click="toggle()" ng-checked="todo.completed">
                {{todo.title}}
                <button ng-click="remove()">remove</button>
              </li>
            `;
          }
        };
      });

So now you see how we think very differently than one would traditionally with Angular. We are thinking of each part of our application as a very focused and isolated component, instead of thinking of our application as a piece of HTML and adding behavior to it. It is more JavaScript first than HTML first. Our render methods return a UI tree description which happens to be HTML. If we used React JS, it would use this render method several times to figure out if the returned tree had changed. When changes are detected, a specific operation to sync that change with the actual UI layer would be triggered. This is not possible with Angular of course, but now you start to see why React JS is so extremely fast.

Store

So what about our todos? Where do we want to put them? In traditional Angular you would probably put them into a controller, or maybe a service if you are thinking scalability. We are going to use a concept called a store. A store holds some state for a section of your application and acts like a branch on the application state tree.
This allows you to access any state anywhere in your application and you can prepare all the state on your server on the initial load. Just put it into the application state tree and you are ready to go. Think of the state tree as the puppet master of your application, and the components as puppets. You should be able to force any UI state in your application just by changing properties in the state tree. angular.module('TodoMVC', ['experiment']) .component('todoMvc', function () { ... }) .component('todoCreator', function () { ... }) .component('todoList', function () { ... }) .component('todoItem', function () { ... }) .store('todos', function () { return { list: [] }; }); Thats it! We have now made todos.list available to all current and future components, but none of them are able to change either the list or the todos directly. Lets take a look at how we use this state in our todos list: angular.module('TodoMVC', ['experiment']) .component('todoMvc', function () { ... }) .component('todoCreator', function () { ... }) .component('todoList', function () { return { render: function () { return ` <ul> <todo-item</todo-item> </ul> `; } }; }) .component('todoItem', function () { ... }) .store('todos', function () { ... }); As we learned each component in the ng-repeat will have todo and $index attached to it. This means that the todo is available inside the todo-item component. Actions So lets look at how we would change the state of our store. In this experiment we are exposing a new method on the angular module called actions. It is pretty much just a factory: angular.module('TodoMVC', ['experiment']) .component('todoMvc', function () { ... }) .component('todoCreator', function () { ... }) .component('todoList', function () { ... }) .component('todoItem', function () { ... }) .store('todos', function () { ... 
  })
  .actions('todoActions', function (flux) {
    return {
      add: function (title) {
        var store = flux.get();
        store = store.todos.list.push({
          title: title,
          completed: false
        });
        flux.set(store);
      },
      remove: function (todo) {
        var store = flux.get();
        store = store.todos.list.splice(store.todos.list.indexOf(todo), 1);
        flux.set(store);
      },
      toggle: function (todo) {
        var store = flux.get();
        // The todo is an object from the store. All mutations on store
        // objects return the store
        store = todo.set('completed', !todo.completed);
        flux.set(store);
      }
    };
  });

And now let us update the components using the actions:

angular.module('TodoMVC', ['experiment'])
  .component('todoMvc', function () { ... })
  .component('todoCreator', function (todoActions) {
    return {
      title: '',
      createTodo: function () {
        todoActions.add(this.title);
        this.title = '';
      },
      render: function () {
        return `
          <form ng-submit="createTodo()">
            <input ng-model="title"/>
          </form>
        `;
      }
    };
  })
  .component('todoList', function () { ... })
  .component('todoItem', function (todoActions) {
    return {
      remove: function (todo) {
        todoActions.remove(todo);
      },
      toggle: function (todo) {
        todoActions.toggle(todo);
      },
      render: function () {
        return `
          <li>
            <input type="checkbox" ng-click="toggle(todo)"/>
            {{todo.title}}
            <button ng-click="remove(todo)">remove</button>
          </li>
        `;
      }
    };
  })
  .store('todos', function () { ... })
  .actions('todoActions', function () { ... });

So there we have it: our application using components and an immutable state tree.

Summary

If you are an Angular developer, I hope this little experiment gave you some insight into why React JS developers love thinking in components. Angular 2 will have a very similar component concept, though it will still rely on templates. In my opinion you are then missing out on one of the really good parts of components: having UI logic and description in one and the same file. That said, maybe Angular has a different "team target". It is of course easier for a team with split HTML/CSS knowledge and JavaScript knowledge to build Angular apps.
But as the complexity of web applications will only increase, I think a pure HTML/CSS developer will be a thing of the past. Hopefully Angular 2 will allow JSX, or at least something similar.

If you know a bit about React JS, and specifically FLUX, it is interesting to look at Angular's $rootScope. You can "inject" state at the top of your application and make it available to all components. This is something React JS is not able to do. You have to pass that "injected state" as properties down through your components, making them very dependent on each other. The challenge with bringing a "scope concept" into React JS, though, is that you might have a parent component that depends on one state, and a child component depending on another. Since rendering a component is determined by a change in the dependent state, you would get into situations where the child would not update because the parent's state did not change. Hopefully some very smart people at Facebook are working on this :-)

Okay, so thanks for going through this experiment, and hopefully it had some value to you!
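The store/actions flow described in this experiment can be sketched without Angular at all. The following is a minimal framework-free illustration of the one-way data flow, assuming the `flux.get()`/`flux.set()` semantics the article describes; the names here are this sketch's own, not a real library API:

```javascript
// Minimal standalone sketch of the store/actions idea.
// `flux` holds the application state tree; actions are the only
// code allowed to replace it.
const flux = (function () {
  let state = { todos: { list: [] } };
  return {
    get: () => state,
    set: (next) => { state = next; }
  };
})();

const todoActions = {
  add(title) {
    const store = flux.get();
    // Copy instead of mutating, mimicking the immutable state tree.
    const list = store.todos.list.concat([{ title, completed: false }]);
    flux.set({ ...store, todos: { ...store.todos, list } });
  },
  toggle(index) {
    const store = flux.get();
    const list = store.todos.list.map((t, i) =>
      i === index ? { ...t, completed: !t.completed } : t);
    flux.set({ ...store, todos: { ...store.todos, list } });
  }
};

todoActions.add('write article');
todoActions.toggle(0);
console.log(flux.get().todos.list);
```

Components would then read `flux.get()` in their render methods; because every action replaces the tree rather than mutating it, a cheap reference check is enough to know whether anything changed.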
https://christianalfoni.herokuapp.com/articles/2015_02_22_The-great-Angular-component-experiment
Chat Topic: Windows XP Expert Zone Chat
Date: Tuesday, July 25, 2006
Please Note: Portions of this Chat have been edited for clarity.

LanceZ [MSFT] (Moderator): Welcome to the Windows XP Expert Zone chat. Today you will have the opportunity to interface with people from Microsoft, and you will get to ask the experts about any issues you may be having. We also welcome constructive feedback and ideas, so please feel free to speak up! :)
LanceZ [MSFT] (Moderator): I'll begin by making introductions of our experts today by starting with myself. I'm Lance Zielinski, I manage the Windows SDK test lab.

Introductions

db [MSFT] (Expert): Hi, my name is Dale and I am a Lab Engineer in the Windows SDK group.
Ari [MSFT] (Expert): Hello all, my name is Ari Pernick and I'm a Test Developer in Windows Networking on stuff like WinHttp, WinInet and Http.sys.
Durga Gorti [MSFT] (Expert): Hi, my name is Durgaprasad Gorti (dgorti). I am the development lead for the System.Net namespace in the .NET framework. I own the .NET framework networking components like Sockets, TCP, HTTP, FTP, SMTP, etc.
gregoryh [MSFT] (Expert): Hi, my name is Greg Hartrell. I'm a Program Manager in the Security Technology Unit working on future protection technologies. Questions related to security are my specialty, so I'm happy to field questions about Windows Firewall, access control, malicious code and other security issues. But feel free to ask any type of question. 0_o
ansgar [msft] (Expert): Hello, my name is Ansgar Grosse-Wilde, I work in localization, mainly for German.
Jeff [MSFT] (Expert): Howdy - I'm Jeff Chrisope, a developer on the Windows SDK working on various cross-technology demos and samples. My primary responsibilities currently are the "Vista Bridge" samples - .NET managed APIs that expose many of the new-to-Vista native APIs.
Jonathan [MSFT] (Expert): Hi, my name is Jonathan and I am a Tester in WMI.
Kevin [MSFT] (Expert): Hi, my name is Kevin Litwack, and I'm a developer and security engineer in the System Integrity team, currently working on the BitLocker Drive Encryption feature of Windows Vista.
LuisMC [MSFT] (Expert): Hello all, my name is Luis Martinez. I'm a test dev in Windows Networking, mostly focused on QOS and qWave.
Milan [MSFT] (Expert): Hello. I am Milan Lathia and am currently working with the Enterprise Engineering Center as a Program Manager. We work with many of our technologies including Clients (2000, XP, ..), Servers (2003, ...) and so on.
Max [MSFT] (Expert): Hi! I am Massimiliano Gallo and I am a Software Design Engineer in Test. I work with Windows International Sustained Engineering.
LanceZ [MSFT] (Moderator): We have a lot of experts today, so please take advantage of this and ask the experts what's on your mind. Constructive, of course. ;)

Start of Chat

Ari [MSFT] (Expert):
Q: I am using IE 7 Beta 3... and i went to a web page... an SSL page... and i got a warning saying that my INTRANET is now turned off... what is that all about? I never saw that with IE7 beta 2
A: Intranet settings are less secure settings meant for corporate networks. IE now detects the lack of a corporate network and turns off the INTRANET zone; for more information see
Jeff [MSFT] (Expert):
Q: Are the problems with the drivers in Vista ever going to be fixed? I'm currently in a lot of public betas, and this problem really holds me back.
A: Hey Drew - yup, the driver issues in Vista are one of the most important elements of roll-out on the Windows team's radar. Internally, when we upgrade to new builds for testing, we need to go through the same hoops as y'all in the public - which forces us to feel the same pain and fix the same issues. So fear not, we are making progress every day in this area. :)
LanceZ [MSFT] (Moderator):
Q: What is the topic
A: This is a Windows XP General chat.
If you've got an issue you're experiencing with Windows XP, or feedback, this is the forum where you can ask the experts.
gregoryh [MSFT] (Expert):
Q: My disk defrag keeps saying it could not defrag some files, but the list is empty; it has always worked before
A: Which disk defrag tool are you using?
ansgar [msft] (Expert):
Q: My disk defrag keeps saying it could not defrag some files, but the list is empty; it has always worked before
A: Can you determine if system files are included in the list of files you want to defragment?
LanceZ [MSFT] (Moderator): Please make sure you keep it to a question, or a followup to a question, when using the "Ask the Experts" option; this helps us properly track the issues brought up today.
gregoryh [MSFT] (Expert):
Q: My disk defrag keeps saying it could not defrag some files, but the list is empty; it has always worked before
A: If the error message is related to not being able to access part of the drive, you will need to run a chkdsk on the drive you are trying to defragment first. Go to a command line and run "chkdsk c: /f", and you'll likely need to reboot.
tom (Expert):
Q: Are the problems with the drivers in Vista ever going to be fixed? I'm currently in a lot of public betas, and this problem really holds me back.
A: DrewMarin, can you describe the driver problem you're having?
Jeff [MSFT] (Expert):
Q: I just downloaded the beta version of IE and as soon as I tried to use it an error occurs and it shuts down. I had no trouble with the previous version. Any idea on how to fix it?
A: What error are you getting, TJ?
Milan [MSFT] (Expert):
Q: Can You Upgrade The CPU In A Laptop?
A: Hello. It generally depends on the manufacturer of the laptop. Have you checked the manual that came with the laptop?
LanceZ [MSFT] (Moderator): Medawen, please keep your questions to questions. :)
LuisMC [MSFT] (Expert):
Q: I keep getting "TV tuner not installed" when trying to use my WMC tv tuner & radio. I have tried to reinstall SP2 along with the 1.1 .NET Frame something.
Any more suggestions?
A: Is the driver of your tuner card supported in MCE?
LanceZ [MSFT] (Moderator): Farmsale, please ask your question again, using the "Ask the Experts" option so our folks may address your issue. Thanks!
Milan [MSFT] (Expert):
Q: I did Not Get A Manual With My Laptop
A: Hmmm .... okay - generally you should be able to do that if you really want to. However, you will need to know that the OEM will not be supporting it and all warranty will be void. You will need to check the motherboard specs and find a compatible processor that you can replace it with.
gregoryh [MSFT] (Expert):
Q: errors were found
A: OK, you definitely need to run chkdsk with the /F switch. This will require a reboot, but it will cause Windows to fix those errors before it starts up again. After that, you can do your defrag.
LuisMC [MSFT] (Expert):
Q: yeah it should be. It came with the computer and worked before.
A: OK, you can try System Restore to get your Media Center to the state where it was working, and then go to Windows Update and try to avoid a driver update that might be the issue.
gregoryh [MSFT] (Expert):
Q: chkdsk cannot continue in read only mode.. now what
A: You need to use the /F switch from the command line, e.g. chkdsk c: /f
Milan [MSFT] (Expert):
Q: Help removing program from my computer
A: Hi Lynn ... can you give me more details on the problem?
tom (Expert):
Q: tom, Vista had my 1394 ethernet connection not installed, my ethernet controller not installed, and my built-in wireless, so I tried installing them by getting the drivers in XP and saving them to Vista's partition, but after installing them it didn't work
A: If you go to Device Manager (do a devmgmt.msc from cmd.exe), do you see that those devices are banged out with a yellow exclamation? If they are, can you tell me what those devices are?
db [MSFT] (Expert):
Q: we had a problem here with a foreign disk issue - can XP Home access a foreign drive or only XP Pro? how is this done
A:
gregoryh [MSFT] (Expert): Medawen: excellent. BTW, only add questions when you have a question. For general comments, come back to chat 8D
LuisMC [MSFT] (Expert):
Q: Tuner card is a Hauppauge WinTV PVR PCI II. Will system restore just restore WMC or do I need to worry about losing files?
A: It will restore the OS to a previous state and will not mess with your files; however, backing up is always a good idea :)
db [MSFT] (Expert):
A: Sorry Dennis...check out.
LanceZ [MSFT] (Moderator): We've got plenty of bandwidth to answer questions. Please feel free to ask by clicking "Ask the Experts" and sending your question to us!
db [MSFT] (Expert):
Q: we had a problem here with a foreign disk issue - can XP Home access a foreign drive or only XP Pro? how is this done
A: The Foreign status occurs when you move a dynamic disk to the local computer from another computer running Windows 2000, Windows XP Professional, or . Foreign status can also occur on computers running Windows XP Home Edition that are configured to dual-boot with another operating system that uses dynamic disks (such as Windows 2000 Professional). Dynamic disks are not supported on Windows XP Home Edition or on portable computers. A warning icon appears on disks that display the Foreign status. To access data on the disk, you must add the disk to your computer's system configuration. To add a disk to your computer's system configuration, import the foreign disk (right-click the disk and then click Import Foreign Disks). Any existing volumes on the foreign disk become visible and accessible when you import the disk. For instructions describing how to move and import disks, see To move disks to another computer </resources/documentation/windows/xp/all/proddocs/en-us/dm_add_disk.mspx>
Ari [MSFT] (Expert):
Q: Tuner card is a Hauppauge WinTV PVR PCI II.
Will system restore just restore WMC or do I need to worry about losing files?
A: If system restore doesn't work, Hauppauge has good driver reinstall tools and instructions:
db [MSFT] (Expert):
A: However, you cannot access data on the disk if you are running Windows XP Home Edition. To use the disk on Windows XP Home Edition, you must convert it to a basic disk.
db [MSFT] (Expert):
Q: ok, checked that link before, was what I thought, thanks
A: Sure.
Ari [MSFT] (Expert):
Q: hello experts i have an issue with my registry windows xp pro
A: Joey: What type of issue?
tom (Expert):
Q: tom, after trying to get it working time after time I erased Vista off that laptop
A: DrewMarin, it can be frustrating at times to try out beta software. What is the make and model of your laptop?
LuisMC [MSFT] (Expert):
Q: im trying to set up my media center and my tv signal comes and goes; if i use my vcr and i pause or slow motion i see the movie, but otherwise snow, then it becomes clear
A: It's difficult to say; it depends. What kind of signal are you feeding into your tuner card? cable (coax), Video/Audio? And then try to go to Settings and reset your TV settings.
gregoryh [MSFT] (Expert): angeleye - ok, describe your problem with the registry (add a question)
LanceZ [MSFT] (Moderator):
Q: by the way whoever made the windows live web mail thing, tell them great job, I signed up for the beta too
A: I've got some contacts in Windows Live, I will be glad to pass that along. :) Glad you like the beta!
gregoryh [MSFT] (Expert):
Q: by the way whoever made the windows live web mail thing, tell them great job, I signed up for the beta too
A: Great feedback, they'll be glad to hear this. They also have a feedback form on their site that they monitor regularly.
LanceZ [MSFT] (Moderator):
Q: ok it's the system 32 problem on the startup, familiar??
A: Joey, can you please tell us what your issue is? We're not quite sure what you are asking.
gregoryh [MSFT] (Expert): angeleye - ok, have you already tried activating?
Durga Gorti [MSFT] (Expert): RobC,
gregoryh [MSFT] (Expert):
Q: Ok back from chkdsk, still having the problem
A: ok, can you paste/type out the exact error message?
tom (Expert):
A: Are you using a card-bus 1394 card or is it built-in? I'm still trying to find the specs for that model.
Ari [MSFT] (Expert):
Q: sure when i start up windows i get the system 32 file on the screen every time
A: Do you see this in a full screen text mode, or is this appearing in a window?
Durga Gorti [MSFT] (Expert): RobC, you can perhaps use the RunAs command to do this: open a command prompt, use runas /user:Administrator cmd.exe and type in the password. Under the new command prompt you can open the game and close the cmd prompt.
LanceZ [MSFT] (Moderator):
Q: only thing is I used the windows live mail desktop and I cant see my hotmail inbox? I can see everything else and I can see everything in my gmail account but other than that its nice
A: Drew, I will pass your concerns to the Windows Live team. My band's rhythm guitarist is one of their test leads. :)
gregoryh [MSFT] (Expert): angeleye - it sounds like there's a problem with your activation code. Where's your code right now? (on the back of the box?)
Durga Gorti [MSFT] (Expert): RobC, however, you need to do this every time she needs to play.
Durga Gorti [MSFT] (Expert): RobC --> Yeah I realized that
Ari [MSFT] (Expert):
Q: its in a window which i can close
A: Joey: Can you tell us the exact message?
Durga Gorti [MSFT] (Expert): RobC --> There might be other ways to do this
tom (Expert):
Q: tom, I have an HP Pavilion ze5375US and it's the 512MB RAM model (it also comes in a 256MB RAM version)
A: DrewMarin, what kind of 1394 card-bus are you using?
LanceZ [MSFT] (Moderator): Rob, can you follow up with that on the "Ask the Experts" part so we can track it better? :)
LanceZ [MSFT] (Moderator): 28 minutes to go! Plenty of time to ask more questions. Ask away!!
gregoryh [MSFT] (Expert): angeleye - you should try activating over the phone.
Do you see that option when attempting to activate?
gregoryh [MSFT] (Expert): Medawen: I'm looking up some information for your issue.. Stand by... :D
Durga Gorti [MSFT] (Expert): RobC, if your daughter is not a super expert, you could possibly use a batch file to automate this. She simply clicks on a link. You could possibly even encrypt the password.
Max [MSFT] (Expert):
Q: this should explain
A: Hi, you should use the registry editor to solve the problem: Start | Run | regedit
tom (Expert):
Q: tom, I have an HP Pavilion ze5375US and it's the 512MB RAM model (it also comes in a 256MB RAM version)
A: DrewMarin, I just found the specs for it and I see that it has a built-in one. By the way, what version of the Vista Beta were you trying?
Max [MSFT] (Expert):
Q: Just that sir?
A: are you familiar with it?
gregoryh [MSFT] (Expert):
Q: Defragmentation is complete for: (C:) Some files on this volume could not be defragmented. Please check the defragmentation report for the list of these files.
A: It appears that if the list is empty, then there's no worries. You can safely ignore this, as long as that list is empty.
Durga Gorti [MSFT] (Expert): Yes
gregoryh [MSFT] (Expert):
Q: Any other steps i should take to get my disk defrag up and running?
A: It looks like your defragmentation succeeds. Let me know if that's not the case.
Durga Gorti [MSFT] (Expert): RobC --> Yes, she can run the batch file as a power user.
Durga Gorti [MSFT] (Expert): RobC --> Ari kindly pointed out to me that you could possibly get some help from the webpage
Jeff [MSFT] (Expert):
Q: Do You Know When SP3 Will Be Out?
A: The last public release date ("preliminary") for SP3 for both XP Home and Pro was second half of 2007.
db [MSFT] (Expert):
Q: Do You Know When SP3 Will Be Out?
A: And here's a public link -
gregoryh [MSFT] (Expert):
Q: Also recently another computer on my network is going and giving data to me through netbios-ssn.. is this normal
A: Are you saying that your firewall is blocking this data?
If you are directly connected to the Internet, then this can be normal. However, if someone is copying files to your computer, then it is not.
Max [MSFT] (Expert):
Q: yes im there already, does this sound simple to you?
A: I understand it looks quite complicated. Anyway, I experienced this problem in the past as well. Mainly you should look in the node paths under RESOLUTION, and verify that there are no null entries.
LanceZ [MSFT] (Moderator): 20 minutes to go! I'm sure you have a lot of questions, ideas, etc to give the experts!
Ari [MSFT] (Expert):
Q: Can anyone help me with a Microsoft Office CD?
A: What type of problem are you having?
tom (Expert):
Q: tom, I have no idea, that is just how it appears under network connections, and I was using 32-bit; my cpu doesn't support 64-bit
A: DrewMarin, your laptop has a built-in 1394 connector, so Windows XP and Vista will automatically set up a network-over-1394 connection; that is expected. If you did see the 1394 network connection then the driver was installed. So the problem is that the driver isn't installed for the other network adapter.
gregoryh [MSFT] (Expert):
Q: norton is saying my lappy is giving me data
A: Do you have the exact message? It sounds like you have a Norton firewall, and it's blocking traffic from other people on your network. I'm sure the Symantec support site can explain how to disable these messages if they become annoying. :D
LanceZ [MSFT] (Moderator):
Q: Do You Have A Date Of When This Room Is Closing?
A: The moderated portion of this chat will close at 11am PST.
Durga Gorti [MSFT] (Expert): RobC --> I am sorry, I take that back. Even if you have a batch file, you need to be present to type the password, because RunAs takes the password from the command prompt. I am sure that the website I pointed out has some tips.
gregoryh [MSFT] (Expert):
Q: And i cannot ping the IP its coming from
A: As a general rule, you shouldn't do this.
If someone is trying to connect to your computer, you shouldn't "reveal" yourself by pinging them. You might want to create a firewall rule that blocks that IP address completely.
gregoryh [MSFT] (Expert):
Q: no when i go to my norton LOG it is saying i am receiving data from 192.168.1.01 through netbios-ssn 139
A: Ah, this makes sense now. Do you own a router (e.g. Linksys, Netgear, etc)?
gregoryh [MSFT] (Expert):
Q: here's the thing.. usually that IP is my handheld or lappy, but when i ping it it says something like no ip found
A: This is normal if two Windows PCs have file sharing enabled and are on the same network. Netbios tries to discover other Windows PCs to make file sharing easier.
gregoryh [MSFT] (Expert):
Q: laptop is on wireless, but i am not trying to get or send data
A: You can disable file sharing on your laptop, or configure your Norton firewall to ignore this if it gets annoying.
LanceZ [MSFT] (Moderator):
Q: LanceZ, I meant when is this chat losing the regular P2P option?
A: OK, I understand your question now. Many apologies! I do not have a date on this change yet.
gregoryh [MSFT] (Expert): Medawen: no problem
LanceZ [MSFT] (Moderator): 12 more minutes left to the chat! If you have questions, send 'em our way!
Max [MSFT] (Expert):
Q: Use Registry Editor to view the following two Windows registry keys: how is this done sir???
A: Once you open the registry editor, the given registry key path, for instance:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
represents a list of the nodes which have to be opened in order to find the keys that could cause the problem. So double-click on HKEY_LOCAL_MACHINE, then double-click on SOFTWARE, and so on till you get to the "Run" node. Here the (Default) key will be empty, but that is normal; the other keys should contain values like a file path (like C:\WINDOWS\system32\SampleName.exe) with a few options (like -s or /logon).
tom (Expert):
Q: tom, but it showed up as non-installed hardware; but I also understand it's a beta and I can't expect it all to work, like most people my age do
A: DrewMarin, I have a Vista video driver problem with my Dell Inspiron 600m, so I feel your pain. :) I think I need you to give me the model of your ethernet device. I assume you're running XP now; can you open cmd.exe and then type in "devmgmt.msc" and press enter to open up Device Manager? Then double-click to expand the "Network Adapter" tree. Double-click on each of the adapters and open the properties window. Click on the Details tab and then give me the "Device Instance Id".
Max [MSFT] (Expert):
Q: Is there any way you can guide me through this, since I have not used regedit too often? all of the registry mechanics wont fix it
A: Tell me if anything is unclear in the steps I have just given.
tom (Expert):
Q: tom, Im not on that pc, Im on my server (wow, a teenager with knowledge) and that AC cord isn't working right now
A: DrewMarin, I'll need that info to figure out if the device is supported or not. Come back to the next chat with that info and we can get more answers for you. :)
gregoryh [MSFT] (Expert): Peter Brown: I'm looking up an official answer for you.
gregoryh [MSFT] (Expert): Peter: Jeff has your answer.
Jeff V [MSFT] (Expert):
Q: Why Do You Have To Uninstall Norton Antivirus Before You Can Install OneCare?
A: Peter, this KB article describes the conflicts that may arise when installing OneCare on a machine with antivirus programs currently installed:
LanceZ [MSFT] (Moderator): 4 minutes left to the chat!
tom (Expert):
Q: any info on how to update iis?
A: DrewMarin, what OS is this?
Ari [MSFT] (Expert):
Q: Would The Windows Live™ Search beta-mobile Work On A Blackberry 7230?
A: I'm not quite sure, but there is a feedback page where you can submit any issues you encounter, and Blackberry is a phone type:
Max [MSFT] (Expert):
Q: ok what exactly is the (run node)?
A: The regedit application shows you a view which is a tree, and each folder-like icon represents a node. The "Run" node is the node you have to reach; then left-click on it to view the keys contained on the right pane. The values of these keys have to be examined for the system32 problem.
LuisMC [MSFT] (Expert):
Q: I have a question about the Yahoo Messenger
A: What's your question? :)
gregoryh [MSFT] (Expert):
Q: Would The Windows Live™ Search beta-mobile Work On A Blackberry 7230?
A: The Live Search mobile beta FAQ has a list of technical requirements for your mobile. Your Blackberry may or may not support it.
db [MSFT] (Expert):
Q: ok what exactly is the (run node)?
A: Joey, you should check out the following KB article -;EN-US;257824. It explains how to use the registry editor.
LanceZ [MSFT] (Moderator): OK, we are getting ready to end the moderated portion of this chat. I want to thank everyone for coming today and asking your questions!
tom (Expert):
Q: tom, xp pro
A: DrewMarin, I don't think there are updates to IIS for Windows XP.
gregoryh [MSFT] (Expert): Thanks everyone, bye for now.
LanceZ [MSFT] (Moderator):
Q: How come i get an error message when i open the Yahoo Messenger?
A: rewss, we are unable to support 3rd party applications. You will need to contact Yahoo's technical support to address that.
db [MSFT] (Expert):Q: How come i get error messege, when i open the Yahoo Messenger?A: Sorry Joey, the correct KB article is 293130 LanceZ [MSFT] (Moderator):Our time is up. Thank you for coming. Next chat will be August 29th, same time, same channel! :) Jeff V [MSFT] (Expert):Thanks for coming!
http://www.microsoft.com/windowsxp/expertzone/chats/transcripts/06_0725_ez_xp.mspx
Ecuador 🇪🇨

Get import and export customs regulations before travelling to Ecuador. Items allowed for import are 300mL of perfume per person, and 600mL of perfume per family group. Prohibited items include any media featuring child pornography. Among the restricted items: travellers may bring a maximum of 2 live pets into Ecuador.

Ecuador is part of the Americas, with its main city at Quito. It is a developing country with a population of 17M people. The main currency is the US Dollar. The language spoken is Spanish.
https://visalist.io/ecuador/customs
Here is a listing of C interview questions on "Formatted Input" along with answers, explanations and/or solutions:

1. Which of the following doesn't require an & for the input in scanf?
a) char name[10];
b) int name[10];
c) float name[10];
d) All of the mentioned

2. Which of the following is an invalid method for input?
a) scanf("%d%d%d", &a, &b, &c);
b) scanf("%d %d %d", &a, &b, &c);
c) scanf("Three values are %d %d %d", &a, &b, &c);
d) None of the mentioned

3. Which of the following represents the function prototype for scanf?
a) void scanf(char *format, …)
b) int scanf(char *format, …)
c) char scanf(int format, …)
d) char *scanf(char *format, …)

4. scanf returns as its value
a) Number of successfully matched and assigned input items
b) Nothing
c) Number of characters properly printed
d) Error

5. What is the output of this C code?

#include <stdio.h>
void main()
{
    int n;
    scanf("%d", n);
    printf("%d", n);
}

a) Prints the number that was entered
b) Segmentation fault
c) Nothing
d) Varies

6. int sscanf(char *string, char *format, arg1, arg2, …)
a) Scans the string according to the format in format and stores the resulting values through arg1, arg2, etc.
b) The arguments arg1, arg2 etc. must be pointers
c) Both a & b
d) None of the mentioned

7. The conversion characters d, i, o, u, and x may be preceded by h in scanf to indicate
a) A pointer to short
b) A pointer to long
c) Nothing
d) Error

8. What is the output of this C code (when 4 and 5 are entered)?

#include <stdio.h>
void main()
{
    int m, n;
    printf("enter a number");
    scanf("%d", &n);
    scanf("%d", &m);
    printf("%d\t%d\n", n, m);
}

a) Error
b) 4 junkvalue
c) Junkvalue 5
d) 4 5

Sanfoundry Global Education & Learning Series – C Programming Language.
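As a hedged illustration of the behavior several of these questions test (the return value of the scanf family, and sscanf from question 6), here is a small self-contained sketch; sscanf is used so no interactive input is needed:

#include <assert.h>
#include <stdio.h>

/* The scanf family returns the number of input items successfully
   matched and assigned (question 4). */
int main(void)
{
    int a = 0, b = 0, c = 0;
    int matched;

    /* All three conversions succeed. */
    matched = sscanf("4 5 6", "%d %d %d", &a, &b, &c);
    assert(matched == 3 && a == 4 && b == 5 && c == 6);

    /* Matching stops at the first failure: "x" is not an integer,
       so only one item is assigned; b and c keep their old values. */
    matched = sscanf("7 x 9", "%d %d %d", &a, &b, &c);
    assert(matched == 1 && a == 7);

    /* A char array (question 1) is read with %s and needs no '&',
       because the array name already decays to a pointer. */
    char name[10];
    matched = sscanf("hello", "%9s", name);
    assert(matched == 1);

    printf("all checks passed\n");
    return 0;
}

Running this prints "all checks passed"; removing any & from the integer conversions would be undefined behavior, which is why question 5's missing & typically crashes.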
http://www.sanfoundry.com/c-interview-questions-formatted-input/
Learn to use the GNU Tar compression tool for Unix-like systems

Reading time: 30 minutes | Coding time: 5 minutes

GNU Tar (Tape Archiver) is an open source file archiving and compression tool. In this article, we will explore how to use it along with its different options. We will cover the following sub-topics:

- Create a .tar archive file
- Extract a .tar archive file
- List the contents of an archive file
- Append files at the end of archive files
- Create a single archive file for multiple file systems
- Create a .tar.gz archive file
- Extract a .tar.gz archive file
- Shell script to understand .tar versus .tar.gz compressed files
- Check the diff between an archive file and the source file system
- Update an archive after changes in the file system

Syntax: tar -[Options] filename

Options:
-A, --catenate, --concatenate: appends one tar archive at the end of another tar archive.
-c, --create: creates a new .tar or .tar.gz archive
-d, --diff, --compare: compares the file system and its archived file
--delete: deletes files from the archive
-r, --append: appends a file at the end of the tar file
-t, --list: shows all the files present in the archived tar file
-u, --update: adds the files from the file system not present in the archive file
-x, --extract, --get: extracts the files from the archive to the destination folder.
-v, --verbose: verbosely gives a summary of each step in the execution of the tar command.
-z, --gzip: creates or decompresses a tar.gz file

Note: Append adds a file at the end of a tar file, while concatenation adds another tar archive at the end of the tar file.

1.
Create a .tar archive file

Syntax: tar -c[v]f /{destination address}/{compressedfilename}.tar {file system}

Here,
- -c: option is used to create a .tar file
- -v: is an optional tag for verbose, i.e., display a summary of each task performed during file compression
- -f: tag is used to access the files to be archived.
- tar creates a compressed file called {compressedfilename}.tar and stores it at the destination address.
- The file system to be archived must be specified with its absolute address.

Implementation:

tar cvf compress.tar /home/nishkarshraj/Desktop/HelloWorld

- A HelloWorld directory exists at the absolute path (with respect to the root directory) /home/nishkarshraj/Desktop, where nishkarshraj is a user on the Linux machine.
- tar compresses the files into an archive file compress.tar and displays progress verbosely.
- Since the destination path of the compress.tar file is not specified, it is stored in the current directory of execution of the command, i.e., the root directory.
- The first command, ls, shows the content of the current directory, which has no archive files.
- The tar command compresses the HelloWorld directory at the specified path and creates a compress.tar file in the current directory.
- The ls command entered again shows the existence of compress.tar in the current directory.

2. Extract a .tar archive file

Syntax: tar -x[v]f {path to}/{compressedfilename}.tar

- -x: tag specifies tar to extract the archive file
- -v: is an optional tag which displays a summary of each step of the tar extraction.
- -f: tag fetches each file of the archive to be extracted.
- The tar tool extracts the {compressedfilename}.tar file into the same folder where the file system existed before being compressed.

Note: The reason tar extracts the files to the same location from which they were archived is that .tar files store full file paths rather than bare filenames.
Thus, a file called file1.txt stored at /home/Desktop will be stored as /home/Desktop/file1.txt in the archive rather than as file1.txt.

Implementation:

tar xvf compress.tar

- tar extracts the archive file present at the specified path (here, no path is specified as a prefix of the tar file, so the current directory is taken) and sends the extracted files to the same file system from which they were compressed.

Working on the same compress.tar file which was created in Task 1:

- First, list the content of /home/nishkarshraj/Desktop using the ls command and check that it does not contain the HelloWorld directory.
- Use the tar tool to extract the files from the archive.
- List the content of /home/nishkarshraj/Desktop again to verify that the HelloWorld directory is created.

3. List contents of the archive file

It is possible to see the individual files present in the archive file using the tar command.

Syntax: tar -tf filename.tar

- The -t option is used to list the content of the archive file
- The -f option fetches each file present in the archive file

4. Append files at the end of archive files

It is possible to append files at the end of archive files using the tar command.

Syntax: tar -rf {filename}.tar {file to be attached}
or
tar --append -f {filename}.tar {file to be attached}

- The -r or --append tag is used to append the specified file at the end of the archive file.

Here, the following commands are used on the shell:

- tar -cvf file.tar /home/nishkarshraj/Desktop/HelloWorld
  Creates an archive file for the specified HelloWorld directory at the current location (root, /).
- ls
  Lists the content of the current folder to verify the creation of the archive file, highlighted in red color.
- echo "test data" >> test.txt
  Creates a new file called test.txt with the content "test data".
- ls
  Lists the content of the current folder to verify the creation of the test.txt file.
- tar -rf file.tar test.txt
  Appends the test.txt file at the end of the file.tar archive.
- tar -tf file.tar
  Lists the content of the file.tar file, showing the newly added test.txt file at the end of it.

5. Create a single archive file for multiple file systems

Multiple file systems can be compressed into one archive file by tar. Specify all the file systems to be compressed in a space-separated list after the filename.tar in the tar command.

Implementation: Here, two directories, HelloWorld/ and Test/, are compressed into a single archive file.

6. Create a .tar.gz archive file

The tar tool can be used to create another type of archive file with the extension .tar.gz, which follows the GNU compression algorithm.

Syntax: tar -c[v]zf {destination path}/{filename}.tar.gz {file system}

- -z: option specifies that tar should create an archive file using the GNU compression algorithm

Implementation:

tar cvzf file1.tar.gz /home/nishkarshraj/Desktop/HelloWorld

- It creates a file1.tar.gz archive file in the current directory (here, the root directory, /).
- The source file system to be compressed is HelloWorld in the /home/nishkarshraj/Desktop path.
- The ls command displays that no archive file is present in the current folder (here, root).
- The tar command compresses the HelloWorld directory at the specified path into an archive file file1.tar.gz in the current path.
- The ls command entered again displays the newly created .tar.gz archive file.

7. Extract a .tar.gz archive file

Syntax: tar -x[v]zf {path to}/{filename}.tar.gz

It extracts all files in the filename.tar.gz archive and stores them in the source file system.

Implementation:

tar xvzf file1.tar.gz

- The ls command for /home/nishkarshraj/Desktop shows that the HelloWorld directory does not exist in the path.
- tar xvzf on the file1.tar.gz archive file extracts the folder into the /home/nishkarshraj/Desktop path.

8. Shell script to understand .tar versus .tar.gz compressed files

Generally speaking, GNU compressed archive files with extension .tar.gz are more efficient than normal .tar archive files, but this is not true for all file systems.
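Sections 6 and 7 can likewise be exercised as a round trip with the -z flag. The directory name and file contents are examples of my own:

```shell
# .tar.gz round trip: compress with gzip, delete the source, restore it.
set -e
cd "$(mktemp -d)"

mkdir HelloWorld
echo "compressed" > HelloWorld/notes.txt

tar -czf file1.tar.gz HelloWorld     # create with gzip compression (section 6)
rm -r HelloWorld                     # remove the source directory...
tar -xzf file1.tar.gz                # ...and restore it from the archive (section 7)
cat HelloWorld/notes.txt             # prints: compressed
```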
Here, a shell script is created to compress the same file system (the HelloWorld directory) using both simple archiving and the GNU compression algorithm, and their respective sizes are displayed using the du disk-usage command.

Code:

#!/bin/bash

# Simple compression
tar cvf file1.tar /home/nishkarshraj/Desktop/HelloWorld
du -sh file1.tar

# GNU compression
tar cvzf file2.tar.gz /home/nishkarshraj/Desktop/HelloWorld
du -sh file2.tar.gz

Output:

Here,

- Disk usage of the .tar file: 12 KB
- Disk usage of the .tar.gz file: 4 KB

Thus, .tar.gz files have a higher compression rate.

9. Check diff between archive file and source file system

The tar tool can be used to check the difference between a .tar archive file and the source file system.

Syntax: tar -dvf {filename}.tar {path of source folder}

- The -d option is used to see the diff.

Let's create an archive file from the same HelloWorld directory, called file.tar, and then change the directory to check the diff.

Creation of file.tar for the HelloWorld directory

Modify the HelloWorld directory

Explanation of the image:

The HelloWorld directory consists of two files: Intro.md and test.txt

- We see the diff between the file system and the archive file. Since no modifications are done, diff works as a listing of the files in the archive.
- We modify the test.txt file by adding the string "mod" at the end of the file.
- We see the diff again and the tar command lists the files in the archive along with the messages:
  mod time differs: the modification time of test.txt in the archive file and in the file system differs.
  size differs: the size of the test.txt file in the archive differs from that in the file system.

Deleting files from the file system

Check the diff if files are deleted from the file system.

Explanation of the image:

- We remove the test.txt file using the rm command.
- We see the diff and it lists the content of the archive file with the following message after test.txt:
  Warning: Cannot stat: No such file or directory
  This message signifies that the test.txt file in the archive is no longer mapped to a file in the file system, which means the file has been deleted from the file system.

Creation of new files in the file system

We check the diff of the archive and the file system by creating new files that do not exist, and thus are not mapped, in the archive file.

Explanation of the image:

- We create a new file called new.txt containing the string "new".
- We see the diff, but it does not show any output related to new.txt because there was no such file at the creation of the archive file.

Conclusion: The diff function maps the individual files in the archive against the original file system to check for modification with respect to modification time and size, and also to check whether the original files are deleted, but it does not check for the creation of new files in the same file system.

10. Update the archive file with the modified file system

Tar can be used to update the archive file to have the same content as that of the modified source file system that has a diff with it.

Syntax: tar -uf {filename.tar} {path to source file system}
or
tar --update -f {filename.tar} {path to source file system}

Explanation: Here, we continue from the last step of diff, with a newly created new.txt file and a deleted test.txt file as the diff with the archive file.

- We update the tar file with the current state of the file system.
- On seeing the diff again:
  - The new.txt file is added.
  - The test.txt file is not removed from the archive even though it is deleted from the main file system.

Conclusion: The update command for the tar tool updates archived files if they are modified and adds new files if created, but does not remove archived files that are deleted from the source file system.

References/Further Reading:

GNU Tar manual page for Linux
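The conclusions of sections 9 and 10 — that -u adds new files and refreshes modified ones, but never removes archive members — can be reproduced in a short session. Paths and names here are my own examples:

```shell
# Sketch of the diff/update behavior: delete one file, create another,
# then update the archive and inspect its member list.
set -e
cd "$(mktemp -d)"

mkdir HelloWorld
echo "intro" > HelloWorld/Intro.md
echo "test"  > HelloWorld/test.txt
tar -cf file.tar HelloWorld

rm HelloWorld/test.txt               # delete a file from the file system
echo "new" > HelloWorld/new.txt      # create a file unknown to the archive

tar -uf file.tar HelloWorld          # update: appends new.txt...
tar -tf file.tar                     # ...but test.txt is still listed
```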
https://iq.opengenus.org/tar-compression-tool-for-unix-like-systems/
Closed Bug 1398272 Opened 4 years ago Closed 3 years ago

tabs.onUpdated listener triggers broken consistency of tabs.Tab.id, by a tab moved across multiple windows

Categories (WebExtensions :: General, defect, P2)
Tracking (firefox-esr60 61+ fixed, firefox57 wontfix, firefox59 wontfix, firefox60 wontfix, firefox61 verified)
mozilla61
People (Reporter: yuki, Assigned: zombie)
References
Details
Attachments (6 files)

When an addon registers any listener for browser.tabs.onUpdated, a tab moved across windows gets a new id unexpectedly.

Steps to reproduce:

1. Install any addon which registers a browser.tabs.onUpdated listener.
2. Open the debug console for the addon and select its "Console" pane.
3. Open two browser windows, A and B.
4. Open a new tab in window A, and click it to activate.
5. Get the id of the tab. For example:
   browser.tabs.query({ active: true, currentWindow: true }).then(t=>console.log(t[0].id))
6. Drag the tab and drop it into the tab bar of window B.
7. Click the dropped tab to activate.
8. Get the id of the tab again, in some way like step 5.

Actual result:
It reports a new id, different from the one before the tab was moved.

Expected result:
It reports a consistent id, the same as the one before the tab was moved.

Additional information:

* This problem doesn't happen if the addon doesn't register any listener for tabs.onUpdated.
* This problem affects only the namespace of the addon which registers a listener for tabs.onUpdated. Other addons still get a consistent id for the moved tab.
* tabs.onActivated for the tab gets the correct consistent id as its first argument.
* If you see this problem, the result of browser.tabs.get(consistent id) returns a tab object with a wrong (incremented) id.
* I've confirmed this on: Nightly 57.0a1 20170908100218 (e10s-enabled)
* I couldn't reproduce this problem on Firefox ESR52 (e10s-disabled).
These testcases have a simple background.js:

// only "testcase-wrong.xpi" has this listener.
browser.tabs.onUpdated.addListener((aTabId, aChangeInfo, aTab) => {
  console.log(`wrong/tabs.onUpdated window${aTab.windowId}-tab${aTabId}`);
});

// following listeners are common.
browser.tabs.onAttached.addListener((aTabId, aAttachInfo) => {
  console.log(`wrong/tabs.onAttached window${aAttachInfo.newWindowId}-tab${aTabId}`);
});
browser.tabs.onActivated.addListener(aActiveInfo => {
  console.log(`wrong/tabs.onActivated window${aActiveInfo.windowId}-tab${aActiveInfo.tabId}`);
});

If you install both testcases at the same time, you'll see that only the "wrong" case reports an incremented id for the moved tab when you repeatedly move a tab from one window to another.

Summarized logs on my environment with these two testcases:

When I click a tab on window A:
> correct/tabs.onActivated window3-tab2
> wrong/tabs.onActivated window3-tab2

When I move the tab to window B:
> wrong/tabs.onUpdated window12-tab5 <= wrong id
> correct/tabs.onAttached window12-tab2
> wrong/tabs.onAttached window12-tab2
> correct/tabs.onActivated window12-tab2
> wrong/tabs.onActivated window12-tab2
> correct/tabs.onActivated window12-tab2
> wrong/tabs.onActivated window12-tab2
> correct/tabs.onActivated window12-tab2
> wrong/tabs.onActivated window12-tab2

When I move the tab to window A again:
> wrong/tabs.onUpdated window3-tab7 <= wrong id
> correct/tabs.onAttached window3-tab2
> wrong/tabs.onAttached window3-tab2
> correct/tabs.onActivated window3-tab2
> wrong/tabs.onActivated window3-tab2
> correct/tabs.onActivated window3-tab2
> wrong/tabs.onActivated window3-tab2
> correct/tabs.onActivated window3-tab2
> wrong/tabs.onActivated window3-tab2

Related issue on an actual addon:

Assignee: nobody → kmaglione+bmo
Priority: -- → P2

FYI: I've published a library for the workaround.
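For anyone wanting to rebuild such a testcase, the background.js above would need a minimal WebExtension manifest alongside it. The name and version here are placeholders, not taken from the original XPIs:

```json
{
  "manifest_version": 2,
  "name": "tab-id-testcase",
  "version": "1.0",
  "permissions": ["tabs"],
  "background": {
    "scripts": ["background.js"]
  }
}
```

The "tabs" permission is what allows the listeners to see tab ids and window ids in their event arguments.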
I think that Bug 1440015, which I have recently reported, may be a duplicate of this bug. I can confirm that Bug 1440015 only happens if there is a tabs.onUpdated() listener.

For my Tile Tabs WE add-on, this bug is a big deal, breaking key functionality. The 'tabId' changing when a tab is moved between windows is a serious error - and should be a high priority to fix.

Assignee: kmaglione+bmo → tomica

Comment on attachment 8962198 [details]
Bug 1398272 - Prevent onUpdated from breaking tab IDs for adopted tabs

Thanks!

::: browser/components/extensions/test/browser/browser_ext_tabs_move_window.js:20
(Diff revision 3)
>  async background() {
>    let tabs = await browser.tabs.query({url: "<all_urls>"});
>    let destination = tabs[0];
>    let source = tabs[1]; // skip over about:blank in window1
>
> +  browser.tabs.onUpdated.addListener(dummy => {

Nit: s/dummy/()/ please.

Attachment #8962198 - Flags: review?(kmaglione+bmo) → review+

Pushed by tomica@gmail.com:
Prevent onUpdated from breaking tab IDs for adopted tabs r=kmag

Status: NEW → RESOLVED
Closed: 3 years ago
status-firefox61: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla61

Is it possible to fix this issue for the release of Firefox 60 ESR next month?

Verified as fixed in FF 61. Also, I can confirm that it was reproducible in FF 59. I will attach before and after fix screenshots.

Status: RESOLVED → VERIFIED

(In reply to Baptiste Thémine from comment #17)
> Is it possible to fix this issue for the release of Firefox 60 ESR next
> month?

It would be good to have it in 60 ESR, as Baptiste asked. It's quite critical for web extensions to keep tabId consistency, and keeping this bug for the next year would be bad. I don't see any "flag approval-mozilla-beta" when I click on details. Where is it? I see only a form and submit button. Is it something that comes after submit?

It's too late to uplift this to 60. Is it too big of a change for the 2018-06-26 ESR 60.1?
It is a simple fix; had I noticed, I would have requested it last week. TBH I'm uncertain how to uplift into any ESR point release, but that is a good question. Let's see if we can find an answer...

status-firefox59: --- → affected
status-firefox60: --- → affected
status-firefox-esr60: --- → affected
Flags: needinfo?(sescalante)

Comment on attachment 8962198 [details]
Bug 1398272 - Prevent onUpdated from breaking tab IDs for adopted tabs

Since it's late in the beta cycle, I'm nominating for esr60 directly:

> [Approval Request Comment]
> If this is not a sec:{high,crit} bug, please state case for ESR consideration: This is a simple one-line fix that would be good to fix for ESR.
> User impact if declined: Any tab-managing extension is partially broken when tabs are moved between windows.
> Fix Landed on Version: Nightly while it was at version 61
> Risk to taking this patch (and alternatives if risky): Minimal risk, it's a simple patch that is fully understood and well tested.
> String or UUID changes made by this patch: none

Attachment #8962198 - Flags: approval-mozilla-esr60?
Attachment #8962198 - Flags: approval-mozilla-beta?

Not a new issue, so probably not something I want to take for 60 RC2.

Comment on attachment 8962198 [details]
Bug 1398272 - Prevent onUpdated from breaking tab IDs for adopted tabs

FYI, this patch doesn't graft cleanly to ESR60. Please attach a rebased patch and re-request approval.

Flags: needinfo?(sescalante) → needinfo?(tomica)
Attachment #8962198 - Flags: approval-mozilla-esr60?

> [Approval Request Comment]
see comment #27

Flags: needinfo?(tomica)
Attachment #8975961 - Flags: approval-mozilla-esr60?

Comment on attachment 8975961 [details] [diff] [review]
big-1398272-uplift.patch

fix for tab management extensions in 60.1esr

Attachment #8975961 - Flags: approval-mozilla-esr60? → approval-mozilla-esr60+
Flags: in-testsuite+
Product: Toolkit → WebExtensions
https://bugzilla.mozilla.org/show_bug.cgi?id=1398272
This took me a while to grok, and perhaps by mentioning it here, it'll prevent other people from making the same mistake as I did - and perhaps prevent me from making the same mistake again.

In the ZCatalog, when you set up indexes you can give them a name and an indexed attribute. If you omit the indexed attribute, it'll try to hook into the objects by the name of the index. For example, if you set the index to be title with no indexed attribute, it'll fetch the title attribute of the objects it catalogs. But if you set the indexed attribute to be something like idx_getTitle, you can do something like this in your class:

def idx_getTitle(self):
    """ return title as we want it to be available in the ZCatalog """
    return re.sub('<*.?>', '', self.title)

The same can not be done with indexes of type DateIndex. I don't know why it is so, but perhaps there's a good explanation that I don't understand. If you use the ZMI it's clear that you can't add an indexed attribute, but nothing stops you from adding one in pure Python like you do with all other indexes:

from ZPublisher.HTTPRequest import record

zcatalog = self.MyCatalog
indexes = zcatalog._catalog.indexes

# this works
extra = record()
extra.indexed_attrs = 'getSearchableCountry'
zcatalog.addIndex('country', 'FieldIndex', extra)

# this does NOT work!
extra = record()
extra.indexed_attrs = 'getPublishDateHourless'
zcatalog.addIndex('publish_date', 'DateIndex', extra)

If you run this code, it'll even pick up the method getPublishDateHourless in the ZMI view of the catalog's indexes. But it's never run!

By the way, what I wanted to achieve was to index the publish date of certain objects, but only index them without the hour/minute/second bit.
Because I couldn't have such a "proxy method", I instead searched on the publish_date index with a range like this:

zcatalog = self.MyCatalog
v = DateTime('2007/12/13')
query = {'query': [v, v + 1], 'range': 'min:max'}
return zcatalog.searchResults(publish_date=query)
https://api.minimalcss.app/plog/dateindex-indexed-attributes
Allows safe timed executions of scripts by adding elapsed time checks into loops (for, while) and at the start of closures and methods, and throwing an exception if a timeout occurs.

This is especially useful when executing foreign scripts that you do not have control over. Inject this transformation into a script that you want to time out after a specified amount of time. Annotating anything in a script will cause for loops, while loops, methods, and closures to make an elapsed time check and throw a TimeoutException if the check yields true. The annotation by default will apply to any classes defined in the script as well. Annotating a class will cause (by default) all classes in the entire file ('Compilation Unit') to be enhanced. You can fine tune what is enhanced using the annotation parameters. Static methods and static fields are ignored.

The following is sample usage of the annotation, forcing the script to time out after 5 minutes (300 seconds):

import groovy.transform.TimedInterrupt
import java.util.concurrent.TimeUnit

@TimedInterrupt(value = 300L, unit = TimeUnit.SECONDS)
class MyClass {
    def method() {
        println '...'
    }
}

This sample script will be transformed at compile time to something that resembles this:

import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

public class MyClass {
    // XXXXXX below is a placeholder for a hashCode value at runtime
    final private long timedInterruptXXXXXX$expireTime
    final private java.util.Date timedInterruptXXXXXX$startTime

    public MyClass() {
        timedInterruptXXXXXX$expireTime = System.nanoTime() + TimeUnit.NANOSECONDS.convert(300, TimeUnit.SECONDS)
        timedInterruptXXXXXX$startTime = new java.util.Date()
    }

    public java.lang.Object method() {
        if (timedInterruptXXXXXX$expireTime < System.nanoTime()) {
            throw new TimeoutException('Execution timed out after 300 units. Start time: ' + timedInterruptXXXXXX$startTime)
        }
        return this.println('...')
    }
}

See the unit test for this class for additional examples.
applyToAllClasses

Set this to false if you have multiple classes within one source file and only want timeout checks on some of the classes (or you want different time constraints on different classes). Place an annotation with appropriate parameters on each class you want enhanced. Set to true (the default) for blanket coverage of timeout checks on all methods, loops and closures within all classes/script code. For even finer-grained control, see applyToAllMembers.

applyToAllMembers

Set this to false if you have multiple methods/closures within a class or script and only want timeout checks on some of them (or you want different time constraints on different methods/closures). Place annotations with appropriate parameters on the methods/closures that you want enhanced. When false, applyToAllClasses is automatically set to false. Set to true (the default) for blanket coverage of timeout checks on all methods, loops and closures within the class/script.

checkOnMethodStart

By default a time check is added to the start of all user-defined methods. To turn this off, simply set this parameter to false.

thrown

The type of exception thrown when the timeout is reached.

unit

The TimeUnit of the value parameter. By default it is TimeUnit.SECONDS.

value

The maximum elapsed time the script will be allowed to run for. By default it is measured in seconds.
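To make the injected check concrete outside of Groovy, here is a hand-written Java sketch of the same mechanism the transform generates. The class and method names here are my own, not part of the Groovy library:

```java
import java.util.Date;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hand-written equivalent of the fields and check that @TimedInterrupt injects.
class TimedWorker {
    private final long expireTime;
    private final Date startTime;

    TimedWorker(long value, TimeUnit unit) {
        expireTime = System.nanoTime() + TimeUnit.NANOSECONDS.convert(value, unit);
        startTime = new Date();
    }

    private void checkTimeout() throws TimeoutException {
        if (expireTime < System.nanoTime()) {
            throw new TimeoutException("Execution timed out. Start time: " + startTime);
        }
    }

    // The transform adds a check like this on method entry and on every loop pass.
    long countTo(long n) throws TimeoutException {
        checkTimeout();
        long i = 0;
        while (i < n) {
            checkTimeout();
            i++;
        }
        return i;
    }
}
```

The key point is that the budget is computed once, up front, and every loop iteration merely compares a saved deadline against System.nanoTime(), so the per-iteration cost is small.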
http://docs.groovy-lang.org/latest/html/gapi/groovy/transform/TimedInterrupt.html
Hello, I'm new to this forum, and I like it a lot. It is just what I was looking for.

I'm new at C++. For now, I'm at input/output files and I have a problem. I have to write a program that reads some numbers (type double) from a file and outputs the average of them:

"Read numbers from the input file in a loop, incrementing count and updating sum with each number read."

How do I read ALL the numbers from the file? Can someone tell me how the loop has to look? Here is my code so far:

Code:
#include <fstream>
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    ifstream fin;
    int count = 0;
    double val;
    double sum = 0.0;

    fin.open("numbers02.txt");

    //HERE HAS TO COME THE LOOP

    fin.close();

    if (count > 0)
    {
        cout << "Average of the " << count << " numbers in the file is "
             << (sum/count) << "\n";
    }
    else
    {
        cout << "No numbers were found";
    }
}

I'm new with files ... that's why I have problems
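The loop the assignment describes — read in a loop, incrementing count and updating sum with each number — can be sketched like this. The helper name is mine, not part of the assignment:

```cpp
#include <iostream>
#include <sstream>

// Reads doubles from any input stream until EOF (or the first non-numeric
// token), accumulating their sum and counting how many were read.
double sumNumbers(std::istream& in, int& count)
{
    double val = 0.0;
    double sum = 0.0;
    count = 0;
    while (in >> val)   // extraction fails at end of file, ending the loop
    {
        sum += val;
        ++count;
    }
    return sum;
}
```

In the program above, the commented spot would become `double sum = sumNumbers(fin, count);`, or equivalently the `while (fin >> val) { sum += val; count++; }` loop could be written inline between open() and close().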
https://cboard.cprogramming.com/cplusplus-programming/85004-help-input-output-files.html
Converting an existing webservice to one using the JSR109 deployment model

By manveen on Mar 27, 2007

This is my first blog, and in this, I am going to talk about how to develop a Web service and a client using the JSR 109 programming model.

Let's consider a scenario. You have an existing web-based, WSIT-based web service (secure or non-secure) running on GlassFish. Now you want to see if you can convert what you have to the (servlet-based) JSR 109 programming model. You obviously want to do this with minimal effort (taking the path of least resistance). You wish there were simple steps that could help you do this. Well, you're in luck! Voila! Here they are:

Step 1. Update your existing WebserviceImpl class (POJO) in the @WebService annotation section. The default SOAP binding is SOAP 1.1, so you might want to add an annotation (@javax.xml.ws.BindingType) to change the binding to SOAP 1.2 HTTP. Here's what the final result will look like:

@javax.jws.WebService(endpointInterface="simple.server.IPingService",
    targetNamespace="",
    portName="A_IPingService",
    serviceName="PingService11",
    wsdlLocation="WEB-INF/wsdl/PingService.wsdl")
@javax.xml.ws.BindingType(value=javax.xml.ws.soap.SOAPBinding.SOAP12HTTP_BINDING)
public class PingImpl implements IPingService {
    ...
}

Step 2. You can safely remove sun-jaxws.xml from your .war package. If you want to use the defaults, you can remove web.xml as well. You need to package your war to be compliant with what JSR 109 expects from you, which is: the wsdl and xsd's go under WEB-INF/wsdl, and all the classes go under WEB-INF/classes. Here's a sample ant target that shows what I'm talking about.
Target to bundle the war:

<target name="create-war-jsr109">
    <property name="war.file" value="${build.war.home}/${wsdlcontext.name}.war"/>
    <delete file="${war.file}"/>
    <mkdir dir="${build.war.home}/temp"/>
    <mkdir dir="${build.war.home}/temp/WEB-INF"/>
    <mkdir dir="${build.war.home}/temp/WEB-INF/classes"/>
    <mkdir dir="${build.war.home}/temp/WEB-INF/wsdl"/>
    <copy todir="${build.war.home}/temp/WEB-INF/classes">
        <fileset dir="${build.classes.home}">
            <include name="**/*.class"/>
        </fileset>
    </copy>
    <copy todir="${build.war.home}/temp/WEB-INF/wsdl">
        <fileset dir="${current.dir}/../../etc">
            <include name="*.wsdl"/>
            <include name="*.xsd"/>
        </fileset>
    </copy>
    <echo message="Creating war file ${war.file}"/>
    <jar jarfile="${war.file}" basedir="${build.war.home}/temp" update="true" includes="**/*">
    </jar>
    <echo message="created war file ${war.file} at ${build.war.home}/temp"/>
</target>

Step 3. If you're not overriding the defaults with web.xml, then your service is deployed at /${yourServiceName in your wsdl}. Check where your service is actually deployed. Then change all your client wsdl and schema bindings to make sure you use the right ServiceName in all relevant places.

You're all set! Was that easy? Is there a way you can make this even easier? Share your thoughts as comments to this blog. Thanks for reading.
https://blogs.oracle.com/manveen/tags/jsr109
KCompletionBox

#include <KCompletionBox>

Detailed Description

A helper widget for "completion widgets" (KLineEdit, KComboBox).

A little utility class for "completion widgets", like KLineEdit or KComboBox. KCompletionBox is a listbox, displayed as a rectangle without any window decoration, usually directly under the lineedit or combobox. It is filled with all possible matches for a completion, so the user can select the one he wants. It is used when KCompletion::CompletionMode == CompletionPopup or CompletionPopupAuto.

Definition at line 36 of file kcompletionbox.h.

Constructor & Destructor Documentation

Constructs a KCompletionBox. The parent widget is used to give the focus back when pressing the up-button on the very first item.
Definition at line 40 of file kcompletionbox.cpp.

Destroys the box.
Definition at line 82 of file kcompletionbox.cpp.

Member Function Documentation

Emitted when an item is selected; text is the text of the selected item.
- Deprecated: since 5.81, use the KCompletionBox::textActivated(const QString &) signal instead

- Returns: true if selecting an item results in the emission of the selected() signal.

This calculates the size of the dropdown and the relative position of the top left corner with respect to the parent widget. This matches the geometry and position normally used by K/QComboBox when used with one.
Definition at line 370 of file kcompletionbox.cpp.

- Returns: the text set via setCancelledText() or QString().

Moves the selection one line down, or selects the first item if nothing is selected yet.
Definition at line 393 of file kcompletionbox.cpp.

Moves the selection down to the last item.
Definition at line 436 of file kcompletionbox.cpp.

Reimplemented from QListWidget to get events from the viewport (to hide this widget on mouse-click, Escape-presses, etc.). Reimplemented from QAbstractItemView.
Definition at line 115 of file kcompletionbox.cpp.
The preferred global coordinate at which the completion box's top left corner should be positioned.
Definition at line 329 of file kcompletionbox.cpp.

Moves the selection up to the first item.
Definition at line 431 of file kcompletionbox.cpp.

Inserts items into the box. Does not clear the items before. index determines at which position items will be inserted (defaults to appending them at the end).
Definition at line 490 of file kcompletionbox.cpp.

- Returns: true if this widget is handling Tab-key events to traverse the items in the dropdown list, otherwise false. Default is true.
- See also: setTabHandling

Returns a list of all items currently in the box.
Definition at line 88 of file kcompletionbox.cpp.

Moves the selection one page down.
Definition at line 421 of file kcompletionbox.cpp.

Moves the selection one page up.
Definition at line 426 of file kcompletionbox.cpp.

Adjusts the size of the box to fit the width of the parent given in the constructor and pops it up at the most appropriate place, relative to the parent. Depending on the screen size and the position of the parent, this may be a different place; however, the default is to pop it up at the lower left corner of the parent. Make sure to hide() the box when appropriate.
Definition at line 273 of file kcompletionbox.cpp.

This properly resizes and repositions the listbox.
- Since: 5.0
Definition at line 291 of file kcompletionbox.cpp.

Sets whether or not the selected signal should be emitted when an item is selected. By default the selected() signal is emitted.
Definition at line 562 of file kcompletionbox.cpp.

Sets the text to be emitted if the user chooses not to pick from the available matches. If the cancelled text is not set through this function, the userCancelled signal will not be emitted.
- See also: userCancelled( const QString& )
Definition at line 453 of file kcompletionbox.cpp.

Clears the box and inserts items.
Definition at line 499 of file kcompletionbox.cpp.

Makes this widget (when visible) capture Tab-key events to traverse the items in the dropdown list (Tab goes down, Shift+Tab goes up). On by default, but should be turned off when used in combination with KUrlCompletion. When off, KLineEdit handles Tab itself, making it select the current item from the completion box, which is particularly useful when using KUrlCompletion.
- See also: isTabHandling
Definition at line 441 of file kcompletionbox.cpp.

Reimplemented for internal reasons; API is unaffected. Call it only if you really need it (i.e. the widget was hidden before) to have better performance.
Definition at line 338 of file kcompletionbox.cpp.

- Deprecated: since 5.0, use resizeAndReposition instead.
Definition at line 216 of file kcompletionbox.h.

Called when an item is activated. Emits KCompletionBox::textActivated(const QString &) with the item text.
- Note: for releases <= 5.81, this slot emitted KCompletionBox::activated(const QString &) with the item text.
Definition at line 101 of file kcompletionbox.cpp.

Emitted when an item is selected; text is the text of the selected item.
- Since: 5.81

Moves the selection one line up, or selects the first item if nothing is selected yet.
Definition at line 407 of file kcompletionbox.cpp.

Emitted whenever the user chooses to ignore the available selections and closes this box.
https://api.kde.org/frameworks/kcompletion/html/classKCompletionBox.html
Next article I’ll post an update of our wave table oscillator, but first I’ll take the opportunity to discuss how I write code these days. Maybe it will help make sense of some of the choices in the code I post going forward.

I tend to build all DSP units as inlines in header files

True story: Recently, I moved an audio plug-in project I was developing on the Mac in Xcode, to Windows and Visual Studio. I was shocked to see that my source files had disappeared! There was only the main implementation cpp file (not counting the plugin framework), and my header files. All files were backed up, of course, but it was still unsettling—what could have happened? Then it sank in—I’d written most of the good stuff in header files, so that outside of the plug-in framework, there was indeed only one cpp file—leveraging 28 header files.

The main reason my DSP functions reside in header files is that I make my basic functions inline-able for speed. In a perfectly orderly world, that still might be a header file for the smallest and most critical functions, and a companion C++ source file (.cpp) for the rest. But it’s faster to code and make changes to a single file instead of bouncing between two. And I need only include the header where I use it, instead of also pulling in and marking companion cpp files for compilation.

Further, I write “atomic” DSP components that handle basic functions, and build more complex components from these atomic functions. For instance, I have a delay line function, from which I make an allpass delay. Writing a reverb function can be very short and clear, combining these basic functions with other filter functions and feedback. And feedback is easy because all components process a sample at a time instead of blocks.

Examples of my library files: OnePole.h, Biquad.h, StateVar.h, DelayLine.h, FIR.h, Noise.h, Gain.h, ADSR.h, WaveTableOsc.h.

Note that “inline” is a request to the compiler. But compilers are pretty good about honoring it.
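As an illustration of the style described above — not the author's actual library code; the class details and coefficient choices here are my own — an "atomic", header-only, single-sample unit might look like:

```cpp
#include <cmath>

namespace ESP {   // the article's "EarLevel Signal Processing" namespace idea

// A one-pole lowpass smoother: one sample in, one sample out, fully inline.
template <typename T>
class OnePole {
public:
    explicit OnePole(double fc = 0.1) { setFc(fc); }

    // fc is the normalized cutoff frequency (cycles per sample, 0..0.5)
    void setFc(double fc) {
        b1 = std::exp(-2.0 * 3.14159265358979323846 * fc);
        a0 = 1.0 - b1;
    }

    // Process a single sample; callers supply their own buffer loop.
    T process(T in) {
        z1 = static_cast<T>(in * a0 + z1 * b1);
        return z1;
    }

private:
    double a0 = 1.0;   // coefficients kept as double even when T is float
    double b1 = 0.0;
    T z1 = 0;          // one sample of state
};

} // namespace ESP
```

A buffer loop is then just `for (i = 0; i < n; ++i) out[i] = lp.process(in[i]);`, and because the unit is sample-at-a-time, it drops into feedback networks with no special handling.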
Remember, inline matters most for small functions, where function call overhead is a bigger consideration. And small functions are easiest to inline, so there’s little reason for a compiler to not comply. If you’re concerned, just list and examine the preprocessor output of a source file to see it.

By the way, the usual argument against inlines—“they lead to bloated code”—doesn’t apply much in the DSP context. These are not large functions used many places in your code. They are built for efficiency. The process routines are localized to your audio processing function, and the setting routines mostly in your plug-in’s parameter handling code.

My DSP units are designed for individual samples, not blocks of samples

Dedicated DSP chips usually process audio a sample at a time. But DSP run as a process on a host computer must handle audio a buffer at a time, to minimize the overhead of context switching. So, if you look at open source DSP libraries, you’ll see that many are written to operate on a buffer of samples.

I don’t do that—my inline functions process a single sample at a time. Of course, you can easily wrap that in a for loop, perhaps partially unrolled to minimize loop overhead. Then the next process acts on the entire buffer, then the next. Or you can string them together, one after the other, to complete your entire algorithm one sample at a time, with an outer loop to iterate through the buffer. The former might work better due to the caching advantages, at the expense of more loop overhead. But it’s easier to make this choice with single-sample processes than for a library that’s entirely optimized for buffer processing.

I usually write my DSP units as templates

Mainly, I template them to handle float or double. I use double when developing, but have the option of float available. Filters are a case in which I’ll always use double. For a wavetable oscillator, I want double parameters but float wavetables.
A delay line element might be float or double depending on the need. I’d rather build the choice into the DSP unit than run into a different need and have to take time to rewrite the DSP unit or make a new version of it. I tend to avoid posting templated stuff on my website, because it can be a distraction from what I’m trying to show.

No virtual functions

I don’t want a vtable. I’m not going to inherit from these basic DSP functions anyway; they are built to do one thing efficiently.

Minor detail: I wrap each DSP header file in a namespace. (I use namespace ESP—which stands for “EarLevel Signal Processing”.) Then I can be lazy with my class names without concern of one day having a namespace collision issue (my “Biquad” versus another “Biquad” built into the plug-in library, for instance).

Hi Ni! Just happened to catch your site right after you made a new post…that doesn’t happen often. 🙂 What? You aren’t writing everything in LISP? 🙂 Have a great week!

Hi Eric! At least someone reads these things… Well, LISP is not an easy choice for real-time DSP, as you probably know, and even tougher for wedging into the constraints of things like audio processing plug-ins. I’m fully thankful to no longer be constrained to 56k assembly language, so C++ isn’t so bad 😉 Plus, I’m also pretty happy for C++11 and later improvements, and enjoying it more these days…

I see the advantages of doing things this way. I’ve been working on my own DSP library and one of my goals was to make the components modular, so they could in theory be linked together at run time like a software modular. I haven’t figured out a way to do this without vtables though. Any ideas? And yes, someone is definitely reading your articles. Thanks for taking the time to write them and share your knowledge!

I haven’t looked at run-time routing of DSP modules—perhaps a look at the VCV Rack source code for ideas?
I understand you inline for 2 reasons: (1) no function call overhead; (2) you don’t need to think about which cpp files to add to your projects, as the primitives consist of inline functions only. No duplicate-symbol complaints from your linker when using inline.

I fail to get it… (1) This benefit is thrown completely out of the window with sample-based processing, a technique comparable with driving to the shop 10 times when you want to fill your fridge with 10 items. (2) Just throw all your cpp files into a library that you link in. The linker will happily throw any unused code out of the executable. Yes, your function prototypes are redundant, but they are a handy place to fully comment the usage of the API. Function prototypes (or classes) are a nice TOC/summary in themselves. But I guess… probably I misunderstand the full context or goal/target of your code…

Don’t like your analogy—if my atomic action is equivalent to placing a can of tuna into your cart, it does not imply driving to the store each time you want to do that…

I agree the analogy is over-the-top, I should have used more nuance, I apologize. In my embedded DSP world, adding loops at a higher level than where the action is has the effect of frequent pipeline flushing and memory IO stalling, doing away with the benefits of DSP (multiple MACs per cycle). Driving to the store would indeed be a lot worse even.

PS about building upon primitives: that I fully endorse. It makes your code easy to maintain (limited areas of change) and smarter (abstraction is powerful), and enhances re-use and portability.
On second thought… when you rely on optimizations done mostly by the compiler itself (as opposed to hand-optimizing), inlining can make sense… The compiler should be smart enough to replace calls to inlines by their code and then use optimization techniques like loop unrolling, vector processing (SIMD operations), re-ordering, pipeline fill enhancing, branch prediction, IO serializing, etc., on the higher level code. Thinking of it even more… this could be quite powerful when using multiple layers (inline functions calling inlines); the compiler has several degrees of freedom to optimize depending on the specific use of or order of primitives.

I used to code with low-level audio APIs on the Windows platform in the past, then VST plugins using JUCE, but recently I’m getting more work using low-cost microcontrollers in trying to develop cheap guitar effect pedals. Now I’m moving my code from an ARM Cortex M3 with 32-bit fixed-point math in C to single-precision floating-point math with an ESP32-A1S in C++, and also from sample processing to block processing mode. I released my code at github.com/hamuro80, just wanted to say thanks for sharing many useful tips, and keep up the good work!

Thanks for sharing! As a reminder of how fast time passes, I have a STM32F4DISCOVERY board from 2012, did some experimenting with it back then…
https://www.earlevel.com/main/2019/04/26/how-i-write-code/
-- File created: 2008-10-10 13:29:26

{-# LANGUAGE CPP #-}

module System.FilePath.Glob.Base
   ( Token(..), Pattern(..)
   , CompOptions(..), MatchOptions(..)
   , compDefault, compPosix, matchDefault, matchPosix
   , decompile
   , compile
   , compileWith, tryCompileWith
   , tokenize -- for tests
   , optimize
   , liftP, tokToLower
   ) where

import Control.Arrow (first)
import Control.Monad.Error (ErrorT, runErrorT, throwError)
import Control.Monad.Writer.Strict (Writer, runWriter, tell)
import Control.Exception (assert)
import Data.Char (isDigit, isAlpha, toLower)
import Data.List (find, sortBy)
import Data.Maybe (fromMaybe)
import Data.Monoid (Monoid, mappend, mempty, mconcat)
import System.FilePath ( pathSeparator, extSeparator
                       , isExtSeparator, isPathSeparator
                       )
import System.FilePath.Glob.Utils ( dropLeadingZeroes
                                  , isLeft, fromLeft
                                  , increasingSeq
                                  , addToRange, overlap
                                  )

#if __GLASGOW_HASKELL__
import Text.Read (readPrec, lexP, parens, prec, Lexeme(Ident))
#endif

data Token
   -- primitives
   = Literal !Char
   | ExtSeparator                              --  .
   | PathSeparator                             --  /
   | NonPathSeparator                          --  ?
   | CharRange !Bool [Either Char (Char,Char)] --  []
   | OpenRange (Maybe String) (Maybe String)   --  <>
   | AnyNonPathSeparator                       --  *
   | AnyDirectory                              --  **/

   -- after optimization only
   | LongLiteral !Int String
 deriving (Eq)

-- Note: CharRanges aren't converted, because this is tricky in general.
-- Consider for instance [@-[], which includes the range A-Z. This would need
-- to become [@[a-z]: so essentially we'd need to either:
--
-- 1) Have a list of ranges of uppercase Unicode. Check if our range
--    overlaps with any of them and if it does, take the non-overlapping
--    part and combine it with the toLower of the overlapping part.
--
-- 2) Simply expand the entire range to a list and map toLower over it.
--
-- In either case we'd need to re-optimize the CharRange—we can't assume that
-- if the uppercase characters are consecutive, so are the lowercase.
--
-- 1) might be feasible if someone bothered to get the latest data.
--
-- 2) obviously isn't since you might have 'Right (minBound, maxBound)' in
--    there somewhere.
--
-- The current solution is to just check both the toUpper of the character and
-- the toLower.
tokToLower :: Token -> Token
tokToLower (Literal c)       = Literal (toLower c)
tokToLower (LongLiteral n s) = LongLiteral n (map toLower s)
tokToLower tok               = tok

-- |An abstract data type representing a compiled pattern.
--
-- Note that the 'Eq' instance cannot tell you whether two patterns behave in
-- the same way; only whether they compile to the same 'Pattern'. For instance,
-- @'compile' \"x\"@ and @'compile' \"[x]\"@ may or may not compare equal,
-- though a @'match'@ will behave the exact same way no matter which 'Pattern'
-- is used.
newtype Pattern = Pattern { unPattern :: [Token] } deriving (Eq)

liftP :: ([Token] -> [Token]) -> Pattern -> Pattern
liftP f (Pattern pat) = Pattern (f pat)

instance Show Token where
   show (Literal c)
      | c `elem` "*?[<" || isExtSeparator c = ['[',c,']']
      | otherwise = assert (not $ isPathSeparator c) [c]

   show ExtSeparator        = [ extSeparator]
   show PathSeparator       = [pathSeparator]
   show NonPathSeparator    = "?"
   show AnyNonPathSeparator = "*"
   show AnyDirectory        = "**/"
   show (LongLiteral _ s)   = concatMap (show . Literal) s
   show (OpenRange a b)     =
      '<' : fromMaybe "" a ++ '-' :
            fromMaybe "" b ++ ">"

   -- We have to be careful here with ^ and ! lest [a!b] become [!ab]. So we
   -- just put them at the end.
   --
   -- Also, [^x-] was sorted and should not become [^-x].
   show (CharRange b r) =
      let f = either (:[]) (\(x,y) -> [x,'-',y])

          (caret,exclamation,fs) =
             foldr (\c (ca,ex,ss) ->
                      case c of
                           Left '^' -> ("^",ex,ss)
                           Left '!' -> (ca,"!",ss)
                           _        -> (ca, ex,(f c ++) . ss))
                   ("", "", id) r

          (beg,rest) =
             let s'    = fs []
                 (x,y) = splitAt 1 s'
              in if not b && x == "-"
                    then (y,x)
                    else (s',"")

       in concat [ "["
                 , if b then "" else "^"
                 , beg, caret, exclamation, rest
                 , "]" ]

instance Show Pattern where
   showsPrec d p = showParen (d > 10) $
      showString "compile " . showsPrec (d+1) (decompile p)

instance Read Pattern where
#if __GLASGOW_HASKELL__
   readPrec = parens . prec 10 $ do
      Ident "compile" <- lexP
      fmap compile readPrec
#else
   readsPrec d = readParen (d > 10) $ \r -> do
      ("compile",string) <- lex r
      (xs,rest)          <- readsPrec (d+1) string
      [(compile xs, rest)]
#endif

instance Monoid Pattern where
   mempty = Pattern []
   mappend (Pattern a) (Pattern b) = optimize . Pattern $ (a ++ b)
   mconcat = optimize . Pattern . concatMap unPattern

-- |Options which can be passed to the 'tryCompileWith' or 'compileWith'
-- functions: with these you can selectively toggle certain features at compile
-- time.
--
-- Note that some of these options depend on each other: classes can never
-- occur if ranges aren't allowed, for instance.
--
-- We could presumably put locale information in here, too.
data CompOptions = CompOptions
   { characterClasses   :: Bool -- ^Allow character classes, @[[:...:]]@.
   , characterRanges    :: Bool -- ^Allow character ranges, @[...]@.
   , numberRanges       :: Bool -- ^Allow open ranges, @\<...>@.
   , wildcards          :: Bool -- ^Allow wildcards, @*@ and @?@.
   , recursiveWildcards :: Bool -- ^Allow recursive wildcards, @**/@.

   , pathSepInRanges :: Bool
     -- ^Allow path separators in character ranges.
     --
     -- If true, @a[/]b@ never matches anything (since character ranges can't
     -- match path separators); if false and 'errorRecovery' is enabled,
     -- @a[/]b@ matches itself, i.e. a file named @]b@ in the subdirectory
     -- @a[@.

   , errorRecovery :: Bool
     -- ^If the input is invalid, recover by turning any invalid part into
     -- literals. For instance, with 'characterRanges' enabled, @[abc@ is an
     -- error by default (unclosed character range); with 'errorRecovery', the
     -- @[@ is turned into a literal match, as though 'characterRanges' were
     -- disabled.
   } deriving (Show,Read,Eq)

-- |The default set of compilation options: closest to the behaviour of the
-- @zsh@ shell, with 'errorRecovery' enabled.
--
-- All options are enabled.
compDefault :: CompOptions
compDefault = CompOptions
   { characterClasses   = True
   , characterRanges    = True
   , numberRanges       = True
   , wildcards          = True
   , recursiveWildcards = True
   , pathSepInRanges    = True
   , errorRecovery      = True
   }

-- |Options for POSIX-compliance, as described in @man 7 glob@.
--
-- 'numberRanges', 'recursiveWildcards', and 'pathSepInRanges' are disabled.
compPosix :: CompOptions
compPosix = CompOptions
   { characterClasses   = True
   , characterRanges    = True
   , numberRanges       = False
   , wildcards          = True
   , recursiveWildcards = False
   , pathSepInRanges    = False
   , errorRecovery      = True
   }

-- |Options which can be passed to the 'matchWith' or 'globDirWith' functions:
-- with these you can selectively toggle certain features at matching time.
data MatchOptions = MatchOptions
   { matchDotsImplicitly :: Bool
     -- ^Allow @*@, @?@, and @**/@ to match @.@ at the beginning of paths.
   , ignoreCase :: Bool
     -- ^Case-independent matching.
   , ignoreDotSlash :: Bool
     -- ^Treat @./@ as a no-op in both paths and patterns.
     --
     -- (Of course e.g. @../@ means something different and will not be
     -- ignored.)
   }

-- |The default set of execution options: closest to the behaviour of the @zsh@
-- shell.
--
-- Currently identical to 'matchPosix'.
matchDefault :: MatchOptions
matchDefault = matchPosix

-- |Options for POSIX-compliance, as described in @man 7 glob@.
--
-- 'ignoreDotSlash' is enabled, the rest are disabled.
matchPosix :: MatchOptions
matchPosix = MatchOptions
   { matchDotsImplicitly = False
   , ignoreCase          = False
   , ignoreDotSlash      = True
   }

-- |Decompiles a 'Pattern' object into its textual representation: essentially
-- the inverse of 'compile'.
--
-- Note, however, that due to internal optimization, @decompile . compile@ is
-- not the identity function. Instead, @compile . decompile@ is.
--
-- Be careful with 'CompOptions': 'decompile' always produces a 'String' which
-- can be passed to 'compile' to get back the same 'Pattern'. @compileWith
-- options . decompile@ is /not/ the identity function unless @options@ is
-- 'compDefault'.
decompile :: Pattern -> String
decompile = concatMap show . unPattern

------------------------------------------
-- COMPILATION
------------------------------------------

-- |Compiles a glob pattern from its textual representation into a 'Pattern'
-- object.
--
--. Never matches path separators: @[\/]@ matches
--          nothing at all. Named character classes can also be matched:
--          @[:x:]@ within @[]@ specifies the class named @x@, which matches
--          certain predefined characters. See below for a full list.
--
-- [@[^..\]@ or @[!..\]@] Like @[..]@, but matches any character /not/ listed.
--                        Note that @[^-x]@ is not the inverse of @[-x]@, but
--                        the range @[^-x]@.
--
-- [@\<m-n>@] Matches any integer in the range m to n, inclusive. The range may
--            be open-ended by leaving out either number: @\"\<->\"@, for
--            instance, matches any integer.
--
-- [@**/@] Matches any number of characters, including path separators,
--         excluding the empty string.
--
-- Supported character classes:
--
-- [@[:alnum:\]@]  Equivalent to @\"0-9A-Za-z\"@.
--
-- [@[:alpha:\]@]  Equivalent to @\"A-Za-z\"@.
--
-- [@[:blank:\]@]  Equivalent to @\"\\t \"@.
--
-- [@[:cntrl:\]@]  Equivalent to @\"\\0-\\x1f\\x7f\"@.
--
-- [@[:digit:\]@]  Equivalent to @\"0-9\"@.
--
-- [@[:graph:\]@]  Equivalent to @\"!-~\"@.
--
-- [@[:lower:\]@]  Equivalent to @\"a-z\"@.
--
-- [@[:print:\]@]  Equivalent to @\" -~\"@.
--
-- [@[:punct:\]@]  Equivalent to @\"!-\/:-\@[-`{-~\"@.
--
-- [@[:space:\]@]  Equivalent to @\"\\t-\\r \"@.
--
-- [@[:upper:\]@]  Equivalent to @\"A-Z\"@.
--
-- [@[:xdigit:\]@] Equivalent to @\"0-9A-Fa-f\"@.
--
-- @\'\\\'@.
--
-- Error recovery will be performed: erroneous operators will not be considered
-- operators, but matched as literal strings. Such operators include:
--
-- * An empty @[]@ or @[^]@ or @[!]@
--
-- * A @[@ or @\<@ without a matching @]@ or @>@
--
-- * A malformed @\<>@: e.g. nonnumeric characters or no hyphen
--
-- So, e.g. @[]@ will match the string @\"[]\"@.
compile :: String -> Pattern
compile = compileWith compDefault

-- |Like 'compile', but recognizes operators according to the given
-- 'CompOptions' instead of the defaults.
--
-- If an error occurs and 'errorRecovery' is disabled, 'error' will be called.
compileWith :: CompOptions -> String -> Pattern
compileWith opts = either error id . tryCompileWith opts

-- |A safe version of 'compileWith'.
--
-- If an error occurs and 'errorRecovery' is disabled, the error message will
-- be returned in a 'Left'.
tryCompileWith :: CompOptions -> String -> Either String Pattern
tryCompileWith opts = fmap optimize . tokenize opts

tokenize :: CompOptions -> String -> Either String Pattern
tokenize opts = fmap Pattern . sequence . go
 where
   err _ c cs | errorRecovery opts = Right (Literal c) : go cs
   err s _ _                       = [Left s]

   go :: String -> [Either String Token]
   go [] = []
   go ('?':cs) | wcs = Right NonPathSeparator : go cs
   go ('*':cs) | wcs =
      case cs of
           '*':p:xs | rwcs && isPathSeparator p
              -> Right AnyDirectory : go xs
           _  -> Right AnyNonPathSeparator : go cs
   go ('[':cs) | crs =
      let (range,rest) = charRange opts cs
       in case range of
               Left s -> err s '[' cs
               r      -> r : go rest
   go ('<':cs) | ors =
      let (range, rest) = break (=='>') cs
       in if null rest
             then err "compile :: unclosed <> in pattern" '<' cs
             else case openRange range of
                       Left s -> err s '<' cs
                       r      -> r : go (tail rest)
   go (c:cs)
      | isPathSeparator c = Right PathSeparator : go cs
      | isExtSeparator  c = Right ExtSeparator  : go cs
      | otherwise         = Right (Literal c)   : go cs

   wcs  = wildcards opts
   rwcs = recursiveWildcards opts
   crs  = characterRanges opts
   ors  = numberRanges opts

-- type CharRange = [Either Char (Char,Char)]

charRange :: CompOptions -> String -> (Either String Token, String)
charRange opts zs =
   case zs of
        y:ys | y `elem` "^!" ->
           case ys of
                -- [!-#] is not the inverse of [-#], it is the range ! through #
                '-':']':xs -> (Right (CharRange False [Left '-']), xs)
                '-'    :_  -> first (fmap (CharRange True )) (start zs)
                xs         -> first (fmap (CharRange False)) (start xs)
        _ -> first (fmap (CharRange True )) (start zs)
 where
   start :: String -> (Either String CharRange, String)
   start (']':xs) = run $ char ']' xs
   start ('-':xs) = run $ char '-' xs
   start xs       = run $ go xs

   run :: ErrorT String (Writer CharRange) String
       -> (Either String CharRange, String)
   run m =
      case runWriter.runErrorT $ m of
           (Left err, _)    -> (Left err, [])
           (Right rest, cs) -> (Right cs, rest)

   go :: String -> ErrorT String (Writer CharRange) String
   go ('[':':':xs) | characterClasses opts = readClass xs
   go (    ']':xs) = return xs
   go (      c:xs) =
      if not (pathSepInRanges opts) && isPathSeparator c
         then throwError "compile :: path separator within []"
         else char c xs
   go []           = throwError "compile :: unclosed [] in pattern"

   char :: Char -> String -> ErrorT String (Writer CharRange) String
   char c ('-':x:xs) =
      if x == ']'
         then tell [Left c, Left '-'] >> return xs
         else tell [Right (c,x)]      >> go xs
   char c xs = tell [Left c] >> go xs

   readClass :: String -> ErrorT String (Writer CharRange) String
   readClass xs =
      let (name,end) = span isAlpha xs
       in case end of
               ':':']':rest -> charClass name >> go rest
               _            -> tell [Left '[',Left ':'] >> go xs

   charClass :: String -> ErrorT String (Writer CharRange) ()
   charClass name =
      -- The POSIX classes
      --
      -- TODO: this is ASCII-only, not sure how this should be extended
      --       Unicode, or with a locale as input, or something else?
      case name of
           "alnum"  -> tell [digit,upper,lower]
           "alpha"  -> tell [upper,lower]
           "blank"  -> tell blanks
           "cntrl"  -> tell [Right ('\0','\x1f'), Left '\x7f']
           "digit"  -> tell [digit]
           "graph"  -> tell [Right ('!','~')]
           "lower"  -> tell [lower]
           "print"  -> tell [Right (' ','~')]
           "punct"  -> tell punct
           "space"  -> tell spaces
           "upper"  -> tell [upper]
           "xdigit" -> tell [digit, Right ('A','F'), Right ('a','f')]
           _        ->
              throwError ("compile :: unknown character class '" ++name++ "'")

   digit  = Right ('0','9')
   upper  = Right ('A','Z')
   lower  = Right ('a','z')
   punct  = map Right [('!','/'), (':','@'), ('[','`'), ('{','~')]
   blanks = [Left '\t', Left ' ']
   spaces = [Right ('\t','\r'), Left ' ']

------------------------------------------
-- OPTIMIZATION
------------------------------------------

optimize :: Pattern -> Pattern
optimize = liftP (fin . go)
 where
   fin [] = []

   -- Literals to LongLiteral
   --
   -- Has to be done here: we can't backtrack in go, but some cases might
   -- result in consecutive Literals being generated.
   -- E.g. "a[b]".
   fin (x:y:xs) | isLiteral x && isLiteral y =
      let (ls,rest) = span isLiteral xs
       in fin $ LongLiteral (length ls + 2)
                            (foldr (\(Literal a) -> (a:)) [] (x:y:ls))
              : rest

   -- concatenate LongLiterals
   -- Has to be done here because LongLiterals are generated above.
   --
   -- So one could say that we have one pass (go) which flattens everything as
   -- much as it can and one pass (fin) which concatenates what it can.
   fin (LongLiteral l1 s1 : LongLiteral l2 s2 : xs) =
      fin $ LongLiteral (l1+l2) (s1++s2) : xs

   fin (LongLiteral l s : Literal c : xs) =
      fin $ LongLiteral (l+1) (s++[c]) : xs

   fin (LongLiteral 1 s : xs) = Literal (head s) : fin xs

   fin (Literal c : LongLiteral l s : xs) =
      fin $ LongLiteral (l+1) (c:s) : xs

   fin (x:xs) = x : fin xs

   go [] = []

   go (x@(CharRange _ _) : xs) =
      case optimizeCharRange x of
           x'@(CharRange _ _) -> x' : go xs
           x'                 -> go (x':xs)

   -- <a-a> -> a
   go (OpenRange (Just a) (Just b):xs)
      | a == b = LongLiteral (length a) a : go xs

   -- <a-b> -> [a-b]
   -- a and b are guaranteed non-null
   go (OpenRange (Just [a]) (Just [b]):xs)
      | b > a = go $ CharRange True [Right (a,b)] : xs

   go (x:xs) =
      case find ($ x) compressors of
           Just c  ->
              let (compressed,ys) = span c xs
               in if null compressed
                     then x : go ys
                     else go (x : ys)
           Nothing -> x : go xs

   compressors = [isStar, isStarSlash, isAnyNumber]

   isLiteral (Literal _) = True
   isLiteral _           = False

   isStar AnyNonPathSeparator = True
   isStar _                   = False

   isStarSlash AnyDirectory = True
   isStarSlash _            = False

   isAnyNumber (OpenRange Nothing Nothing) = True
   isAnyNumber _                           = False

optimizeCharRange :: Token -> Token
optimizeCharRange (CharRange b_ rs) = fin b_ . go . sortCharRange $ rs
 where
   -- [/] is interesting, it actually matches nothing at all
   -- [.] can be Literalized though, just don't make it into an ExtSeparator so
   --     that it doesn't match a leading dot
   fin True [Left  c] | not (isPathSeparator c)  = Literal c
   fin True [Right r] | r == (minBound,maxBound) = NonPathSeparator
   fin b x = CharRange b x

   go [] = []
   go (x@(Left c) : xs) =
      case xs of
           [] -> [x]
           y@(Left d) : ys
              -- [aaaaa] -> [a]
              | c == d      -> go$ Left c : ys
              | d == succ c ->
                 let (ls,rest)        = span isLeft xs -- start from y
                     (catable,others) = increasingSeq (map fromLeft ls)
                     range            = (c, head catable)
                  in -- three (or more) Lefts make a Right
                     if null catable || null (tail catable)
                        then x : y : go ys
                        -- [abcd] -> [a-d]
                        else go$ Right range : map Left others ++ rest
              | otherwise -> x : go xs
           Right r : ys ->
              case addToRange r c of
                   -- [da-c] -> [a-d]
                   Just r' -> go$ Right r' : ys
                   Nothing -> x : go xs
   go (x@(Right r) : xs) =
      case xs of
           [] -> [x]
           Left c : ys ->
              case addToRange r c of
                   -- [a-cd] -> [a-d]
                   Just r' -> go$ Right r' : ys
                   Nothing -> x : go xs
           Right r' : ys ->
              case overlap r r' of
                   -- [a-cb-d] -> [a-d]
                   Just o  -> go$ Right o : ys
                   Nothing -> x : go xs

optimizeCharRange _ = error "Glob.optimizeCharRange :: internal error"

sortCharRange :: [Either Char (Char,Char)] -> [Either Char (Char,Char)]
sortCharRange = sortBy cmp
 where
   cmp (Left   a)    (Left   b)    = compare a b
   cmp (Left   a)    (Right (b,_)) = compare a b
   cmp (Right (a,_)) (Left   b)    = compare a b
   cmp (Right (a,_)) (Right (b,_)) = compare a b
http://hackage.haskell.org/package/Glob-0.4/docs/src/System-FilePath-Glob-Base.html
Walkthrough: Use SharePoint Full-Trust Workflow Activities with Business Connectivity Services

This walkthrough topic shows how to create a full-trust activity for a workflow that uses Microsoft Business Connectivity Services (BCS). It addresses a basic expense approval scenario that uses a workflow to get the approver and safe limit values from the external system by using Business Connectivity Services. These values correspond to the given employee, specified by an employee ID, and expense type.

Last modified: July 16, 2010

Applies to: SharePoint Server 2010

This walkthrough topic is based on the BCS Full-Trust Workflow Activity sample, which is part of the Microsoft SharePoint 2010 Software Development Kit (SDK). For more information about how to get this sample, see Code Sample: BCS Full-Trust Workflow Activity Sample.

Full-trust activities are written in Microsoft Visual Studio and require the following:

- Installation in the global assembly cache.
- An .actions file placed in the workflow template folder in the file system of each server in the farm.
- An AuthorizedTypes entry included in the web.config file on each server to allow the assembly to execute.

These have to be deployed on every farm server, and therefore require much higher permissions and scrutiny.

Use full-trust workflow activities for Business Connectivity Services in the following cases:

- You have permission to deploy full-trust activities.
- You want to do any of the following:
  - Read, create, and update data in an external system from a workflow, with or without having an external list already provisioned.
  - Make as few calls to the external system as possible.
  - Use complex logic that requires looping over multiple items or associations to other external content types without using the Sandboxed Code Service.

When using full-trust workflow activities, be aware of the following:

- Full-trust activities, as their name implies, run in full trust. This means that you should know exactly what the activity does and trust the author of the activity as though that author were an administrator before you deploy the activity.
- If you are using this activity in Microsoft SharePoint Designer, you must return a fixed number of items, because SharePoint Designer does not support looping.

The activity you will build in this walkthrough returns the approver and a safe limit for a given type of expense and employee ID. Information about the external content type is already built into the activity, and the activity will work only on that external content type.

Creating the Workflow Activity

This walkthrough accomplishes the same scenario as the one in How to: Creating Sandboxed Workflow Actions. However, here you are not working with an external list; you achieve the goals of the scenario solely by using the Business Data Connectivity (BDC) service APIs. This is slightly more complex than just reading an item in a list, because the BDC APIs offer much more flexibility than a flat list.

Before looking at the code, you should understand the properties that are exposed by this activity. Although all of the activity's properties are exposed so that they can be edited in SharePoint Designer, some defaults are built in so that you do not have to enter the information for those that should be consistent across deployments.

To understand what this sample does, look at the activity code in BDCReadActivity.cs. Notice that all of the code is contained in the Execute method, which is what the workflow host calls to execute the activity. In this call, the first BDC action is to get the Metadata Store that contains the SafeLimits external content type, as shown in the following code. The code might look simple, but there are a few things to be aware of. First, the catalog is farm-scoped, not site-scoped. This means that this activity will work on any site that is using the same BDC service. Second, the SharePoint APIs used to get the Metadata Store will return null if the item is not found. This is different from the BDC APIs, which throw exceptions if items are not found.

Now that we have the Metadata Store, you have to find the external content type and return the correct data, as shown in the following method example. Notice in the code sample that GetEntity is in a try/catch block, so that if the external content type was deleted, or was not in this catalog, a MetadataObjectNotFoundException exception would be thrown. To handle this exception, you add an entry into the workflow history list to inform you of the exception, and then re-throw the exception, which allows the workflow to fail.

After you have an external content type, you must find a specific instance of that external content type. To do this, use the FindSpecific method of IEntity. This method takes two parameters. The first parameter is the Identity of the specific instance. In this example, EmployeeID was set as the identifier, so we need to pass the EmployeeID as the Identity. You can see at this point how the identity is specific to the external content type. Another external content type might have a different identifier with a different name and even a different type. The second parameter is the external system instance that we use to find the item. As stated before, because you are using SharePoint Designer, there is only one external system instance, and you use the name that is stored in the code for that. You should also notice that FindSpecific will throw a different exception if the item is not found: ObjectNotFoundException. Again, you want to catch this exception and return a friendly error to the user before re-throwing the exception so the workflow can fail.

After you have the entity instance, the remaining work is simple. You can use the field names in the square brackets to get the values of the instance.

The following are the steps to create a workflow based on the BCS Full-Trust Workflow Activity sample.

To set up the sample

1. Follow the steps in Code Sample: BCS Full-Trust Workflow Activity Sample to download and install the BCS Full-Trust Workflow Activity sample.
2. In SharePoint Designer, create an external content type named SafeLimits, with a namespace BCSBlog, and an external system instance named SafeLimit. Notice that these values are hardcoded in the sample activity code. Also, ensure the external content type has the following seven fields: EmployeeID, EquipmentLimit, EquipmentApprover, MoraleLimit, MoraleApprover, TravelLimit, and TravelApprover.
3. (Optional) To verify the external content type, you can create an external list.
4. Open the solution in Visual Studio 2010.
5. Build the solution. This results in BCSReadActivity.dll, which is deployed to the front-end web server's global assembly cache (%windir%\assembly) so that SharePoint Server can find and load the activity. Before you build the solution, ensure the defaults for EntityName, EntityNamespace, and LobSystemInstanceName are updated to match your setting. These are defined in the GetSafeLimits method.
6. Deploy the solution.
7. Add the assembly to the web.config file on each SharePoint farm server:
   a. Open the web.config file of your SharePoint site on the farm server.
   b. In the bottom section, look for <System.Workflow.ComponentModel.WorkflowCompiler>. Inside this tag, you should find an <authorizedTypes> section and an entry for each set of types.
   c. Add the following line (ensure that the type information is correct, including the public key token).
8. Deploy the BCSReadActivity.ACTIONS file on each SharePoint farm server. On each farm server, copy BCSReadActivity.ACTIONS to the following folder in the SharePoint server install folder: %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\template\1033\workflow.
9. Perform an iisreset command on each farm server.

To create the workflow

1. Create a document library for expense reports and name it ExpenseReports. Add the following columns:
   - EmployeeID
   - ExpenseType as a choice field with the following choices: Morale, Equipment, Travel
   - Approver
   - Limit
2. In SharePoint Designer, in the ExpenseReports document library, click New List Workflow.
3. Name the workflow and provide a description, and then click OK.
4. Insert the Get Safe Limits for Employee action.
5. Leave the first two parameters as they are. These are the output variables from the function for Approver and SafeLimit.
6. For the third parameter, specify the Category. This is already defined on your list, so as before, pick the ExpenseType column from the CurrentItem by using the function (fx) button.
7. For the last parameter, pick the EmployeeID column from the CurrentItem by using the function (fx) button. You should now see a new list workflow as shown in Figure 1.

   Figure 1. New List workflow in Microsoft SharePoint Designer

8. Add another action for writing the output variables from the function for Approver and SafeLimit:
   a. Insert the Update List Item action.
   b. Click the this list link.
   c. Click Add, and then select Approver.
   d. In the Lookup for Single Line of Text dialog box, click the fx button.
   e. For Data Source, select Workflow: Variables and Parameters.
   f. For Field from Source, select Variable: Approver.
   g. Close the dialog box.
   h. Repeat steps c through g to add the variable for SafeLimit.
9. Save the workflow and then publish it.

To run the workflow

1. Populate your external system with data.
2. Create one item in the ExpenseReports document library. Ensure that the employee ID and expense type values that you added exist in the external system.
3. Trigger the workflow. The Approver and SafeLimit fields in the document library entry should be populated by the workflow with values from the external system.
http://msdn.microsoft.com/en-us/library/ff806156(v=office.14)
This guide describes how to use the experimental Keras mixed precision API to speed up your models. Using this API can make operations run faster in the 16-bit dtypes, as modern accelerators have specialized hardware to run 16-bit computations, and 16-bit dtypes can be read from memory faster. NVIDIA GPUs can run operations in float16 faster than in float32, and TPUs can run operations in bfloat16 faster than float32. Therefore, these lower-precision dtypes should be used whenever possible on those devices. However, variables and a few computations should still be in float32 for numeric reasons so that the model trains to the same quality. The Keras mixed precision API allows you to use a mix of either float16 or bfloat16 with float32, to get the performance benefits from float16/bfloat16 and the numeric stability benefits from float32.

Setup

The Keras mixed precision API is available in TensorFlow 2.1.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.mixed_precision import experimental as mixed_precision

Supported hardware

While mixed precision will run on most hardware, it will only speed up models on recent NVIDIA GPUs and Cloud TPUs. Among NVIDIA GPUs, those with compute capability 7.0 or higher will see the greatest benefit; examples include the Titan V and the V100. You can check your GPU type with the following command. The command only exists if the NVIDIA drivers are installed, so the following will raise an error otherwise.

nvidia-smi -L
GPU 0: Tesla V100-SXM2-16GB (UUID: GPU-d6f173e0-686f-951f-617d-90ff2d5bdc9d)

All Cloud TPUs support bfloat16. Even on CPUs and older GPUs, where no speedup is expected, mixed precision APIs can still be used for unit testing, debugging, or just to try out the API.

Setting the dtype policy

To use mixed precision in Keras, you need to create a tf.keras.mixed_precision.experimental.Policy, typically referred to as a dtype policy. Dtype policies specify the dtypes layers will run in.
In this guide, you will construct a policy from the string 'mixed_float16' and set it as the global policy. This will cause subsequently created layers to use mixed precision with a mix of float16 and float32.

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

The policy specifies two important aspects of a layer: the dtype the layer's computations are done in, and the dtype of a layer's variables. Above, you created a mixed_float16 policy (i.e., a mixed_precision.Policy created by passing the string 'mixed_float16' to its constructor). With this policy, layers use float16 computations and float32 variables. Computations are done in float16 for performance, but variables must be kept in float32 for numeric stability. You can directly query these properties of the policy.

print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype)

Compute dtype: float16
Variable dtype: float32

As mentioned before, the mixed_float16 policy will most significantly improve performance on NVIDIA GPUs with a compute capability of at least 7.0. The policy will run on other GPUs and CPUs but may not improve performance. For TPUs, the mixed_bfloat16 policy should be used instead.

Building the model

Next, let's start building a simple model. Very small toy models typically do not benefit from mixed precision, because overhead from the TensorFlow runtime typically dominates the execution time, making any performance improvement on the GPU negligible. Therefore, let's build two large Dense layers with 4096 units each if a GPU is used.
inputs = keras.Input(shape=(784,), name='digits')
if tf.config.list_physical_devices('GPU'):
  print('The model will run with 4096 units on a GPU')
  num_units = 4096
else:
  # Use fewer units on CPUs so the model finishes in a reasonable amount of time
  print('The model will run with 64 units on a CPU')
  num_units = 64
dense1 = layers.Dense(num_units, activation='relu', name='dense_1')
x = dense1(inputs)
dense2 = layers.Dense(num_units, activation='relu', name='dense_2')
x = dense2(x)

The model will run with 4096 units on a GPU

Each layer has a policy and uses the global policy by default. Each of the Dense layers therefore has the mixed_float16 policy because you set the global policy to mixed_float16 previously. This will cause the dense layers to do float16 computations and have float32 variables. They cast their inputs to float16 in order to do float16 computations, which causes their outputs to be float16 as a result. Their variables are float32 and will be cast to float16 when the layers are called, to avoid errors from dtype mismatches.

print('x.dtype: %s' % x.dtype.name)
# 'kernel' is dense1's variable
print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name)

x.dtype: float16
dense1.kernel.dtype: float32

Next, create the output predictions. Normally, you can create the output predictions as follows, but this is not always numerically stable with float16.

# INCORRECT: softmax and model output will be float16, when it should be float32
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)

Outputs dtype: float16

A softmax activation at the end of the model should be float32. Because the dtype policy is mixed_float16, the softmax activation would normally have a float16 compute dtype and output float16 tensors.
This can be fixed by separating the Dense and softmax layers, and by passing dtype='float32' to the softmax layer:

# CORRECT: softmax and model output are float32
x = layers.Dense(10, name='dense_logits')(x)
outputs = layers.Activation('softmax', dtype='float32', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)

Outputs dtype: float32

Passing dtype='float32' to the softmax layer constructor overrides the layer's dtype policy to be the float32 policy, which does computations and keeps variables in float32. Equivalently, we could have instead passed dtype=mixed_precision.Policy('float32'); layers always convert the dtype argument to a policy. Because the Activation layer has no variables, the policy's variable dtype is ignored, but the policy's compute dtype of float32 causes softmax and the model output to be float32.

Adding a float16 softmax in the middle of a model is fine, but a softmax at the end of the model should be in float32. The reason is that if the intermediate tensor flowing from the softmax to the loss is float16 or bfloat16, numeric issues may occur. You can override the dtype of any layer to be float32 by passing dtype='float32' if you think it will not be numerically stable with float16 computations. But typically, this is only necessary on the last layer of the model, as most layers have sufficient precision with mixed_float16 and mixed_bfloat16.

Even if the model does not end in a softmax, the outputs should still be float32. While unnecessary for this specific model, the model outputs can be cast to float32 with the following:

# The linear activation is an identity function. So this simply casts 'outputs'
# to float32. In this particular case, 'outputs' is already float32 so this is a
# no-op.
outputs = layers.Activation('linear', dtype='float32')(outputs)

Next, finish and compile the model, and generate input data.
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=keras.optimizers.RMSprop(),
              metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

Downloading data from
11493376/11490434 [==============================] - 0s 0us/step

This example casts the input data from int8 to float32. We don't cast to float16, since the division by 255 is on the CPU, which runs float16 operations slower than float32 operations. In this case, the performance difference is negligible, but in general you should run input processing math in float32 if it runs on the CPU. The first layer of the model will cast the inputs to float16, as each layer casts floating-point inputs to its compute dtype.

The initial weights of the model are retrieved. This will allow training from scratch again by loading the weights.

initial_weights = model.get_weights()

Training the model with Model.fit

Next, train the model.
history = model.fit(x_train, y_train,
                    batch_size=8192,
                    epochs=5,
                    validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])

Epoch 1/5
6/6 [==============================] - 0s 60ms/step - loss: 4.6326 - accuracy: 0.4004 - val_loss: 0.6983 - val_accuracy: 0.8321
Epoch 2/5
6/6 [==============================] - 0s 28ms/step - loss: 0.6883 - accuracy: 0.7930 - val_loss: 0.3555 - val_accuracy: 0.8888
Epoch 3/5
6/6 [==============================] - 0s 27ms/step - loss: 0.3765 - accuracy: 0.8769 - val_loss: 0.4488 - val_accuracy: 0.8440
Epoch 4/5
6/6 [==============================] - 0s 27ms/step - loss: 0.2783 - accuracy: 0.9153 - val_loss: 0.1869 - val_accuracy: 0.9434
Epoch 5/5
6/6 [==============================] - 0s 27ms/step - loss: 0.3617 - accuracy: 0.8787 - val_loss: 0.3649 - val_accuracy: 0.8794
313/313 - 1s - loss: 0.3815 - accuracy: 0.8707
Test loss: 0.38147488236427307
Test accuracy: 0.8707000017166138

Notice the model prints the time per sample in the logs: for example, "4us/sample". The first epoch may be slower as TensorFlow spends some time optimizing the model, but afterwards the time per sample should stabilize.

If you are running this guide in Colab, you can compare the performance of mixed precision with float32. To do so, change the policy from mixed_float16 to float32 in the "Setting the dtype policy" section, then rerun all the cells up to this point. On GPUs with at least compute capability 7.0, you should see the time per sample significantly increase, indicating mixed precision sped up the model. For example, with a Titan V GPU, the per-sample time increases from 4us to 12us. Make sure to change the policy back to mixed_float16 and rerun the cells before continuing with the guide.

For many real-world models, mixed precision also allows you to double the batch size without running out of memory, as float16 tensors take half the memory.
However, this does not apply to this toy model, as you can likely run the model in any dtype where each batch consists of the entire MNIST dataset of 60,000 images.

If running mixed precision on a TPU, you will not see as much of a performance gain compared to running mixed precision on GPUs. This is because TPUs already do certain ops in bfloat16 under the hood, even with the default dtype policy of float32. TPU hardware does not support float32 for certain ops which are numerically stable in bfloat16, such as matmul. For such ops the TPU backend will silently use bfloat16 internally instead. As a consequence, passing dtype='float32' to layers which use such ops may have no numerical effect; however, it is unlikely running such layers with bfloat16 computations will be harmful.

Loss scaling

Loss scaling is a technique which tf.keras.Model.fit automatically performs with the mixed_float16 policy to avoid numeric underflow. This section describes loss scaling and how to customize its behavior.

Underflow and overflow

The float16 data type has a narrow dynamic range compared to float32. This means values above $65504$ will overflow to infinity and values below $6.0 \times 10^{-8}$ will underflow to zero. float32 and bfloat16 have a much higher dynamic range, so that overflow and underflow are not a problem. For example:

x = tf.constant(256, dtype='float16')
(x ** 2).numpy()  # Overflow
inf

x = tf.constant(1e-5, dtype='float16')
(x ** 2).numpy()  # Underflow
0.0

In practice, overflow with float16 rarely occurs. Additionally, underflow also rarely occurs during the forward pass. However, during the backward pass, gradients can underflow to zero. Loss scaling is a technique to prevent this underflow.

Loss scaling background

The basic concept of loss scaling is simple: simply multiply the loss by some large number, say $1024$. We call this number the loss scale. This will cause the gradients to scale by $1024$ as well, greatly reducing the chance of underflow.
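The underflow problem, and how multiplying by a loss scale avoids it, can be reproduced with plain NumPy. This is an illustrative sketch independent of TensorFlow's actual loss-scaling code:

```python
import numpy as np

# A gradient value too small for float16: it underflows to zero.
grad = 1e-9
assert np.float16(grad) == 0.0  # lost entirely in float16

# Scale the value by the loss scale before casting to float16.
loss_scale = 1024.0
scaled_grad = np.float16(grad * loss_scale)  # ~1e-6 is representable in float16
assert scaled_grad != 0.0

# Divide in float32 to recover an approximation of the true gradient.
recovered = np.float32(scaled_grad) / loss_scale
print(recovered)  # close to 1e-9 again
```

The same mechanism applies to real gradients: the loss is scaled before backpropagation, so every gradient flowing backward stays inside float16's representable range, and the division back happens in float32.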
Once the final gradients are computed, divide them by $1024$ to bring them back to their correct values. The pseudocode for this process is:

loss_scale = 1024
loss = model(inputs)
loss *= loss_scale
# We assume `grads` are float32. We do not want to divide float16 gradients
grads = compute_gradient(loss, model.trainable_variables)
grads /= loss_scale

Choosing a loss scale can be tricky. If the loss scale is too low, gradients may still underflow to zero. If it is too high, the opposite problem occurs: the gradients may overflow to infinity. To solve this, TensorFlow dynamically determines the loss scale so you do not have to choose one manually. If you use tf.keras.Model.fit, loss scaling is done for you so you do not have to do any extra work. This is explained further in the next section.

Choosing the loss scale

Each dtype policy optionally has an associated tf.mixed_precision.experimental.LossScale object, which represents a fixed or dynamic loss scale. By default, the loss scale for the mixed_float16 policy is a tf.mixed_precision.experimental.DynamicLossScale, which dynamically determines the loss scale value. Other policies do not have a loss scale by default, as it is only necessary when float16 is used. You can query the loss scale of the policy:

loss_scale = policy.loss_scale
print('Loss scale: %s' % loss_scale)

Loss scale: DynamicLossScale(current_loss_scale=32768.0, num_good_steps=30, initial_loss_scale=32768.0, increment_period=2000, multiplier=2.0)

The loss scale prints a lot of internal state, but you can ignore it. The most important part is the current_loss_scale part, which shows the loss scale's current value. You can instead use a static loss scale by passing a number when constructing a dtype policy:

new_policy = mixed_precision.Policy('mixed_float16', loss_scale=1024)
print(new_policy.loss_scale)

FixedLossScale(1024.0)

The dtype policy constructor always converts the loss scale to a LossScale object.
In this case, it's converted to a tf.mixed_precision.experimental.FixedLossScale, the only other LossScale subclass besides DynamicLossScale.

Models, like layers, each have a dtype policy. If present, a model uses its policy's loss scale to apply loss scaling in the tf.keras.Model.fit method. This means if Model.fit is used, you do not have to worry about loss scaling at all: the mixed_float16 policy will have a dynamic loss scale by default, and Model.fit will apply it. With custom training loops, the model will ignore the policy's loss scale, and you will have to apply it manually. This is explained in the next section.

Training the model with a custom training loop

So far, you trained a Keras model with mixed precision using tf.keras.Model.fit. Next, you will use mixed precision with a custom training loop. If you do not already know what a custom training loop is, please read the Custom training guide first.

Running a custom training loop with mixed precision requires two changes over running it in float32:

1. Build the model with mixed precision (you already did this).
2. Explicitly use loss scaling if mixed_float16 is used.

For step (2), you will use the tf.keras.mixed_precision.experimental.LossScaleOptimizer class, which wraps an optimizer and applies loss scaling. It takes two arguments: the optimizer and the loss scale. Construct one as follows to use a dynamic loss scale:

optimizer = keras.optimizers.RMSprop()
optimizer = mixed_precision.LossScaleOptimizer(optimizer, loss_scale='dynamic')

Passing 'dynamic' is equivalent to passing tf.mixed_precision.experimental.DynamicLossScale().

Next, define the loss object and the tf.data.Datasets:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                 .shuffle(10000).batch(8192))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(8192)

Next, define the training step function.
Two new methods from the loss scale optimizer are used in order to scale the loss and unscale the gradients:

- get_scaled_loss(loss): Multiplies the loss by the loss scale
- get_unscaled_gradients(gradients): Takes in a list of scaled gradients as inputs, and divides each one by the loss scale to unscale them

These functions must be used in order to prevent underflow in the gradients. LossScaleOptimizer.apply_gradients will then apply gradients if none of them have Infs or NaNs. It will also update the loss scale, halving it if the gradients had Infs or NaNs and potentially increasing it otherwise.

@tf.function
def train_step(x, y):
  with tf.GradientTape() as tape:
    predictions = model(x)
    loss = loss_object(y, predictions)
    scaled_loss = optimizer.get_scaled_loss(loss)
  scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables)
  gradients = optimizer.get_unscaled_gradients(scaled_gradients)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
  return loss

The LossScaleOptimizer will likely skip the first few steps at the start of training. The loss scale starts out high so that the optimal loss scale can quickly be determined. After a few steps, the loss scale will stabilize and very few steps will be skipped. This process happens automatically and does not affect training quality.

Now define the test step:

@tf.function
def test_step(x):
  return model(x, training=False)

Load the initial weights of the model, so you can retrain from scratch:

model.set_weights(initial_weights)

Finally, run the custom training loop.
for epoch in range(5):
  epoch_loss_avg = tf.keras.metrics.Mean()
  test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='test_accuracy')
  for x, y in train_dataset:
    loss = train_step(x, y)
    epoch_loss_avg(loss)
  for x, y in test_dataset:
    predictions = test_step(x)
    test_accuracy.update_state(y, predictions)
  print('Epoch {}: loss={}, test accuracy={}'.format(
      epoch, epoch_loss_avg.result(), test_accuracy.result()))

Epoch 0: loss=3.498969554901123, test accuracy=0.7682999968528748
Epoch 1: loss=0.5141929388046265, test accuracy=0.8604000210762024
Epoch 2: loss=0.3501259684562683, test accuracy=0.9164999723434448
Epoch 3: loss=0.24447472393512726, test accuracy=0.9133999943733215
Epoch 4: loss=0.2905025780200958, test accuracy=0.9549999833106995

GPU performance tips

Here are some performance tips when using mixed precision on GPUs.

Increasing your batch size

If it doesn't affect model quality, try running with double the batch size when using mixed precision. As float16 tensors use half the memory, this often allows you to double your batch size without running out of memory. Increasing batch size typically increases training throughput, i.e. the training elements per second your model can run on.

Ensuring GPU Tensor Cores are used

As mentioned previously, modern NVIDIA GPUs use a special hardware unit called Tensor Cores that can multiply float16 matrices very quickly. However, Tensor Cores require certain dimensions of tensors to be a multiple of 8. In the examples below, the units, filters, and batch_size arguments are the ones that need to be a multiple of 8 for Tensor Cores to be used:

- tf.keras.layers.Dense(units=64)
- tf.keras.layers.Conv2d(filters=48, kernel_size=7, stride=3)
  - And similarly for other convolutional layers, such as tf.keras.layers.Conv3d
- tf.keras.layers.LSTM(units=64)
  - And similar for other RNNs, such as tf.keras.layers.GRU
- tf.keras.Model.fit(epochs=2, batch_size=128)

You should try to use Tensor Cores when possible.
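As a quick illustrative check of the rule above, a small helper can verify that the relevant dimensions are multiples of 8. This function is not part of TensorFlow; it merely encodes the rule of thumb:

```python
def tensor_core_friendly(*dims, multiple=8):
    """Return True when every given dimension is a multiple of `multiple`.

    Hypothetical helper: encodes the rule of thumb that float16 matmul
    dimensions should be multiples of 8 for Tensor Cores to engage.
    """
    return all(d % multiple == 0 for d in dims)

# Dense(units=64) with batch_size=128: both are multiples of 8
print(tensor_core_friendly(64, 128))  # True
# Dense(units=100): 100 is not a multiple of 8
print(tensor_core_friendly(100))      # False
```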
If you want to learn more, NVIDIA's deep learning performance guide describes the exact requirements for using Tensor Cores, as well as other Tensor Core-related performance information.

XLA

XLA is a compiler that can further increase mixed precision performance, as well as float32 performance to a lesser extent. See the XLA guide for details.

Cloud TPU performance tips

As on GPUs, you should try doubling your batch size, as bfloat16 tensors use half the memory. Doubling batch size may increase training throughput. TPUs do not require any other mixed precision-specific tuning to get optimal performance. TPUs already require the use of XLA. They benefit from having certain dimensions being multiples of $128$, but this applies equally to float32 as it does to mixed precision. See the Cloud TPU Performance Guide for general TPU performance tips, which apply to mixed precision as well as float32.

Summary

- You should use mixed precision if you use TPUs or NVIDIA GPUs with at least compute capability 7.0, as it will improve performance by up to 3x.
- You can use mixed precision with the following lines:

  # On TPUs, use 'mixed_bfloat16' instead
  policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
  mixed_precision.set_policy(policy)

- If your model ends in softmax, make sure it is float32. And regardless of what your model ends in, make sure the output is float32.
- If you use a custom training loop with mixed_float16, in addition to the above lines, you need to wrap your optimizer with a tf.keras.mixed_precision.experimental.LossScaleOptimizer. Then call optimizer.get_scaled_loss to scale the loss, and optimizer.get_unscaled_gradients to unscale the gradients.
- Double the training batch size if it does not reduce evaluation accuracy.
- On GPUs, ensure most tensor dimensions are a multiple of $8$ to maximize performance.

For more examples of mixed precision using the tf.keras.mixed_precision API, see the official models repository.
Most official models, such as ResNet and Transformer, will run using mixed precision by passing --dtype=fp16.
https://tensorflow.google.cn/guide/keras/mixed_precision
iOS Frameworks are a mechanism for packaging classes that's easy to use and distribute. Apple provides a number of its own Frameworks—such as UIKit, Foundation, and CoreData—that all iOS developers will be familiar with. These shared resources can be static or dynamic libraries that, when incorporated into your application, provide expanded functionality. They improve cross-project code reuse, and they're the preferred delivery method for third-party libraries. In this article, we'll cover creating an iOS framework from our custom checkbox control. Static vs. Dynamic iOS Frameworks There's a long-running computer science debate on the benefits of static vs. dynamic libraries. Static iOS frameworks are composed of static libraries that are linked at compile time and do not change (hence the name). On the other hand, dynamic frameworks (also referred to as embedded frameworks on iOS) are composed of dynamic units of code that are linked at runtime and thus can change. For a long time, static frameworks were the only option for iOS developers, but since Xcode 6, developers targeting iOS 8 or greater finally have the ability to create dynamic frameworks. Both types can bring different advantages inherently, but on iOS, some factors will make the framework decision for you. For instance, if you're interested in building a framework using Swift, dynamic frameworks are the only option. On the other hand, if you're concerned about broad compatibility—like supporting pre-iOS 8 devices—then static frameworks are your best option. (Here's a brief summary of static versus dynamic libraries.) Before we get too far along, you might want to read my blog about the custom checkbox control example, as it's the basis for the examples for the rest of this article. Creating a Static iOS Framework The process of creating a static iOS framework is quite involved, with a lot of steps. Here's a high-level view: First, we must create a static library. 
- Then, we'll create an aggregate target that will create binaries for all of the architectures we'll need to support.
- Finally, we'll be able to package these files into our framework.

Let's begin by creating a Cocoa Touch Static Library project.

(Figure: Static Library Project Template)

The project will generically be named Checkbox. After we've taken this step, we can import the header and implementation files for the custom checkbox control.

Next, we add an aggregate target (File -> New -> Target, using the Aggregate template) that will build the universal "fat" library; we'll call it UniversalLib.

(Figure: Aggregate Project Template)

In that target's Build Phases tab, add a New Run Script phase. There are two options: type your own script or specify the location of a script.

(Figure: Build Script Entry)

(Figure: Derived Data Folder in Finder)

Now, we've reached the point where we can add a framework target to our project. The framework target is dependent upon the universal library target we just created, and it will encapsulate all necessary binaries and corresponding headers. Select the root item of our project, and add a new Target from the File menu -> New -> Target. Choose the Cocoa Touch Framework template. We'll append the word Kit to our control name by our convention and call the target CheckboxKit.

Next we'll make some changes in the Build Settings tab. There are a number of changes here.

Architectures
- Change the Build Active Architectures Only option to "No" for both Debug and Release

Build Locations
- For Per-configuration Build Products Path, replace the $(EFFECTIVE_PLATFORM_NAME) environment variable with the literal -framework
- For Per-configuration Intermediate Build Files Path, replace the $(EFFECTIVE_PLATFORM_NAME) environment variable with the literal -framework

Deployment
- Change the Strip Style from "Debugging Symbols" to "Non-Global Symbols"

Linking
- Change Dead Code Stripping to "No"
- Change Mach-O Type from "Dynamic Library" to "Static Library"

Now, we'll need to navigate to the Build Phases tab and expand the Target Dependencies node. Click on the + sign to add a new dependency on the UniversalLib target.

(Figure: Target Dependencies)

Next, expand the Headers node, and then expand the child node named Public. We'll find that the control's header was added to the Project node rather than the Public node.
You can simply drag it into the Public node to correct this.

Stay in the Build Phases tab and click the top + sign above the Target Dependencies node. Choose New Run Script Phase. Expand the Run Script node and add the following script:

UNIVERSAL_LIBRARY_PATH=${UNIVERSAL_LIBRARY_DIR}/lib${PROJECT_NAME}.a
FRAMEWORK_DIR=${BUILD_DIR}/${CONFIGURATION}-framework
FRAMEWORK_PATH=${FRAMEWORK_DIR}/${PRODUCT_NAME}.framework

# Copy fat library from UniversalLib target as PRODUCT_NAME.
cp "${UNIVERSAL_LIBRARY_PATH}" "${FRAMEWORK_PATH}/${PRODUCT_NAME}"

# Open framework folder for convenience
open "${FRAMEWORK_DIR}"

We'll also add an import statement to the CheckboxKit.h header so that a user can import the control via a single import statement. In this case:

#import <CheckboxKit/CheckboxKit.h>

Since our framework is relatively simple, this may seem unnecessary. For more complicated frameworks that contain many headers, requiring a single import makes the frameworks much easier to use. The final version should look as follows:

#import <UIKit/UIKit.h>

//! Project version number for CheckboxKit.
FOUNDATION_EXPORT double CheckboxKitVersionNumber;

//! Project version string for CheckboxKit.
FOUNDATION_EXPORT const unsigned char CheckboxKitVersionString[];

#import <CheckboxKit/Checkbox.h>

Finally, we build the project for our framework target to generate the CheckboxKit framework. The containing folder should automatically open as the last step of the script, and the framework will appear in our Derived Data directory:

(Figure: Framework Location in Derived Data)

Our static iOS framework is done! Use it as you would any other third-party framework: add it under the General -> Linked Frameworks and Libraries tab in your project settings. To access the controls in code, you'll simply need to add import statements for any necessary headers.
For this type we can skip directly to creating a project using the Cocoa Touch Framework template and append the word Kit to the control name (CheckboxKit).

(Figure: Framework Project Template)

We'll once again add the header and implementation files for the custom checkbox control to this project. We'll also need to add an import statement for our control headers to the CheckboxKit.h file so that it appears as follows:

#import <UIKit/UIKit.h>

//! Project version number for CheckboxKit.
FOUNDATION_EXPORT double CheckboxKitVersionNumber;

//! Project version string for CheckboxKit.
FOUNDATION_EXPORT const unsigned char CheckboxKitVersionString[];

#import <CheckboxKit/Checkbox.h>

In the Build Phases tab, expand the Headers node; then expand the child node named Public. We'll find that the control's header was added to the Project node, not the Public node. Drag it into the Public node to correct this.

If we build this project, we can already generate a dynamic framework in our Derived Data directory, though this framework is not universal (since we're only generating it against the iOS simulator at this point). We'll need to create a universal "fat" library so our framework can run on both the simulator and the device.

Once again, we'll add another target to our project using the Aggregate template and call it UniversalLib. We'll add a script to the UniversalLib target that handles both building and merging the various architecture binaries. Navigate to the Build Phases tab, click the small + sign, and choose the New Run Script Phase option.

(Figure: Build Script Entry)

Next we'll add a script in the Run Script tab. The script was partially lost in this copy; a working version copies the device build's framework structure and then merges the simulator and device binaries with lipo:

# Copy the framework structure to the universal folder
cp -R "${BUILD_DIR}/${CONFIGURATION}-iphoneos/${PROJECT_NAME}.framework" "${UNIVERSAL_OUTPUTFOLDER}/"
# Merge the simulator and device binaries into a single fat binary
lipo -create -output "${UNIVERSAL_OUTPUTFOLDER}/${PROJECT_NAME}.framework/${PROJECT_NAME}" "${BUILD_DIR}/${CONFIGURATION}-iphonesimulator/${PROJECT_NAME}.framework/${PROJECT_NAME}" "${BUILD_DIR}/${CONFIGURATION}-iphoneos/${PROJECT_NAME}.framework/${PROJECT_NAME}"
# Open the Universal output folder for convenience
open "${UNIVERSAL_OUTPUTFOLDER}"

At this point, we need to make sure we have the same setting for Build Active Architectures across our Project and Targets.
Under the Architectures tab, change the Build Active Architectures Only option to No for both Debug and Release in all of the targets and the project. Now we're able to build our project for the aggregate UniversalLib target. The containing folder should automatically open as the last step of the script, and the framework will appear in our Derived Data directory.

We've created our dynamic iOS framework! You'll need to add the framework to the project under the General -> Embedded Binaries tab in your project settings (which should automatically add it to Linked Frameworks and Libraries). To access the controls in code, you'll simply add import statements for any necessary headers.

iOS Framework Wrap-Up

The topic of iOS frameworks is deeply nuanced, and with Apple's addition of dynamic embedded frameworks, the subject has become even more complex. Ultimately, embedded frameworks will be the future, since they're the only option for Swift, but static frameworks will also remain in use for supporting older versions of iOS. In the next part of this series we'll shift gears to examine how we can connect our native control to Xamarin via a binding library.
https://www.grapecity.com/blogs/how-to-create-an-ios-framework-for-a-custom-control
Hello Mark,

On Tue, Oct 05, 2004 at 02:47:22PM +0100, Mark Fortescue wrote:
> texinfo-4.7 does not cross compile

thank you again for your patch. I thought about the problem and had some ideas. Then I looked at your patch; it seems it is very close to my ideas. I think it needs some more work and then could be integrated into the main tree. (I hope to convince Karl.) If you are willing to work on it, here are some suggestions:

1) I think that the configure script could be switched to a ``tools only mode'' by an external variable, say TEXINFO_CONFIGURE_TOOLS. configure.ac would contain

   AM_CONDITIONAL(TOOLS_ONLY, [[test "$TEXINFO_CONFIGURE_TOOLS" ]])

2) I think the call to secondary configure should move from Makefile to configure.ac. (Secondary configure must be called with e.g. TEXINFO_CONFIGURE_TOOLS=1, as mentioned above.)

3) the build_tools need not be ${build}, it can be a simpler dir name, like "tools"

4) Instead of setting --prefix=$cdir, you should redefine the install rule in the top-level Makefile.am. Something like:

   if TOOLS_ONLY
   install:
           : do nothing
   endif

5) distclean would be broken. One would have to use an automake ``hook'' to fix it. The distclean in the top level Makefile.am should do something like

   rm -rf ${build_tools}

Do you have time and mood to implement this?

Thanks again for your contribution,

Stepan
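[Editorial sketch] Pulling the mail's suggestions together, the top-level changes might look roughly like the following. The variable name and the `tools` directory follow the mail's own suggestions; none of this has been tested against the texinfo tree:

```
# configure.ac: enable tools-only mode via an external variable
AM_CONDITIONAL([TOOLS_ONLY], [test -n "$TEXINFO_CONFIGURE_TOOLS"])

# Makefile.am: redefine install as a no-op in tools-only mode
if TOOLS_ONLY
install:
	: # do nothing
endif

# Makefile.am: automake hook so distclean also removes the tools build dir
distclean-local:
	rm -rf tools
```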
http://lists.gnu.org/archive/html/bug-texinfo/2004-10/msg00014.html
CC-MAIN-2018-43
refinedweb
229
74.59
This is your resource to discuss support topics with your peers, and learn from each other.

02-12-2013 05:52 PM

Hi everyone! I have a problem: if I add a SegmentedControl to a Container with LayoutOrientation.LeftToRight alongside any other control, the UI breaks. For example:

import bb.cascades 1.0

Container {
    layout: StackLayout {
        orientation: LayoutOrientation.LeftToRight
    }
    SegmentedControl {
        verticalAlignment: VerticalAlignment.Center
        options: [
            Option { text: qsTr("One"); }
            ,Option { text: qsTr("Two"); }
            ,Option { text: qsTr("Three"); }
            ,Option { text: qsTr("Four"); }
        ]
    }
    Button {
        verticalAlignment: VerticalAlignment.Center
    }
}

I think it's a bug, but maybe somebody can help me find a workaround? Thanks, Maksim

02-12-2013 06:06 PM

I have tried a few ways to change the size of the SegmentedControl, but none of them worked. You have to let the SegmentedControl occupy the whole width of the screen.

02-12-2013 06:35 PM

Try setting spaceQuota in StackLayoutProperties for the SegmentedControl:

layoutProperties: StackLayoutProperties {
    spaceQuota: 1
}

You can also set maxWidth and minWidth to the width of the root Container using LayoutUpdateHandler.

02-13-2013 03:38 AM

02-13-2013 04:10 AM

I also tried setting min/max Width but it was ignored, so I set preferredWidth and then I can see:

Container {
    SegmentedControl {
        preferredWidth: 500
        Option { text: "one" }
        Option { selected: true; text: "two" }
        Option { text: "three" }
        Option { text: "four" }
    }
}

02-13-2013 06:37 AM

Hi again. Can you please tell me why you want the Segmented Control and Button on the same line? Please explain what you want to achieve. You set orientation: LayoutOrientation.LeftToRight, which means that controls are laid out from left to right. Change it to orientation: LayoutOrientation.TopToBottom and the Button will be under the SegmentedControl.

02-13-2013 06:40 AM

Container {
    layout: StackLayout {
        orientation: LayoutOrientation.LeftToRight
    }
    SegmentedControl {
        verticalAlignment: VerticalAlignment.Center
        layoutProperties: StackLayoutProperties {
            spaceQuota: 9
        }
        options: [
            Option { text: qsTr("One") },
            Option { text: qsTr("Two") },
            Option { text: qsTr("Three") },
            Option { text: qsTr("Four") }
        ]
    }
    Button {
        verticalAlignment: VerticalAlignment.Center
        layoutProperties: StackLayoutProperties {
            spaceQuota: 1
        }
    }
}

02-13-2013 07:21 AM

igosoft thank you for the quick reply! No, LeftToRight is correct. Our UI designer wants to add a button here. I tried your code, and at first it looks good, but if you change the selected option you can see an incorrect position of the selection area. Looks like the selection area size and/or position is hardcoded. So the main problem now is the incorrect size/position of the selection area.

02-13-2013 07:33 AM

Please ask the UI designer to change the UX/UI of this screen.

02-13-2013 09:19 AM

Yes, I also ran into this kind of issue. In the end I had to change the SegmentedControl into an ImageToggleButton because the UI design could not be changed too much. You can have a try with ImageToggleButton.
https://supportforums.blackberry.com/t5/Native-Development/change-size-of-SegmentedControl-it-possible/m-p/2161537/highlight/true
CC-MAIN-2016-30
refinedweb
459
50.63
Change the API of MArray to allow resizable arrays.

[project @ 2002-08-08 22:29:28 by reid]
Hugs provides makeForeignPtr instead of newForeignPtr. It is hoped that these macros overcome the difference.

+ #ifdef __HUGS__
+ #define MAKE_ARRAY(x) makeForeignPtr (x) free
+ #else
+ #define MAKE_ARRAY(x) newForeignPtr (x) (free (x))
+ #endif

I could probably get away with introducing a Haskell function instead of a macro. [Untested since Data.Array.Base requires the pattern guard extension so I can't load it. Still, I think this will be ready to go once we fix D.A.B]

[project @ 2001-07-04 10:51:09 by simonmar]
oops, better import Prelude (we have to explicitly import Prelude in all modules that aren't compiled with -fno-implicit-prelude so that ghc --make gets the dependencies right. This should really be fixed in CompManager somehow).
http://git.haskell.org/packages/random.git/atom?f=Data/Array/Storable.hs
CC-MAIN-2019-43
refinedweb
141
66.03
Alexander Graf wrote: > On 24.11.2009, at 19:33, Jan Kiszka wrote: > >> Alexander Graf wrote: >>> On 24.11.2009, at 19:12, Jan Kiszka wrote: >>> >>>> Alexander Graf wrote: >>>>> On 24.11.2009, at 19:01, Jan Kiszka wrote: >>>>> >>>>>> Alexander Graf wrote: >>>>>>> While x86 only needs to sync cr0-4 to know all about its MMU state and >>>>>>> enable >>>>>>> qemu to resolve virtual to physical addresses, we need to sync all of >>>>>>> the >>>>>>> segment registers on PPC to know which mapping we're in. >>>>>>> >>>>>>> So let's grab the segment register contents to be able to use the "x" >>>>>>> monitor >>>>>>> command and also enable the gdbstub to resolve virtual addresses. >>>>>>> >>>>>>> I sent the corresponding KVM patch to the KVM ML some minutes ago. >>>>>>> >>>>>>> Signed-off-by: Alexander Graf <address@hidden> >>>>>>> --- >>>>>>> target-ppc/kvm.c | 30 ++++++++++++++++++++++++++++++ >>>>>>> 1 files changed, 30 insertions(+), 0 deletions(-) >>>>>>> >>>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c >>>>>>> index 4e1c65f..566513f 100644 >>>>>>> --- a/target-ppc/kvm.c >>>>>>> +++ b/target-ppc/kvm.c >>>>>>> @@ -98,12 +98,17 @@ int kvm_arch_put_registers(CPUState *env) >>>>>>> int kvm_arch_get_registers(CPUState *env) >>>>>>> { >>>>>>> struct kvm_regs regs; >>>>>>> + struct kvm_sregs sregs; >>>>>>> uint32_t i, ret; >>>>>>> >>>>>>> ret = kvm_vcpu_ioctl(env, KVM_GET_REGS, ®s); >>>>>>> if (ret < 0) >>>>>>> return ret; >>>>>>> >>>>>>> + ret = kvm_vcpu_ioctl(env, KVM_GET_SREGS, &sregs); >>>>>>> + if (ret < 0) >>>>>>> + return ret; >>>>>>> + >>>>>>> env->ctr = regs.ctr; >>>>>>> env->lr = regs.lr; >>>>>>> env->xer = regs.xer; >>>>>>> @@ -125,6 +130,31 @@ int kvm_arch_get_registers(CPUState *env) >>>>>>> for (i = 0;i < 32; i++) >>>>>>> env->gpr[i] = regs.gpr[i]; >>>>>>> >>>>>>> +#ifdef KVM_CAP_PPC_SEGSTATE >>>>>>> + if (kvm_check_extension(env->kvm_state, KVM_CAP_PPC_SEGSTATE)) { >>>>>>> + env->sdr1 = sregs.sdr1; >>>>>>> + >>>>>>> + /* Sync SLB */ >>>>>>> + for (i = 
0; i < 64; i++) { >>>>>>> + ppc_store_slb(env, sregs.ppc64.slb[i].slbe, >>>>>>> + sregs.ppc64.slb[i].slbv); >>>>>>> + } >>>>>>> + >>>>>>> + /* Sync SRs */ >>>>>>> + for (i = 0; i < 16; i++) { >>>>>>> + env->sr[i] = sregs.ppc32.sr[i]; >>>>>>> + } >>>>>>> + >>>>>>> + /* Sync BATs */ >>>>>>> + for (i = 0; i < 8; i++) { >>>>>>> + env->DBAT[0][i] = sregs.ppc32.dbat[i] & 0xffffffff; >>>>>>> + env->DBAT[1][i] = sregs.ppc32.dbat[i] >> 32; >>>>>>> + env->IBAT[0][i] = sregs.ppc32.ibat[i] & 0xffffffff; >>>>>>> + env->IBAT[1][i] = sregs.ppc32.ibat[i] >> 32; >>>>>>> + } >>>>>>> + } >>>>>>> +#endif >>>>>>> + >>>>>>> return 0; >>>>>>> } >>>>>>> >>>>>> What about KVM_SET_SREGS in kvm_arch_put_registers? E.g. to play back >>>>>> potential changes to that special registers someone did via gdb? >>>>> I don't think you can actually change the segment values. At least I >>>>> can't imagine why. >>>> Dunno about PPC in this regard and how much value it has, but we have >>>> segment register access via gdb for x86. >>> The segments here are more like PLM4 on x86. >> Even that will be settable one day (gdb just do not yet know much about >> x86 system management registers). >> >>>>> I definitely will implement SET_SREGS as soon as your sync split is in, >>>>> as that's IMHO only really required on migration. >>>>> >>>> Migration is, of course, the major use case. >>>> >>>> Still I wonder why not making this API symmetric when already touching it. >>> I was afraid to introduce performance regressions - setting the segments >>> means flushing the complete shadow MMU. >>> >> Unless it costs milliseconds, not really critical, given how often >> registers are synchronized. >> >> BTW, I noticed that ppc only syncs the SREGS once on init, not on reset >> - are they static? > > So far SREGS are only used for setting the PVR (cpuid in x86 speech). There's > no need to reset that on reset :-). 
Then I don't get why you need to re-read them during runtime - user space should know the state and should be able to push it into the CPUState on init. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux
https://lists.gnu.org/archive/html/qemu-devel/2009-11/msg01531.html
CC-MAIN-2019-35
refinedweb
581
67.15
MVC :: Forms Authentication To Hide / Show Website Elements? Jun 1, 2010 When I print [Code].... [Code].... I am trying to hide a group of textfields I have nested in a div element. Here is an example of a div element I am trying to hide along with the script function: [Code].... [Code].... I am trying to force the Logon popup to show when the session times out in an Integrated Windows Authentication enabled website. The session_timeout is firing during the session timeout, but the User.Identity.IsAuthenticated is true. How do I force the Windows Logon Screen to be used when the session times out? View 4 Replies I am trying to display elements in a web page using <li>. The structure of the page is as follows <ul> <li> (How to hide its visibility if this has no child elements) <ul> <li> item 1</li> <li> item 2</li> <li> item 3</li> </ul> </li> </ul> My question is if the li element has no child items how do I hide it. I need to do this dynamically. If I find that I have no records to display as <li> item </li>....I should be able to hide the parent <li>. One solution is to make the <li id="something" runat=server>; if the child elements are not to be shown I tried doing childelement.parent.Visible = false. Is there a way to prevent non-displayed elements from appearing in the ASPX Design View editor? By "non-displayed elements", I mean the background elements (Managers, DataSources, Validators, etc) that show up as grey boxes containing the type and id. If I have several of those at the top of the page, I can't see much of the preview of my page. View 1 Replies I have one dropdownlist like this below.. the problem is that if I have more than 30 ListItems the list will display 30 elements/rows and scroll the rest. Is there a way to show only 10 elements and scroll the rest? I read in some pages that this is not possible. Is that true? <asp:DropDownList <asp:ListItem</asp:ListItem> <asp:ListItem</asp:ListItem> . . .
<asp:ListItem</asp:ListItem> </asp:DropDownList> Is it possible to show empty items in a ListView in ASP.NET 4.0? For example, there are 2 items in the datasource, but I want to show them + 2 other empty items. View 2 Replies Basically I want to load an HTML document and use controls such as multiple check boxes which will be programmed to hide, delete or show HTML elements with certain ID's. So I am thinking I would have to set an inline CSS property for visibility to: false on the ones I want to hide, or delete them altogether when necessary. I need this so I don't have to edit my Ebay HTML templates in Dreamweaver all the time, where I usually have to scroll around messy code and manually delete or add tags and their respective content. Whereas I just want to create one master template in Dreamweaver which has all the variations that my products have, since they are all of the same genre with slight changes here and there, and I just need to enable and disable the visibility of these variants as required and copy + paste the final HTML. I haven't used Windows Forms before, but tried doing this in WebForms which I do know a bit. I am able to get the result that I want by wrapping any HTML elements in a <asp:PlaceHolder></asp:PlaceHolder> and just setting that place holder's visibility to false after the associated checkbox is checked and a postback occurs; finally I add a checkbox/button control that removes all the checkboxes, including itself etc, for the final HTML. But this method seems just like too much pain in the ass as I have to add the placeholder tags around everything that I need control over, as ordinary HTML elements do not run at server; also WebForms injects a bunch of Javascript and ViewState data so I don't have clean HTML which I can just copy after viewing the page source. Any tips/code that you can suggest to achieve the desired effect with the least changes required to existing HTML documents?
Ideally I would want to load the HTML document in, have a live design preview of it and underneath have a bunch of well labelled checkboxes programmed to hide, delete or show elements with certain ID's. I have looked all over for elegant solutions to this not so age-old question. How can I lock down form elements within an ASP.Net MVC View, without adding if...then logic all over the place? Ideally the BaseController, either from OnAuthorization or OnResultExecution, would check the rendering form elements and hide/not render them based on role and scope. Another approach I have considered is writing some sort of custom attributes, so as to stay consistent with how we lock down ActionResults with [Authorize]. Is this even possible without passing a list of hidden objects to the view and putting if's all over? Other background info: We will have a database that will tell us at execution time (based on user role/scope) what elements will be hidden. We are using MVC3 with the Razor ViewEngine. We're utilizing a BaseController where any of the Controller methods can be overridden. View 1 Replies I want to hide or show a wizard step based on the inputs on the previous page. For example, if someone chooses "4", step 4 should be visible; else step 4 should be hidden. I tried something like this [Code].... I am using the standard template for a web form with login and menu.. I would like to disable the menu when you enter the page (done this by visible = false), but after a user has logged in, I want it to be shown.. How do I get hold of the navigation menu when a user logs in? I want to make a table visible when a checkbox is ticked and invisible when unchecked, any suggestions? At the moment I have a webform with 1 checkbox and my table which I would like to show/hide. View 13 Replies. I am in the process of re-developing my website into ASP.NET from a product called LogiXML.
My background is in front end design using XHTML, CSS and Javascript with a good understanding of SQL, so this is a HUGE leap for me. In one of my Views I have created an asp data source to count the number of scheduled meetings: <asp:SqlDataSource ID="CountRS" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString %>" SelectCommand=" SELECT count(*) as countRS FROM WebMeeting_Calendar WHERE MeetingDate > getdate() and MeetingType = 2 "> </asp:SqlDataSource> Next, I created an asp panel where I wanted to control the "Visible" element by the value returned in the data source, but even using a simple 1 = 0 (to return False) is not working for me.... Basically, if the count of meetings is zero, I don't want the panel to show. <asp:Panel ID="Panel1" runat="server" Visible="<%# 1 = 0 %>" > </asp:Panel> I have a page with a label and a textbox with two buttons, edit and update. When the user clicks on edit the labels get hidden and the text box appears. And when he clicks on update the textbox disappears and the label appears. I have achieved making the text box disappear and the label appear, but when I click on edit the textbox is not appearing .... using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; public partial class Institute : System.Web.UI.Page { [Code] .... I'm trying to show/hide panels in a masterpage depending on what page the user is on. I was trying this.. [Code].... But it didn't work. Basically I have a dropdownlist (ddlSystem) and an AJAX update panel. Inside the update panel I have an asp:GridView (gvSystemSpecs) and an asp:Image (imgLoad). For triggers on the update panel I have: [Code].... When the user changes ddlSystem it makes a service call to pull back a bunch of data and populate the gridview. I want to be able to show a loading image (imgLoad) when ddlSystem is changed.
Once the function to pull data and populate the gridview is complete I need to hide the loading image. I think I'll need to use some JavaScript, but whatever I try doesn't work and I don't know if the update panel has anything to do with it... I am using a ListView to display a list of Products. My DB has a field named productImage. What I want to do is show this row if there is a picture associated with the product, and hide this row if there is no picture associated with the product. So on the ListView_ItemCreated event I have this: [Code]... I placed 2 user controls, UC1 and UC2, in the main.aspx page. I need to show or hide some controls in UC2 from the button in UC1. How? View 5 Replies I have 3 linkbuttons in a footer-template and want to be able to hide/show one of them at will. How to do so? View 8 Replies How to make a Panel visible on a dropdown selected value without reloading the page... View 1 Replies I have the following line on my aspx page.. [Code].... My issue is that I only need this to display when my datalist is displayed and has records.. every time else, it needs to be hidden ( Visible = false ). On page_load I have it set to btnAddVideo.Visible = false; btnAddVidCart.Visible = false; Should I only check then within my dropdown event? By default the page is displaying my datalist, so it should be hidden.. then you have a dropdown to make a selection; if you change it, then it should show ONLY if there are records to be displayed.. View 14 Replies
http://asp.net.bigresource.com/MVC-Forms-authentication-to-hide-show-website-elements--bfOaE8bbX.html
CC-MAIN-2019-09
refinedweb
1,680
71.95
#include <stdlib.h>

int putenv(char *string);

The putenv() function makes the value of the environment variable name equal to value by altering an existing variable or creating a new one. In either case, the string pointed to by string becomes part of the environment, so altering the string will change the environment. The string argument points to a string of the form name=value. The space used by string is no longer used once a new string-defining name is passed to putenv(). The putenv() function uses malloc(3C) to enlarge the environment. After putenv() is called, environment variables are not in alphabetical order.

Upon successful completion, putenv() returns 0. Otherwise, it returns a non-zero value and sets errno to indicate the error.

The putenv() function may fail if:

ENOMEM - Insufficient memory was available.

The putenv() function can be safely called from multithreaded programs. Caution must be exercised when using this function and getenv(3C) in multithreaded programs. These functions examine and modify the environment list, which is shared by all threads in a program. The system prevents the list from being accessed simultaneously by two different threads. It does not, however, prevent two threads from successively accessing the environment list using putenv() or getenv().

See attributes(5) for descriptions of the following attributes: exec(2), getenv(3C), malloc(3C), attributes(5), environ(5), standards(5)

The string argument should not be an automatic variable. It should be declared static if it is declared within a function, because its contents must remain valid after the function returns. A potential error is to call putenv() with a pointer to an automatic variable as the argument and to then exit the calling function while string is still part of the environment.
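The man page above documents the C interface. As an illustrative aside (not part of the man page), the same name=value semantics are reachable from Python, where assigning into os.environ alters the process environment much as putenv() does, and the change is inherited by child processes (cf. exec(2)):

```python
import os
import subprocess
import sys

# Assigning into os.environ is roughly Python's analogue of
# calling putenv("DEMO_VAR=hello"): it alters (or creates) the
# environment variable in the current process.
os.environ["DEMO_VAR"] = "hello"

# The modified environment is inherited across exec, so a child
# interpreter sees the new value.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # hello
```

Note that Python manages the backing storage itself, so the C-specific caveat about automatic variables does not arise here.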
http://docs.oracle.com/cd/E36784_01/html/E36874/putenv-3c.html
CC-MAIN-2016-26
refinedweb
286
55.24
Don Knuth is often quoted as saying, "premature optimization is the root of all evil" when it comes to computer programming. Attribution usually comes from "Structured Programming with go to Statements," a journal article he published in the mid-1970s. Although the phrase makes for a great soundbite, I think his entire explanation makes the point better: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

The trick to optimizing code is to learn when you are chasing the 97% tail versus getting to the 3% heart of it. Experience helps greatly with answering that question but so can simple tests and empirical data. When scripting with Python geometry libraries (ArcPy, ArcGIS API for Python, Shapely, GeoDjango, etc....), it is quite common to encounter Python lists containing geometry coordinates, and turning those coordinates into geometry objects involves calling geometry constructors. For ArcPy geometry classes, like most Python classes, the default constructor is accessed by calling the class and passing arguments. For Polygon—ArcPy classes | ArcGIS Desktop, the syntax is: Polygon(inputs, {spatial_reference}, {has_z}, {has_m})

In addition to the ArcPy geometry class constructors, there are several other constructors for creating ArcPy geometries:

- FromWKT—ArcPy Functions | ArcGIS Desktop - Create a new Geometry object from a well-known text (WKT) string.
- FromWKB—ArcPy Functions | ArcGIS Desktop - Create a new geometry object from a well-known binary (WKB) string stored in a bytearray.
- AsShape—ArcPy Functions | ArcGIS Desktop - Converts Esri JSON or GeoJSON to ArcPy geometry or feature set objects.

Given there are multiple ways to construct ArcPy geometries, it is reasonable for someone to wonder which constructor they should or shouldn't use. The descriptions of arcpy.FromWKT(), arcpy.FromWKB(), and arcpy.AsShape() tell us those constructors work with specific geometry representations or encodings. When it comes to which constructor someone should use, I think Don Knuth would argue the one that most closely matches your data's existing structure, i.e., don't overthink it. I recently had reason to overthink ArcPy geometry constructors, or thought I had reason to, so I set about running some basic timing tests to gather more information before deciding whether to refactor some code involving arcpy.Polygon(). Using the simple, multipart polygon from A Case of Missing Prefixes: ArcGIS ...Geometries, I created four tests constructing the geometry from a Python list containing coordinates:

import arcpy
import timeit

poly_rings = [
    [[15,0], [25,0], [25,10], [15,10], [15,0]],
    [[18,13], [24,13], [24,18], [18,18], [18,13]]
]

def FromArcPyArray():
    aarr = arcpy.Array(
        arcpy.Array(arcpy.Point(*xy) for xy in ring) for ring in poly_rings
    )
    return arcpy.Polygon(aarr)

def FromEsriJSON():
    esri_json = {"type":"Polygon", "rings":poly_rings}
    return arcpy.AsShape(esri_json, True)

def FromGeoJSON():
    geojson = {"type":"Polygon", "coordinates":poly_rings}
    return arcpy.AsShape(geojson)

def FromWKT():
    wkt = "MULTIPOLYGON({})".format(
        ",".join("(({}))".format(
            ", ".join("{} {}".format(*xy) for xy in ring)
        ) for ring in poly_rings)
    )
    return arcpy.FromWKT(wkt)
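As an aside, the WKT assembly inside FromWKT() is plain string formatting, so it can be pulled out into a standalone helper that needs no arcpy at all, which is handy for checking the string construction in isolation. A sketch (the helper name is mine, and like the code above it treats every ring as a separate outer ring rather than a hole):

```python
def rings_to_multipolygon_wkt(rings):
    """Build a MULTIPOLYGON WKT string from a list of coordinate rings.

    Each ring becomes its own single-ring polygon, mirroring the
    FromWKT() constructor above (holes are not handled).
    """
    return "MULTIPOLYGON({})".format(
        ",".join("(({}))".format(
            ", ".join("{} {}".format(*xy) for xy in ring)
        ) for ring in rings)
    )

poly_rings = [
    [[15, 0], [25, 0], [25, 10], [15, 10], [15, 0]],
    [[18, 13], [24, 13], [24, 18], [18, 18], [18, 13]],
]
print(rings_to_multipolygon_wkt(poly_rings))
# MULTIPOLYGON(((15 0, 25 0, 25 10, 15 10, 15 0)),((18 13, 24 13, 24 18, 18 18, 18 13)))
```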
When it comes to which constructor someone should use, I think Don Knuth would argue the one that most closely matches your data's existing structure, i.e., don't overthink it. I recently had reason to overthink ArcPy geometry constructors, or thought I had reason to, so I set about running some basic timing tests to gather more information before deciding whether to refactor some code involving arcpy.Polygon(). Using the simple, multipart polygon from A Case of Missing Prefixes: ArcGIS ...Geometries, I created four tests constructing the geometry from a Python list containing coordinates: import arcpy import timeit poly_rings = [ [[15,0], [25,0], [25,10], [15,10], [15,0]], [[18,13], [24,13], [24,18], [18,18], [18,13] ]] def FromArcPyArray(): aarr = arcpy.Array( arcpy.Array(arcpy.Point(*xy) for xy in ring) for ring in poly_rings ) return arcpy.Polygon(aarr) def FromEsriJSON(): esri_json = {"type":"Polygon", "rings":poly_rings} return arcpy.AsShape(esri_json, True) def FromGeoJSON(): geojson = {"type":"Polygon", "coordinates":poly_rings} return arcpy.AsShape(geojson) def FromWKT(): wkt = "MULTIPOLYGON({})".format( ",".join("(({}))".format( ", ".join("{} {}".format(*xy) for xy in ring) ) for ring in poly_rings) ) return arcpy.FromWKT(wkt) Using 26.6. timeit — Measure execution time of small code snippets — Python 2.7.15 documentation from Python 2.7.14 bundled with ArcGIS Desktop 10.6.1: >>> for ctor in [FromArcPyArray, FromEsriJSON, FromGeoJSON, FromWKT]: ... pg = ctor() ... print("\n".join( ... str(i) for i in [ctor.__name__, timeit.timeit(ctor, number=10000), ""] ... )) ... FromArcPyArray 20.2141071389 FromEsriJSON 4.77303549343 FromGeoJSON 20.2831866771 FromWKT 4.03049759916 >>> I must admit, the results aren't what I was expecting. I expected some timing differences between the various constructors, but I didn't expect some to be 5x faster than others. What I really didn't expect is the ArcPy Polygon constructor nearly being the slowest. 
Since I have ArcGIS Desktop 10.6.1 and ArcGIS Pro 2.2.3 on the same machine, I just had to run the same tests using 27.5. timeit — Measure execution time of small code snippets — Python 3.6.7 documentation from Python 3.6.5 bundled with ArcGIS Pro 2.2.3: >>> for ctor in [FromArcPyArray, FromEsriJSON, FromGeoJSON, FromWKT]: ... pg = ctor() ... print("\n".join( ... str(i) for i in [ctor.__name__, timeit.timeit(ctor, number=10000), ""] ... )) ... FromArcPyArray 10.2499491093713 FromEsriJSON 0.9167164168891304 FromGeoJSON 9.85043158674398 FromWKT 0.5525892638736423 >>> What?! This goes beyond unexpected, this is outright surprising. It is good to see a nearly 50% decrease in the ArcPy Polygon constructor, but it is amazing to see 80% and 85% decreases in Esri JSON and WKT constructors. The Esri JSON constructor went from 4x to 11x faster than the ArcPy Polygon constructor, and the WKT constructor is now 19x faster! When basic timing tests come back with results this surprising, one has to wonder whether the relative timing differences will hold up when the tests use larger, real-world data. To answer that question, I downloaded the USA States layer package included with Esri Data & Maps and available on ArcGIS.com. I wanted a multipart polygon, and Michigan was the first state that came to mind. It turns out, because of all the small islands in the Great Lakes, Michigan is a very multipart polygon: 450 parts and 166,107 points. >>> for ctor in [FromArcPyArray, FromEsriJSON, FromGeoJSON, FromWKT]: ... pg = ctor() ... print("\n".join( ... str(i) for i in [ctor.__name__, timeit.timeit(ctor, number=100), ""] ... )) ... FromArcPyArray 1267.5038210736802 FromEsriJSON 141.83130611867819 FromGeoJSON 464.2651427417986 FromWKT 86.92622569438026 >>> For the most part, the relative results stay consistent when using a larger and more complex multipart polygon. 
The ArcPy Polygon constructor does scale slightly better than the Esri JSON and WKT constructors, going from 11x and 19x slower to 9x and 15x slower respectively, but those improvements aren't nearly enough to make up for the overall slowness of the constructor. Overall, I don't really know what to think about the GeoJSON constructor. With small and simple polygons, it is as slow or slower than the ArcPy Polygon constructor. With larger polygons it scales better than all of the other constructors, in relative terms, but it is still quite slow overall. Comparing the timings between the simple example polygon and the Michigan polygon, the constructors appear to scale roughly linearly with the number of points/vertices in the polygon. For the Michigan polygon the number of iterations was lowered 2 orders of magnitude (10,000 to 100) while the run times increased by roughly 2 orders of magnitude, leading to run times that are 4 orders of magnitude longer. The magnitude of increase in run times is matched by an equal magnitude of increase points/vertices (10 to 166,107). The results of these tests surprised me, truly. I am not going to wholesale abandon the ArcPy geometry default constructors, but I do think they are worth a solid look when optimizing code. Nice job... similar ratios using python 3.6.x 6.598932267703276 0.6802073404051043 5.878672602499137 0.43038649174741295
https://community.esri.com/blogs/tilting/2018/12/06/biding-time-arcpy-geometry-constructors
CC-MAIN-2019-04
refinedweb
1,108
57.06
That may seem obvious, but I've seen contradictory statements: Is JPA part of EJB 3.0? I'm no specialist and it's quite confusing for me. If so, does JPA manipulate Entity Beans? These entity beans being the interface between the persistence layer and the business layer, which implements the logic with stateless beans? The underlying question for me is how to implement a "search for user based on various criteria" function, and where the "search" request (its string representation) should be built. I mean, if JPA is not part of EJB, my beans shouldn't be aware of the data model, right? Tell me if I'm completely wrong but I'm kind of lost between all the J* and the JSR XXX... :-) Thanks

Is JPA part of EJB 3.0? Yes and no... Yes, because every application server claiming to implement the EJB 3.0 spec must also provide a JPA implementation. No, because JPA can easily be used outside of EJB, in standalone applications or Spring-managed ones.

Does JPA manipulate Entity Beans? Entity beans were a scary idea in pre-3.0 EJBs. JPA uses the term entities to distinguish itself from the disgraceful history. But there is nothing preventing you from using JPA in EJB. In your scenario you'll have a stateless EJB constructing the query and interacting with the database via JPA. Technically speaking you will call methods on an EntityManager injected into your bean:

@Stateless
public class SearchService {
    @PersistenceContext
    private EntityManager em;

    public List<User> findUsersBornAfter(Date date) {
        return em.
            createQuery("SELECT u FROM User u WHERE u.birthDate > :birthDate ORDER BY name").
            setParameter("birthDate", date).
            getResultList();
    }
}

As you can see the business layer is aware of the data model (obviously), but there is no dependency on EJB/business services as far as entities are concerned. In this example the JPQL query is formed in the service layer and User is a JPA entity.
Calling getResultList() causes the JPA provider to translate the JPQL to SQL, run the query and translate the results back to User object instances. Is the border between EJB and JPA clear now?
https://codedump.io/share/1WYt8DHXzKEH/1/boundary-between-ejb-30-and-jpa
CC-MAIN-2017-34
refinedweb
347
57.57
online arcade game called ping, online mario arcade games, x man arcade games, trackand field games arcade games, reflexive arcade games keygen fff. ms pac-man arcade game, list of 360 arcade games, cruising driving arcade game, scary arcade games, game dollar arcade. import arcade games, arcade game icons, insane arcade games, fairies game from real arcade, drive arcade games 1320 moto urban fever. arcade game character qball, arcade games on computer, haunted house cherry master arcade game, pin-up art arcade game, arcade games from the, hydro thunder arcade game for sale. ninja kiwi games arcade power pool, school arcade games, smartphone arcade games, 926 reflexive arcade games, pinball arcade game rentals milwaukee wi. arcade games at home robotron 2084, arcade games 1980s, free arcade fishing games, 180 arcade games, ms. pacman arcade game. unlock xbox 360 arcade games, xbox 360 arcade original xbox games, classic arcade games all in one, ski arcade dance interactive arcade game, president arcade games. anacon arcade games, extreme hunting arcade game cheat, ambulance arcade game, retro arcade games downloads, x man arcade games. arcade game time crisis 1, top 20 arcade games, epoc games arcade pocket hockey pong, arcade video racing games, austin texas arcade restaurant games pool. arcade game pad ps2, tmnt the arcade game, play arcade games skill crane, game spy arcade adware, circus charlie arcade game. new free arcade games download, wwf arcade game, first video arcade game, spy hunter old computer game arcade, the simpsons arcade game flash. original xbox games on xbox 360 arcade, coin mechanism for 1956 arcade game, classic arcade games pool, online arcade games sand zombie, on line free arcade games. 
Categories - Iphone arcade games - vietnam arcade games star wars arcade game replacement gears top 25 arcade games top skater arcade game photohunt arcade game fast and the furious arcade games games like arcade and dress up marquee overlay arcade video games top arcade games amusement - 1990 arcade games online - fat arcade games - golden axe arcade game - q bert arcade game - bosconian arcade game - marquee overlay arcade video games - nick arcad . com / games - arcade games for businesses - games arcades downloads cadillacs dinossauros - extreme hunting arcade game cheat
http://manashitrad.ame-zaiku.com/pac-land-arcade-game.html
CC-MAIN-2019-04
refinedweb
362
55.58
The Beautiful Soup Python library is an excellent way to scrape web pages for their content. I recently wanted a reasonably accurate list of official (ISO 3166-1) two-letter codes for countries, but didn't want to pay CHF 38 for the official ISO document. The Wikipedia article ISO 3166-1 alpha-2 contains this information in an HTML table which can be scraped quite easily as follows.

First, get a local copy of the Wikipedia article:

import urllib.request

url = ''
req = urllib.request.urlopen(url)
article = req.read().decode()
with open('ISO_3166-1_alpha-2.html', 'w') as fo:
    fo.write(article)

Then load it and parse it with Beautiful Soup. Extract all the <table> tags and search for the one with the headings corresponding to the data we want. Finally, iterate over its rows, pulling out the columns we want and writing the cell text to the file 'iso_3166-1_alpha-2_codes.txt'. The file should be interpreted as utf-8 encoded – your browser may or may not realise this.

from bs4 import BeautifulSoup

# Load article, turn into soup and get the <table>s.
article = open('ISO_3166-1_alpha-2.html').read()
soup = BeautifulSoup(article, 'html.parser')
tables = soup.find_all('table', class_='sortable')

# Search through the tables for the one with the headings we want.
for table in tables:
    ths = table.find_all('th')
    headings = [th.text.strip() for th in ths]
    if headings[:5] == ['Code', 'Country name', 'Year', 'ccTLD', 'ISO 3166-2']:
        break

# Extract the columns we want and write to a semicolon-delimited text file.
with open('iso_3166-1_alpha-2_codes.txt', 'w') as fo:
    for tr in table.find_all('tr'):
        tds = tr.find_all('td')
        if not tds:
            continue
        code, country, year, ccTLD = [td.text.strip() for td in tds[:4]]
        # Wikipedia does something funny with country names containing
        # accented characters: extract the correct string form.
        if '!' in country:
            country = country[country.index('!')+1:]
        print('; '.join([code, country, year, ccTLD]), file=fo)

Comments are pre-moderated.
Comments

Radish, 10 months, 3 weeks ago:
Thanks, very useful! I'm something of a beginner with Python and have adapted this to what I'm working on. How would you go about putting this into a pandas dataframe?

christian, 10 months, 3 weeks ago:
Hello, Radish. There is a follow-up post that deals with using Pandas. Christian

Sabika, 10 months, 3 weeks ago:
Hi Christian, I have the same requirement of reading the table contents into a pandas dataframe. The solution you have provided reads the table contents but does not store them in a dataframe. If you have something similar, please help. Hi Radish, if you have found any such solution, kindly share it. Thanks in advance!

christian, 10 months, 3 weeks ago:
Do you mean you wish to read the semicolon-delimited text file, iso_3166-1_alpha-2_codes.txt, into a Pandas dataframe? Does this work:

    import pandas as pd
    df = pd.read_csv('iso_3166-1_alpha-2_codes.txt', sep=';', header=None,
                     names=['code', 'country', 'year', 'ccTLD'])
https://scipython.com/blog/scraping-a-wikipedia-table-with-beautiful-soup/
These are chat archives for devslopes/swiftios9.

How can I have a private var in my superclass and set it in my subclass when both classes are not defined in the same file? (I get the following error: use of unresolved identifier '_hp'.) I think I'm running into the access control rule that private entities can only be accessed from within the source file where they are defined. P.S. I'm currently building the OOP exercise app :smile: :+1:

    class Character {
        private var _hp = 100
    }

    class Solider: Character {
        func attemptAttack(attackPower: Int) {
            self._hp -= attackPower // This line gives the error when the classes are in separate files
        }
    }

"Swift provides three different access levels for entities within your code. These access levels are relative to the source file in which an entity is defined, and also relative to the module that source file belongs to."

The usual way around it is to keep the stored property private and expose it through a computed property:

    private var _hp: Int
    var hp: Int {
        return _hp
    }

lecture 47 (the OOP exercise). I normally use the accessor, as I learned that in this course, but I found it strange that you can't access a private var from a subclass; in essence it is the same class.

    import Foundation

    class Character {
        private var _health: Int = 100
        private var _strength: Int = 10

        var strength: Int {
            get { return _strength }
            set { _strength = newValue }  // assign the incoming value, not the property itself
        }

        var health: Int {
            return _health
        }

        var isAlive: Bool {
            return health > 0
        }

        init(startHealth: Int, startStrength: Int) {
            self._health = startHealth
            self._strength = startStrength
        }

        func attemptAttack(attackPower: Int) -> Bool {
            self._health -= attackPower
            return true
        }
    }

    import Foundation

    class Enemy: Character {
        var loot: [String] {
            return ["Rusty Dagger", "Lint"]
        }

        var type: String {
            return "Grunt"
        }

        func dropLoot() -> String? {
            if !isAlive {
                let rand = Int(arc4random_uniform(UInt32(loot.count)))
                return loot[rand]
            } else {
                return nil
            }
        }
    }

@Wrenbjor I have the same setup but wanted to have a special case for my enemy, which gains extra HP when the attackPower is lower than its IMMUNE_MAX. Of course I can add a function which does that in the superclass, but this undermines (my) idea of polymorphism/inheritance.
https://gitter.im/devslopes/swiftios9/archives/2015/11/03?at=5638ccc264376ec44425e13b
ANTON WEBERN
THE PATH TO THE NEW MUSIC
Edited by Willi Reich

THEODORE PRESSER COMPANY . BRYN MAWR . PENNSYLVANIA
in association with UNIVERSAL EDITION . LONDON . MAINZ . ZURICH . VIENNA

Cover design: Willi Bahner. English Edition Copyright 1963 by Theodore Presser Co., Bryn Mawr, Pennsylvania. Original German Edition Copyright 1960 by Universal Edition A.G., Wien. All rights strictly reserved in all countries. No part of this publication may be translated or reproduced without the consent of the publishers.

CONTENTS
Preface 7
The Path to the New Music
The Path to Twelve-Note Composition 42
Postscript 57

Translated by Leo Black

TRANSLATOR'S PREFACE
A note on some frequently-occurring terms may be appropriate. "Note" has been used throughout; perhaps apologies are due to American and German readers who would have preferred "tone". In these lectures "tone" is frequently used with its specific English meaning of a pure impulse devoid of overtones. "Connection" and "unity" both stand for "Zusammenhang": the German can imply both connections, that is, relationships between entities or parts of the same entity, and also the cohesion or unity brought about by these connections. The latter is its usual meaning in these lectures, but at the end of the second series Webern even refers to "unities". "Shape" = "Gestalt". At different points the word could have been translated as "feature", "idea", "form", "structure" and "content", so that a single English equivalent was clearly impossible; for the sake of consistency the ugly but literal "shape" has been used. The translator's resolve to adhere to the English form was strengthened by the consideration that in "Die Reihe", to which this volume is in a sense a companion, the same forms are used.

Not until long after the war and Webern's tragic end could I go through the archives of the periodical.
who was very characteristic that Webera should have called both cycles paths. as were frequent long pauses and deep intakes of breath.PREFACE unnecessary to justify publication of the sixteen lectures given by Webern and 1933 in a private house in Vienna. The extraordinary brevity of some of the texts. Here the chronological order of the two cycles of eight lectures has been reversed. It is He. We wanted to print them verbatim in the musical periodical 23 ". before an audience paying a small entrance fee. which were safe in Switzerland. what does need explaining is the long delay in publishing them." " always under way. for objective reasons that will immediately be obvious. Universal Edition at once agreed gladly to my proposal that the lectures should now be published. and thus of his wonderful and pure personality. My friend Dr. and it that Ploderer's transcripts. is explained by the fact that on those evenings Webern spoke less and played whole works or individual movements on the piano instead. The occasional repetitions were used quite consciously by Webern to intensify and heighten his remarks. In this way there is a natural progression from the elementary ideas treated in 1933 to the complex circumstances of twelve-note music sketched in 1932. only a few obvious stenographic errors have been corrected. which was the result of unusual circumstances. But the periodical's small circulation It is early in 1932 temporarily prevented their publication. a Viennese lawyer who took his own life in September 1933. and later their sharp attacks on the cultural politics of the Nazis would have exposed Webern to serious consequences. which I published in Vienna at that time. also came to light. In this form they offer not only their own valuable contents but also a highly life-like idea of Webern's curiously drastic. which reconciled high erudition and the keenest artistic thinking with an almost child-like expression of feeling. Rudolf Ploderer. 
was also a close personal friend of Webern. and took down the lectures in short" hand. particularly from the 1932 cycle. First ** . They are here reprinted exactly according to the shorthand notes. unforced way of talking. All this was an essential factor in the unprecedented urgency of his lectures and the shattering impression they made on all their listeners. already quite yellow with age." wanted to show others the way too. Everything arbitrary or illusory falls away: here is necessity. " wants to express. and removed from all human arbitrariness. Goethe's remark about the art of antiquity. These lectures are handed down to posterity as a reflection of those experiences as a token of gratitude for all the beauty and profundity he gave us by precept and example. but only for those " equipped with creation. And this leads us to the view that the things " " aesthetic but are determined by natural treated by art in general are not laws. he would then reveal the law governing the onward course of what was at present new. also follows " the same line: These high works of art were at the same time brought forth as humanity's highest works of nature." is Here Webern adopted Goethe's view. let him hear! there controlled . according to true and natural laws." " he wanted to show what had at various times over the centuries been new in music. He wants to smooth the way to the great masters of music. we must strive to discover the laws according to which nature. tends to produce and does produce when she can. in the particular form of human nature. in its particular " form man ". as documentation of his lofty spirit. meaning that it had never been said before. as a monument to his noble humanity. here is God. Just as the researcher nature in general into nature strives to discover the rules of order that are the basis of nature. It could be a matter only of to which nature in general. 
Free fantasy of that kind can only be countered by saying that not a word here which Webern did not himself speak. which Webern quotes. in the fiery yet way that made each meeting with him an unforgettable experience. He who has ears to hear." Man is only the vessel into which poured what *' Just as Goethe defines the essence of colour as natural law as related to the sense of sight." Webern wants to see sound appreciated as natural law in relation to the sense of hearing. and that all discussion of art can only take place along these lines. which he explained with copious " getting to know the laws according quotation. Willi Reich . From the laws that resulted in the course of this. is productive. this realisation and with reverence for the secret of artistic The musical of Webern's is literature of recent years may have given many readers an idea spiritual personality quite different from the one that emerges from these lectures. for people not professionally con- cerned with music. what is the value. started so late I begin by outlining my plan. A Fackel "=" Torch. which affects both language and speech. for laymen. it's immensely important and we must clearly be agreed about it that it would be foolish to set about dealing with this material. the material that they are constantly using. so to speak. and we can treat this as a starting point. of getting involved with these disciplines that are self-evident to the musician? What value can it have? Here I want to refer to Karl Kraus' essay on language in the last issue of Die Fackel.THE PATH TO THE NEW MUSIC I think we go about it this way was originally it eight times to discuss things. but it is all the same. as if the value involved were aesthetic. which we handle from our earliest years. " Here is Karl Kraus: The practical application of the theory. what is the point. From 1911 onward Karl Kraus wrote all of it himself. so long as they are alive " In the last sentence he even says about language." . 
at the risk of boring the nothing I can do about that. then. What he says is that our concern with language and the secrets of language would be a moral gain. This guarantee of a moral gain lies in a spiritual discipline which ensures the learn to serve her!" * ' Viennese periodical published by Karl Kraus. I want to take as broad a view as possible and my first question will be this. Karl Kraus says in this essay how important it would be for people to be at " home with and able to talk."* Everything in it can be taken literally as applying to music. As it has all to last three months we shall be meeting I expect many of you have no professional contact with music. and with it the sphere whose riches lie beyond what is tangibly useful. But perhaps they will be interested. would never be that he who learns to speak should also learn the language. not language. I want my lectures to take these " " there's better-informed people into account. Not. but that he should approach a grasp of the wordshape. too. We must say the same! We are here to talk about music. and that I should talk to you as laymen. it first appeared in 1899 and existed for over thirty years. for a layman (of course I take for granted that musicians already know it all) so. because we want to be artistic snobs and dilettantes. Let man Kraus says and note this very carefully. and which are convincing.. can only aim at proving these rules to some extent. in a broader context. . In the introduction to his Theory of Colour. Nothing would be more foolish than to suppose that the need awakened or satisfied in striving after perfection of language is " It is better to an aesthetic one. these elements." So it goes on. I quote them so that we shall be at one about our basic assumptions.. You see. That is to say. it does " not come about as I want to paint a beautiful picture. that happens toobut it's not art. the plans behind her pitfalls." 
Goethe speaks aphoristically ** of the impossibility of accounting for beauty in nature and art . We want to sense laws . one would have to know them.utmost responsibility toward the only thing there is no penalty for injuring for all language and which is more suited than anything else to teach respect the other values in life . that in itself will have achieved something positive." But Goethe sees this as almost impossible but that doesn't make it less of a necessity " to get to know the laws according to which nature in general. that what we regard as and call a work of art is basically nothing but a product of nature in general. in the particular form of human nature. but that it is all the same. *' What was that? Goethe sees art as a product of nature in general. to be spiritually I involved! Now do you begin to see what What we of music.. Here I want to quote to you some wonderful lines by Goethe.. ladies and gentlemen. What is this "nature in general?" Perhaps what we see around us? But what does that mean? It is an explanation of human productivity." Now 10 . with the riddles behind their rules?" Just this: to teach them to see chasms in truisms! And that would be salvation . to me at least.. and based on rules of order. or I'm getting at? discuss should help you to find a means of getting to the bottom let us say the only point of occupying yourselves in this way is to get an inkling of what goes on in music. there is no essential contrast between a product of nature and a product of art." And so that we do not imagine we can learn to command: "To teach people to see chasms in truisms that would be the teacher's duty toward a sinful generation. poem." " What value can there be in laymen getting involved with said earlier.. and.. you're able to look at certain manifestations in present-day music with a little more awareness and critical appreciation. than of commanding her. which we shall be carrying out." . what it is. what art of any kind is. 
sentence after sentence! dream of plumbing the riddles behind her rules. particularly of genius. And if. write a beautiful and so on and so forth. which could not be more general. taking the particular form human nature. Yes. which must be fundamental to all the things we shall discuss. tends to produce and does produce when she can .. Perhaps for the moment is therefore music too. I prefer to speak quite generally and say all art. and our whole investigation of this material. when I've drawn your attention to various things. we must strive to discover the laws according to which nature. To put it more plainly. He spoke of the art of antiquity: These high works of art were at the same time brought forth as humanity's highest works of nature. here is necessity. What I mean by that must be clear to you from those Goethe sentences. at the mystery they contain. No trace of arbitrariness! Nothing illusory! And I must quote still another " passage from Goethe. Basically this is the same as colour and what I have said about it. Perhaps that's enough for the moment to show you my point of view and to convince you that things are really like that. that music is natural law as related to the sense of hearing. here is God. It's natural that when one approaches and looks at and observes great works of art." Since the difference between colour and music is one of degree." but that it is a matter of natural laws. But some of them have already been our craftsman's method. whether as believer or unbeliever. not of kind. about music: hi the craftsman's if he is to be capable Another quotation from Goethe." is productive. man is only the vessel into which is poured what "nature in " wants to express. imagines. You know Goethe wrote a Theory of Colour ". believing. with which art has to do. in the same way one has to approach works of nature. " in its particular form man.. we have not yet even given a definite explanation of what colour in fact is . But it is surely the truth. 
one can say that music is natural law as related to the sense of hearing. are not " aesthetic. specific and applied in what I like to call To be method with which the musician must concern himself of producing something genuine. which is why I say that if we are to discuss music here we can only do it while recognising. recognised. Everything arbitrary or illusory falls " " away. he tries to fathom why it is that everything has a colour. because it expresses our line of thought " so wonderfully.. We shall again have to strive to pin down what is necessity in the great masterpieces. one thing must be clear to us 11 . Here again there is nothing left but to repeat: colour is natural law as related to the sense of sight. more's the pity.. and probably not all of them are in fact discoverable. with the necessary awe at the secrets they are based on..And the works that endure and will endure for ever. according to true and natural laws. Here laws exist. one must approach them. and so on . And he " But perhaps those of a more orderly turn of mind will point out that says. And this leads us to the view that the things treated by art in general. You see I would put it something like this: just general a researcher into nature strives to discover the rules of order that are the as basis of nature." Humanity's works of nature the same idea! And something else emerges here: necessity. that all discussion of music can only take place along these lines. But whether we have yet recognised it or not. cannot have come into being as humanity. the great masterpieces. To speak more man concretely: whence does this system of sound come. in fact. You know that every note is accompanied by its overtones an infinite number. The note. namely. that we cannot conceive these laws differently from the laws we attribute to nature. a word about the title of my lectures. 
so How everything that has developed since the days of Greek music up to our own time Western music uses certain scales which have taken on particular forms. music that appears as something never said before. then in the next octave the third. As you know. and it's remarkable to see how man has made use of this phenomenon for his immediate needs before he can produce a musical shape how he has used this thing of mystery. in referring to Goethe's views. I must get down to practical matters and treat something of a more general. So new music would be what happened a thousand years ago. natural law as related to the sense of hearing. which uses wherever musical works exist? has it come about? Now. I don't know whether this is so well known to you all. "The path to the new music. So * " Style Neue und veraltete Musik oder Stil und Gedanke " (New and Obsolete Music or and Idea). How did these scales come about? They are really a manifestation of the overtone series. is today So we want to fathom the hidden natural laws in order to see more clearly what is going on today. new music is that which has never been said. because otherwise we shall misunderstand each other and because it follows directly on what we said earlier. but musical nature. touch on something quite general." Were any of you at Schoenberg's lecture?* He. but I should like to discuss it with you: how did what we call music come about? How have men used what Nature provided? You know that a note isn't a simple thing. This implies that the latter note has the same relationship with the one a fifth lower. Then we shall have covered the path to the new music. then the church modes of bygone ages.that rules of order prevail here. the octave comes first." And perhaps then we shall know what new music and what obsolete music is. that is to say it has the strongest affinity with the tonic. then the fifth. spoke of "New music. know of the Greek modes." What did he mean by that? 
Did he want to show the path to modern music? My own remarks take on a double significance when related to Schoenberg's remarks. but something complex. what new music really is. isn't it? So already we ought really to start looking here for rules or order. Western music mean We sive note. Enough of talking about art let's Now talk about nature! is What the material of music? . and for the ways the rules of order manifest themselves. . What is quite clear here? That the fifth is the first obtruI far as we know. given by Schoenberg to the Vienna Kulturbund in January 1933. just as much as what is happening Now now. the seventh. 12 . . too. But we can " follow the course of things through the centuries and we shall see also say. and if you go on. only intellectual and philosophical. yet he made the stupidest possible judgment he preferred Rossini to Mozart! When a contemporary is concerned." Again. I don't mean the broad masses. so as to see ever more clearly. " " In Parsifal Wagner switched to different spiritual territory. corresponding to a musical shape. We shall try to work things out among Last time " ourselves. equilibrium there is a balance between the forces pulling upwards and downwards. parallelogram of forces. Wagner was not musical. You scale I don't understand Japanese and Chinese music. I want to take a look at blunders by great minds! You'll have noticed already what a remarkable attitude to His ideas about music were unmusic Schopenhauer had. Goethe. " out from Karl Kraus' (he could also have word-shape " " said or linguistic form linguistic shape "). precedented. for example. blunders are easier to forgive but he was dealing with things long past a historical error. Goethe's famous meeting with Beethoven was certainly not as it's usually described. he was not a "crazy fool. to show how important it is to treat all this. all illustrious names! Nietzsche again. we set " if Here I want to digress a little. not our seven-note one. 
Our seven-note can be explained in this way. (February 20th. when they are not an imitation of our music. Nietzsche! his temper. So we get beyond material and arrive at a grasp of musical ideas. one is to appreciate musical ideas. who haven't much time for things of the mind. It's quite remarkable how few people are capable of grasping a musical idea. for example. So the overtones of the three closely neighbouring and closely related notes contain the seven notes of the scale. Nietzsche. here we have a kind of G see: as a material it accords completely with nature. Goethewhat did he like? Zelter! Schubert sends him the "Erl-King" he doesn't even look at it. for Beethoven knew his " way about in society very well. and Nietzsche his contact with 13 . in fact! And again. I should like it to be our practice that each time someone should give a brief summary of what we discussed last time. But the special consistency and firm basis of our system seem proved by the fact that our music has been assigned a special path." Of course he lost wild man. but we shouldn't imagine he was a Schopenhauer. Now the remarkable thing is that the notes of Western music are a manifestation of the first notes of this parallelogram of forces: C (GE) (DB) F (CA)." " is produced. These have different scales. and we may infer that it also came into being in this way. We shall then be able to take up more consciously where we left off. Other peoples besides those of the West have music much about it. 1933) n If we go on meeting. We see how hard it obviously is to grasp ideas in music. Take his well-known aphorism about that washes against the shores of thinking. how much I revere him but here he is music constantly making mistakes. As you listen to me now. and saw Bizet " " man. more valuable one. a single part a folk tune. so absurd that Karl Kraus should have gone wrong in this way! What's the reason? Some specific talent. you must be following some logical train of thought. 
Obviously he was forced to find a substitute. It's always the same: mediocrities are over-valued and great men are rejected. he identified the Valkyrie with Nora and he couldn't stand Ibsen. I know how to tell a banal idea from a loftier. I recognise whether I am faced by a vulgar. The Catholicism of Parsifal was the official reason for the you see. hidden there was a wild and woolly man called it was long ago I remember it. There was even something of his printed " Die Fackel. If I sing something simple.wouldn't have as the split it. a thought. a blue sky or something of the sort. without a deeper dimension. Otherwise these exceptional minds wouldn't have gone wrong! But it was precisely the ideas that they didn't understand. something extra-musical. But someone of that kind doesn't follow notes at all. this sort of nadir is unthinkable. And there was a further confusion of ideas. Herwarth Walden. They didn't even get anywhere near them! Again. I needn't say what Karl Kraus means to me. So how do people listen to music? How do the broad masses listen to it? " moods " of some Apparently they have to be able to cling to pictures and kind. Strindberg! Have you read what he says about Wagner? That all good passages are stolen from Mendelssohn. For a man like Nietzsche surely weighed every word he said and wrote. made propaganda for Kokoschka. then they are out of their depth." The most miserably amateurish stuff. banal idea that has nothing to do with whether it's a well-known idea . seems to be necessary if one is to grasp a musical idea. a musical idea there? For anyone who thinks musically. at least and that's where I hope to help you a little there's no doubt what's going on there. That whole sentence from Karl Kraus about " the shores of thinking " is so typical! Surely it's meant dis** a vague mess of feeling " paragingly. and also composed. or the " Tristan that the musical idea takes up only shepherd's melody from a little space. 
which one must have got from somewhere. doesn't everyone realise that there's M a theme. not a trace of music in or musical ideas! Yet Karl Kraus printed it! in If It's " we compare notes with the visual arts. his But most recently Karl Kraus! This is an interesting problem." This shows clearly that he is quite incapable of imagining that music can have an idea." a melody. Since he was talking about music he should not have let anything extra-musical make him break with Wagner. Is what Bach and Beethoven wrote "so 14 . If they can't imagine a green field. who was a great admirer of Karl Kraus. related notes form the notes 15 . Then This human I recall nature." Concretely. in the particular form of I art. "What ing. rightly that corresponds to the theory The laws of musical so values? " moral gain. Yes nobody wanted to be. Where something special has been expressed. My regarded as. one of his superiors asked him. then one's bound to find one's relationship to such minds entirely changed! One stops being able to imagine that a work can exist or alternatively needn't it had to exist. believe in?" You'll know what I mean by these " directions. And why is it important to take this into account? Look at the music of our time! Confusion seems to be spreading. the only question is whether the present time is yet ripe for them. Ever subtler differentiations can be imagined. in surprise. laid down by the nature of sound. So. not with all these dissonances you get nowadays!" For we find an ever growing appropriation of nature's gifts! The overtone series must be time. practically speaking. So it's from the " parallelogram of forces " of the three adjoining." But that comes later! Or." I repeat: the diatonic scale wasn't invented. "Would you perhaps be the composer. direction should we go in. notes are natural law as related to the sense of hearing. This is the one path : the way in which what lay to hand was first of all drawn upon. 
So nothing could be more wrong than the view that keeps " cropping up even today. by any " chance?" to which Schoenberg replied. as it always has: They ought to compose as they used to. as you have heard. But the path is wholly valid. a saying of Schoenberg's when he was called up. unprecedented things are happen" So there is talk of directions. complex a complex of fundamental and overtones. there has been a gradual process in which music has gone on to exploit each successive stage of this complex material. infinite. rather. Art is a product of nature in general." The second gets quoted Goethe to you. " That's the moral gain.round and about ideas? What of language which Karl Kraus form-building! is it. to give you a better idea of my approach to is why: so that you should recognise the rules of order in art just as in nature. centuries always had to pass until people caught up with it. Last we looked at the material of music and saw this rule of order. perspectives this opens! It's a process entirely free from arbitrariness. So we should be clear that what is attacked today is just as much a gift of nature as what was practised earlier. then what lay farther off. and its corollary was very simple and clear: the overtones given. a note is." When one thing Karl Kraus starts from is the an inkling of the laws. and from this point of view there's nothing against attempts at quarter-tone music and the like. so I had What to volunteer for it. Now. it was discovered. constant concern is to get you to think in a particular way and to look at things in this way. which goes on developing further. could be felt as a spice. Schoenberg was the first to put into words: these simple complexes of notes are called consonances. the disappearance of which so provokes people. only one of degree. Naturally that's nonsense. of the first primitive relationships that are given as part of the structure of a note. closest relationship just the most important overtones. 
Yet another thing must be said: in the exploitation of the scale, rules of order obviously soon appeared, and such rules of order have existed since music has existed and since musical ideas have been presented. It cannot have been otherwise. But we must understand that consonance and dissonance are not essentially different, that there is no essential difference between them; the difference is only one of degree. Anyone who assumes that there's an essential difference between consonance and dissonance is wrong. The notes that form the diatonic scale were found, not thought up. But what about the notes that lie between? Here a new epoch begins, another step up the scale, for it was soon found that the more distant overtone relationships, those that were considered as dissonances, belong to the material too. However, in the last quarter of a century the step forward has been a really vehement one, of a magnitude never before known in the history of music; one need have no doubts about saying that. We do not know what will be the end of the battle against Schoenberg, which starts with accusations that he uses dissonances too much. That's the battle music has waged since time immemorial, an accusation levelled at everyone who has dared to take a step forward. But the way one looks at it is most important.

Proof: the triad, which has played such a role in music up to now. What, then, is this triad? A reconstruction of the overtones: the first overtone that's different from the fundamental, plus the second one. That's why it sounds so agreeable to our ear and was used at an early stage. So it's something natural, an imitation of nature, resting on the natural similarity between simple and complex combinations of sound. Everything that has happened has been the ever-extending conquest of the material provided by nature, and this conquest has always been striven for.

But something else is just as important: we have already spoken before about musical ideas, and we shall deal with them later. For what purpose have men always used what nature provides? What stimulated them to make use of those series of notes? There must have been a need, some underlying necessity: to say something, to express something that couldn't be said in any other way, to express an idea that can't be expressed in any way but sound. In this sense music is a language; it tries to tell people something by means of notes. Why all the work, if one could say it in words? We find an analogy in painting: the painter has appropriated colour in the same way. How, then, have musical ideas been presented in the material given by nature? We shall try to put our finger on the laws that must be at the bottom of this.

(27th February, 1933)

III

Today my mind is not entirely on the subject because of a case of illness. We want to talk about the development of the new music, with the underlying thought that among the various trends which have come to exist there must be one that will seem to us to fulfil what the masters of musical composition have aimed at and striven for since man has been thinking musically. We hope this will teach us to distinguish as clearly as possible what in the new music can really point the way. With this object, in trying to express an idea, universally valid laws are assumed.

We have a special word for the first of these: comprehensibility. What does the actual word "comprehend" express? If you take an object in your hand, you have grasped it: you comprehend it. Something comprehensible is something of which I can get a complete view, whose outlines I can make out. But if it's a house, we cannot take it in our hand and "comprehend" it; I can only sense it. So a smooth, flat surface also makes comprehension impossible. Things alter if something at least is given: a start. But what constitutes a start? Here we come to differentiation.
As music developed, the first thing picked on was what lay near to hand, and the ever more thorough exploitation followed. I explained to you the primitive things, how the diatonic scale was acquired. This exploitation of the given material is one point; we discussed it last time. But the second point was just as decisive for these musical events: one had something to say. How, then, have ideas been formulated according to musical laws? Here we shall follow this development in its broad outlines; for us the crucial thing is not points of view but facts.

Presentation of a musical idea: what is one to understand by that? The presentation of an idea by means of notes. Something is expressed in notes, so there is an analogy with language. What did one say, and what must happen for a musical idea to be comprehensible? Look: everything that has happened in the various epochs serves this sole aim. If I want to communicate something, then I immediately find it necessary to make myself intelligible. But how do I make myself intelligible? By expressing myself as clearly as possible. What I say must be clear; I mustn't talk vaguely round the point under discussion; if I bring in something else as an illustration I mustn't wander off into endless irrelevancies, and I mustn't forget the main point, otherwise I am unintelligible. Clearly, then, this must be the supreme law: the highest principle in all presentation of an idea is the law of comprehensibility.

That's so not only in music but everywhere; I know above all that it's so in language, and as it happens in language and in painting, so it must also happen in music. What are the means of differentiation? Broadly speaking: to distinguish between what is principal and what is subsidiary, the distinction between main and subsidiary points. And here we have a further element that plays a special role, one fundamental in our discussions: hanging-together. The whole thing must hang together. We mentioned a smooth, flat surface, of which no complete view can be had; things change when we find other things that can be grasped. In the pictorial arts a smooth wall is here divided by pillars: the introduction of divisions! To keep things apart is necessary, and it gives me an initial approach to differentiation. What are the divisions for? One could say that ever since music has been written, most great artists have striven to make this unity ever clearer: unity serving comprehensibility of ideas! In the various epochs of music this principle has been respected in varying ways, and I believe that in our time we have discovered a further degree of unity, in the much-disputed method of composition that Schoenberg calls "composition with twelve notes related only to each other." We shall treat this method at the end of these lectures; composition with twelve notes has achieved a degree of complete unity that was not even approximately there before. Schoenberg even meant to write a book "About unity in music." It is clear that where relatedness and unity are omnipresent, comprehensibility is also guaranteed. All the rest is dilettantism. Everything that has happened aims at this, and these things were aimed at by us.

But today I want only to deal with one more point. We should and must talk about the space a musical idea can occupy, how much space will be necessary to make an idea comprehensible. In any case it's possible and conceivable for a musical idea to be presented by only one part: in primitive folk songs there are already similar relationships ensuring unity. And the shepherd's tune from Tristan, for example, shows that it was still possible to express so much with a single line even at a time when colossal things had already happened in music. The idea of trying to compose anything extra to "clarify" this shepherd's tune would be incomprehensible! This is something unique in later music. Let us sum up what we have broadly discussed, for this is the point whence we are to follow the path; for me the most important thing is to show how this path has unrolled.

(7th March, 1933)

IV

Ever fewer people (no, that's part of the lecture!) can nowadays manage the seriousness and interest demanded by art. What's going on in Germany at the moment amounts to the destruction of spiritual life! Let's look at our own territory: it's interesting that the alterations as a result of the Nazis affect almost exclusively musicians. That can't be chance! Imagine what will be destroyed, wiped out, by this hate of culture! They showed some distinction, certainly, and they were given their jobs because they were allowed to reach a certain level. But what will happen next? To Schoenberg, for instance? Later on it will be impossible to appoint anyone capable, even if he isn't a Jew! Nowadays "cultural Bolshevism" is the name given to everything that's going on around Schoenberg, Berg and myself (Krenek too). I don't know what Hitler understands by "new music", but I know that for those people what we mean by it is a crime. We are not far off a state when you land in prison simply because you're a serious artist; or rather, it's happened already, and at the very least one's thrown to the wolves. What will come of our struggle? (When I say "our", I mean the group that doesn't aim at external success and has made an economic sacrifice.) And even if many people who were obliged to believe in this ideology didn't really adhere to it, it was believed that things would still work out somehow. But what idea of art do Hitler, Goering and Goebbels have? It's so difficult to shake off politics; but let's leave politics out of it! It's a matter of life and death, and that makes it all the more urgent a duty to save, at the last moment, what can be saved: spiritual life faces an abyss. Will they still come to their senses at the eleventh hour? If I've been at pains to make clear to you the things that must happen irrespective of whether anyone is there or not, it was done in an entirely opposite spirit. The moment is not far off when one will be locked up for writing such things.

Now back to the subject. At the beginning, ideas could be comprehended by one part; later on, ideas were born that could not be presented in this way, so that more room had to be found and the single part had to be joined by other parts. What does it mean, that several parts have to be called upon to present a musical idea? To work it out clearly yet again: at a very early stage it was found necessary to bring another dimension into play. When several parts sound at once, the result is a dimension of depth; the idea is distributed in space, and that's the nature of polyphonic presentation of a musical idea. It wasn't a matter of arbitrarily adding another part: the idea found it necessary to be presented by several parts. One part is not enough; one part can't express the idea any longer; only the union of parts can completely express it. The first person who had this idea (perhaps he passed sleepless nights) knew: it must be so! Why? It wasn't produced like a child's toy; absolute necessity compelled a creative mind: he couldn't manage without. That isn't chance. I should like to give you proofs of it, and I find it important to take the matter farther in this light. After that there was a rapid flowering of polyphony.

Let's look back! (Schoenberg's last lecture is the stimulus here.) How did music evolve in the course of centuries; how is it growing and changing? In Western music, monodic song was the rule in Gregorian chant, which arose along with the rites of the Catholic church. (In passing: similar things are to be seen in Jewish ritual.) With monody the idea must be disposed of by the one part, unaccompanied; in fact everything is there that had to be expressed. But it was felt that this space had to be expanded: starting from the view that an idea can be presented polyphonically, since polyphony was an accepted thing, musical ideas had to be presented so that they took in not only the horizontal but also the depth of polyphony. The Netherland style developed very quickly: a great flowering of polyphony, the development of the ideas and principles we arrived at, comprehensibility and unity. We discussed the question of how much space can be assigned to the presentation of musical ideas, and we saw the answer change. But during the years when polyphony was still developing ever more richly, we see another method of presentation emerging, connected with more primitive elements, dance forms and the like, the more popular formal type, so that toward the end of the 17th century true polyphony was already at an end. (It goes as far as J. S. Bach, who is the climax and unites both methods of presentation. We shall consider later how far this music exploited the tonal field.)

But at this point interesting things happened: the "accompaniment's" supplement to the single-line main part became steadily more important, and the concept of "accompaniment" appears. What are we to understand by "accompaniment"? I don't know whether all this has so far been dealt with from this point of view, but certain tendencies of the musical functions have to be made clearer. Surely it's remarkable for one person to sing and another to "add something"! So there's a hierarchy, main point and subsidiary point: something quite different from true polyphony. Here again the idea is not exhausted by one melodic line. It was a period that limited itself to thinking of fine melodies for the voice, and to providing a supplement for the melody. We want to be quite clear that in classical music there is again an urge to express the idea in a single line, in an independent melody. So methods of presentation have alternated, and the two methods have inter-penetrated to an ever-increasing degree. It's interesting to see how things break up, how there was a return to the limits of a more primitive method after the extraordinary achievements of polyphony; and the sequel is that today we have arrived once more at a polyphonic method of presentation, the changes we saw happening in artistic production a few years ago. The final result of these tendencies is the music of our time. We set out from the seven-note scale; next we shall deal with the principles that governed the gradual exploitation of the tonal field, the natural resources of sound.
But the polyphonic epoch was superseded by another which, reduced to bare essentials, limited itself to a return to single-line melody with an "accompaniment." This is the period when the homophonic style begins, the period of Monteverdi, when opera developed. There was again the urge to cram the musical idea into one single line, conveyed by the one part, and to add constant supplements in the accompaniment, at first in a primitive way and without exploiting true polyphony. Since presentation of musical ideas developed either through a single line or through several, the all-important factors in this period must have been the ones aiming at a presentation in which one part is the most important. This transformation happened quite imperceptibly, quite gradually and without any important divisions.

And it's interesting that the function of the accompaniment strikes out along a new path, stemming from the urge to discover ever more unity in the accompaniment to the main idea: that is, to achieve ever firmer and closer unifying links between the principal melody and the accompaniment, which in its turn is developed. This method of presentation reached its climax in the Viennese classical school. In the classics there was the urge to compress the entire idea into one line: one can imagine singing a tune by Mozart, or one of Beethoven's themes, while the accompaniment supplies its constant supplements. It's the period that saw an extraordinary widening of the tonal field through a new emphasis on harmony; and the remarkable thing is that already in Bach's time the conquest of the twelve-note scale and at the same time that of harmony were achieved. So, once again: the ever-increasing conquest of the material! How can we understand the work of contemporary masters from this point of view? It's produced by all this; we cannot create works by the methods of a time further back, for we have passed through the evolution of harmony.

I should like to put it in another way, taking the broadest view. What's the easiest way to ensure comprehensibility? Repetition. All formal construction is built up on it; all musical forms are based on this principle. The basis of our twelve-note composition, too, is that a certain sequence of the twelve notes constantly returns: the principle of repetition! Let's learn this lesson from something quite simple, and then, if you like, we can jump to our own times, with all the other things that have resulted from the conquest of the tonal field. We must be clear as to how all this comes about, and must look at some examples to see how these principles have been realised from the first.

So let's go back to earlier epochs. First I shall show you something from the monodic period, from the time of Gregorian chant:

Alleluia with verse in melismatic style (8th mode)

How does that strike you? I said last time that the first principle is comprehensibility; how is it expressed here? It's astonishing the way all the principles already show up. What strikes us first? The repetition! We find it almost childish, yet play the piece again and you see: three sections, of which the second is different and the third is like the first. We find this in a melody from the 12th century! Already it formulates the whole structure of major symphonic forms: primarily a symmetrical A-B-A shape, in which the first section is repeated, while the parts also contain repetitions, exactly as in Beethoven's symphonies. The urge behind this idea is natural, such as one knows from one's own body: to say something twice, as often as possible, in order to create a shape that's as easy as possible to grasp.

(14th March, 1933)
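The principle stated in this lecture, that in twelve-note composition a certain sequence of the twelve notes constantly returns, and the canonic shapes discussed elsewhere in these lectures (repetition on a different degree, inversion, cancrizans) can be sketched as operations on a list of pitch classes. The row below and the function names are the editor's illustrative assumptions, not material from the lectures.

```python
# Sketch: canonic transformations applied to a pitch-class row.
# Pitch classes are numbers 0-11 (0 = C). The row is an arbitrary example.

def transpose(row, n):
    """Repeat the row beginning on a different degree: shift every note by n."""
    return [(p + n) % 12 for p in row]

def invert(row):
    """Alter the direction of every interval, mirroring around the first note."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def cancrizans(row):
    """Turn the line backwards (retrograde)."""
    return list(reversed(row))

row = [0, 4, 7, 2]  # hypothetical four-note motive
print(transpose(row, 2))   # same shape, a whole tone higher
print(invert(row))         # same intervals with direction reversed
print(cancrizans(row))     # the line read backwards
```

In each case the "repetition" is exact at the level of interval relationships, which is precisely what lets the same sequence return constantly while sounding different.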
From our example, then, we have a three-part form whose task was that of saying something twice; and our technique of composition has come to have very much in common with the methods of presentation used by the Netherlanders in the 16th century, but of that later.

Last time we dealt with the various epochs during which the role of musical space has varied fundamentally: history showed a constant alternation between greater and more modest demands on musical space. The monody of Gregorian chant was followed by a period of polyphony. The second point we dealt with was the combination of presentation methods in particular historical periods; we saw that there were epochs when one type of musical presentation was expressed differently, and to a greater degree, than in others. Now, what did this period do for the principle of comprehensibility? We must look at all this from two standpoints: on the one hand that of comprehensibility and unity, on the other that of the conquest of the tonal field.

We see how the tonal field was gradually covered. Do you know how this seven-note scale was used, in what forms and shapes? I mean the church modes. What are the church modes, and how did we pass from them to the diatonic scale? The church modes are built on each step of the seven-note scale: they started on particular notes of the scale, so they always contain this scale. The seven-note scale starting on C is Ionian; the one on D, Dorian; on E, Phrygian; on F, Lydian; on G, Mixolydian; on A, Aeolian; on B, Hypophrygian. That was the time when the diatonic scale developed out of the church modes. The special thing about the Ionian mode, our major, was that it had a semitone, the so-called leading-note, before the recurrence of the tonic C. It was soon found that an ending with leading-note and tonic is especially effective, and that's why the semitone also came to be introduced before the recurrence of the tonic in the other modes. So the decline of the modes happened through the addition of leading-notes foreign to the mode, extra notes, called accidentals; and since something was there that "sharpened" a note (hence the name accidental), something was there that led to chromaticism. But this meant that the modes condensed into two groups, major and minor, and that was the end of them. At the moment when only major and minor were left, the period of J. S. Bach began; by then the additions had already gone so far that all twelve notes of the chromatic scale could be used.

Now some examples. A three-part song on a French text covers the whole diatonic scale:

End of a Rondeau by Jehannot de l'Escurel

From a three-part motet by Guillaume Dufay

We see that this one ends on one note: the third is missing, for at that time there was neither major nor minor. The essential difference between those two genders lies in the third, and the third was felt to be a dissonance; nobody trusted himself to use it. Another piece ends on the open fifth. So you see again that this is all entirely in accordance with nature. Now an example from the 16th century, by Ludwig Senfl. In our example we still have the seven notes, but at the moment when the authentic seventh was replaced by the sharpened one, the way to chromaticism lay open:

End of a five-part tenor motet by Senfl
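The construction of the church modes described above, one mode built on each step of the same seven-note scale, amounts to rotating a single set of notes. A small sketch (the mode names follow the lecture's own list, including "Hypophrygian" for the mode on B; the code itself is only an editorial illustration):

```python
# Sketch: church modes as rotations of the seven-note (white-key) scale.
# Each mode contains the same notes but starts on a different degree.

NOTES = ["C", "D", "E", "F", "G", "A", "B"]
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Hypophrygian"]  # names as in the lecture

def church_mode(degree):
    """Return the scale beginning on the given degree (0 = C)."""
    return NOTES[degree:] + NOTES[:degree]

for i, name in enumerate(MODE_NAMES):
    print(f"{name:12s} {' '.join(church_mode(i))}")
```

Because every rotation contains the same seven notes, the semitone steps fall in a different place in each mode; only the Ionian rotation has its semitone directly before the returning tonic, which is the "leading-note" property the lecture singles out.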
This Senfl example contains the essential points in the exploitation of the diatonic scale: we even find a hesitant attempt to end with the third, which means an approximation to major and minor.

Now let's look at this epoch from the other point of view: what do we find as regards the presentation of ideas? In dealing with Gregorian chant I've already pointed out that the principle of repetition is enormously important in enhancing comprehensibility. We may ask: how can the principle of repetition be applied when the idea is carried by a single line? And we find traces of it even in early polyphonic music; here we see the beginnings of polyphony based on this principle of repetition. How is it possible for several parts to sing the same thing one after the other? That's the essence of canon. At first it isn't an exact canon, but at the outset there was always the need for each part to enter as the preceding one had done: initial imitation! The reason for the successive entries is precisely to create a relationship among the parts, in the interests of comprehensibility; but the fact that they sing the same thing at different moments demands unusual cleverness if unity is to result. In our example* the third, fourth and sixth parts sing the same thing.

* A music example must be missing here, since Webern refers to six parts whereas the Senfl passage is in five.

Resourcefulness soon went further. Not only rhythms were repeated, as in sequences, where a certain rhythmic succession recurs, but the whole course of the melody. By "motives" we mean the smallest independent particles in a musical idea; the motive is its primeval form, and how do we recognise one? Because it's repeated! Something can be repeated in the same way or in a similar one, the same but under slightly altered conditions: the series of notes repeated but beginning on a different degree of the scale; with the direction of the intervals altered (inversion); or with the line turned backwards (cancrizans). Earlier we said that unity was at first produced through inversion and reversal, but then we hadn't yet begun to discuss rhythm.

What can we conclude from this? What are we to make of it? We already see in this epoch that composers' every effort went to produce unity among the various parts, the closest conceivable relationship between several parts, and a relationship is indeed produced among them. Always the urge toward the greatest possible unity meant that the opening motive took on greater importance. The repetition of motives, and the ways in which it was managed, we find in the next epoch, from Bach till the development of the classical forms; the climax is surely found in Beethoven. It's this next epoch that gives us an insight, and I'd like to go through with you the forms produced by the urge toward the clearest possible presentation of ideas.

We see something similar in folk song, where everything is based on repetition. In Gregorian chant everything is much more amorphous, less easy to grasp than here; if as a contrast I play the quite banal melody "Kommt ein Vogerl geflogen", how much firmer the shape of everything is! Why is it so easy to grasp? Because it's simple repetition. This way of shaping the melody and laying out the notes, such as occurs above all in folk song, is only one of the forms in which an idea could be presented along these lines, and is in fact the more primitive one. The urge to produce order, to introduce order, can be felt constantly; since there is the possibility of repetition, it's been exploited in various ages to express as much as possible and to accommodate a rich store of musical shapes. But soon the need was felt to shape things still more artistically, and a form of thematic structure arose, the period, which provides one of the most important forms in which a musical idea can be presented, and through which the most unprecedented ideas were later expressed. It's rather like this:

J. S. Bach, Sarabande, 5th English Suite
Beneath it all is the urge to express oneself as comprehensibly as possible, to say the same thing twice. Unlike the period, the eight-bar sentence isn't built on a four-bar structure (the normal form, especially in Beethoven) but on one of only two bars: instead of four bars' antecedent and four bars' consequent there are two bars immediately repeated, and since there was immediate repetition, something new could and had to follow at once. The way this happened was that motives were developed; but development is also a kind of repetition. The period and the eight-bar sentence are at their purest in Bach; in his predecessors we find only traces of them, and not even in Haydn and Mozart do we see these two forms as clearly. So we see that even in the fullest and purest musical structure we can find quite simple forms; it's often hard to make out those basic elements, but everything can be traced back to them. These two forms are the basic element, the basis of all thematic structure in the classics and of everything further that has occurred in music down to our time, serving the greatest possible unity. We shall talk about this next time.

(20th March, 1933)

VI

I haven't much time left and must see that we get to the end of the matter; there are three lectures left. Last time I talked about the period and the eight-bar sentence, and here I want to say only one more thing: it can be seen that these forms have gone on providing the basis for all construction of themes, because everything that happened after the high classical period, particularly in Schumann, Brahms and Mahler, is also based on these forms. We also looked at two examples from the great days of polyphony, and saw, let's say this quite clearly, that the highest flowering of polyphony was reached with the Netherland school. Later we see all this polyphony come to an end and be replaced by something quite different, the development of forms in which presentation of musical ideas calls for a single line; and now we find that this tendency, the so-called Netherland technique, is again gradually taking possession of things, that a new polyphony is developing. It's a long way from the Netherlands school to the present, but you'll see that this remarkable course of events unrolls surprisingly smoothly, and I want to go on and show you how this presentation was perfected.

Now we must look at the further conquest of the tonal field; it will throw light on the music of our day. The two tonal genders*, major and minor, were predominant down to our time; but for about a quarter of a century a new music has existed that has given up this "double gender" in its progress toward a single scale, the chromatic scale.

* N.B. In the original, "gender" = "Geschlecht" and "mode" = "Tonart", either of which would normally also be translated as "mode".

So when did all this come about? Let's first discuss when and where the major-minor genders became established. It was the time after the Netherland school, the epoch I've mentioned several times already, marked by the rise of the Italian opera. Here indeed the remarkable thing is that the need for a cadence was what led to the preference for these two modes: as part and parcel of this cadence came the urge to define the key exactly, and with it the need for the leading-note that was missing in the other modes. The leading-note was then transplanted to the other scales, so that they became identical with the two enduring ones. And now we see how gradually two of these scales came ever more to the fore and pushed the others aside: the two whose order is that of present-day major and minor. So accidentals spelt the end for the world of the church modes, and major and minor finally became established during this period.

To recapitulate: first men conquered the seven-note scale, and this scale became the basis of structures that led beyond the church modes; the world of our major and minor genders emerged, and these were predominant down to our time. Now we must look at the further development, at how major and minor came in their turn to be superseded. As in the dissolution of the church modes, the destructive elements came of the urge to find a particular type of ending: history repeated itself in major-minor tonality. The cases are quite analogous! As one tried to end in an ever more complex way, the very end of a piece came to contain a number of chords that by their nature couldn't be clearly related to one single key. Wandering, ambiguous chords appeared, and they were also introduced in the course of the piece, so that its course became steadily more ambiguous, until a time had been reached when these wandering chords were the ones most used, and the moment came when the keynote could be given up altogether.
And it was just there, at the cadence, that the dissolution had begun: through this cadential function, history repeated itself in major-minor tonality. Notes were introduced that didn't belong, once again accidentals, to give an ever richer, more interesting shape to major and minor. From there one ranged ever farther abroad; we drew on chords that were steadily farther removed, until the new accidentals came to predominate, and this led to the situation where major and minor were done for. So the conquest of chromaticism came about in the same way as that of major and minor, based on progression by semitones. The semitone was indeed also there in diatonic music, between the mediant and subdominant and between the leading-note and tonic; but now it stood between all the tones. That's how it happened again in our time: it is the emergence of the world in which the twelve notes hold sway, or rather that world is already there. I shall only note how it became possible for all the things of today to happen. Dominants were produced on each degree of the scale, so-called "inter-dominants", and this had already happened in the chorale arrangements:

[Music example: J. S. Bach, chorale "Christ lag in Todesbanden"]

What are we to make of this? What has happened, and what plays the main role here, once again at the end of the piece? We mustn't look at it aesthetically. Here already is a piece wholly based on what we call chromaticism.

Now let's look at the other point, the presentation of ideas. What happened in this epoch? I've already mentioned that immediately before Bach polyphony broke off and the development of a melodic type of music ensued. The most important point for us is that the forms fundamental in further developments grew up in association with these influences. Here something played a role that mustn't be overlooked, in contrast to strictly musical thinking: folk music and dance forms (the suite form belongs here too). These dance forms became an important influence through their connection with instrumental music. I want merely to hint at what happens during this development: it is the emergence of instrumental music, associated with the opera; the harpsichord, and in Bach we find not only the organ but the lute. I mention this because it introduced elements into music that came from a different sphere and gained a great influence on all the further developments. So here again Bach is involved at a vital stage in the development of music.

What is a melody of this kind like in Bach?

J. S. Bach, St. Matthew Passion, aria "Blute nur, du liebes Herz"

Here already we have the essence of the eight-bar sentence in blueprint: a figure, then two variations of it, then it's repeated again, with its instrumental accompaniment. Development of an idea can be seen quite clearly, and the whole can be described as an eight-bar sentence, already the form found most clearly in Beethoven. In fact in Bach's melodies and those of his time we find only the seeds of the development that reached its climax in Beethoven; it's remarkable to observe that a pure culture of the period and the eight-bar sentence is not found until Beethoven, not so clearly even in Mozart and Haydn. (It isn't so important for you really to understand this form completely; you need only grasp what was aimed at in its use.) In any case the eight-bar sentence has been used more than the period; it's the form most favoured by post-classical music.

Now I want to show you a passage from a Beethoven sonata:

L. v. Beethoven, Piano Sonata Op. 2 No. 1, 1st movement

Here again we see a figure that's repeated and developed, like the links of a chain. And a later example:

Arnold Schoenberg, Verklärte Nacht, 1st violin, 2nd subject in E major

Here we find the periodic version again*. Nothing new has been added, but the forms have been handled ever more freely, which of course makes them harder to understand. What is this "freer treatment"? In one case the repetitions are literal and without gaps, whereas later one became freer and left out certain intermediate stages, thinking, metaphorically: "It's happened once already, so I can jump to something else without carrying the development any further. On, away with it!" Things were more immediately and abruptly juxtaposed. But the same form is always at the bottom of it. Curves became longer, melody ever more broadly spun out, since the development brought about by means of one motive led ever further afield.

* i.e. the first six bars return (in varied form) at the end of the example.

(27th March, 1933)

Now we come straight to the most recent times, because I still want to talk about the new music itself: the very style which has existed for about twelve years and which Schoenberg himself has called "composition with twelve notes related only to each other." These lectures are intended to show the path that has led to this music, and to make clear that it had to have this natural outcome. I hope you already have a general picture. First, in headlines: the diatonic scale; the greatest flowering of polyphony and the destruction of the church modes; on the other hand major and minor, and the conquest of chromaticism; and all that is only vocal music. Instrumental music crept into the picture, and a new expressive form arose in association with the folk song. In the polyphonic flowering, repetition (inversion, cancrizans, altered rhythm) went so far that in the late Netherland school a whole piece would be built out of a sequence of notes with its inversion. Here the element of comprehensibility is important above all: to introduce ever more unity! That's been the reason for this kind of composition, since everyone has the same thing to say, and unity, a natural law as related to the sense of hearing, increases comprehensibility. So we see ever greater conquests in the field provided by sound, as Goethe would have said, and the urge to create ever more comprehensibility. We shall see how these elements have gone on developing and have led to the last decade's new growth. Last time I emphasised
But let's sum up again in relation to form, and here I want to say expressly what new music I want to discuss: the music that has come about because of Schoenberg and the technique of composition he discovered, the technique that his pupils have continued: composition with twelve notes related only one to another. Everything else is at best somewhere near this technique, or is consciously opposed to it and thus uses a style we don't have to examine further, since it doesn't get beyond what was discovered by post-classical music.

(27th March, 1933)

VII

Today let's examine the new music with an eye to the two factors we've recognised as most important: the conquest of the tonal field and the presentation of ideas. And the conquest of the tonal field? After the classics came the break-up of tonality. The greatest strides have been made, through ever-increasing unity, by the very music that Schoenberg introduced, composition with twelve notes related only one to another. It's the final product of the two elements we've observed so far. People are wrong to regard it as merely a "substitute for tonality": more unity is impossible.

First I want to talk about the presentation of ideas. Composers all began striving to create forms that made it possible to express their urge for clarity, and a Schoenberg theme, too, is based on those forms: the period and the eight-bar sentence. Certainly a Mahler symphony is put together differently from one by Beethoven, but in essence it's the same. In the developments since Beethoven the eight-bar sentence has been used more than the period; it's the form most favoured by post-classical music, by Brahms, etc.

I remarked recently that instrumental music arose with the homophonic style of the Italian opera, at the beginning of the seventeenth century, and indicated the forms that developed in connection with the popular type of dances and so on; playing on instruments became an art. Here I'm thinking particularly of Suites by Bach's forerunners and Bach himself, with Minuet, Sarabande, Gigue, etc., headed by a prelude and with a song-like movement, the Air. What happened then? Here we already see the main traits of forms later manifest in the symphony: the Air, which is transformed into the Adagio second movement in Beethoven (he used it particularly in his Adagios, which turned into the rondo); the true sonata movement, which arose at that time and became the most subtly worked and richest movement of the cycle; and the light final movement.* Most of these dance forms were later cast aside, and there remained only the Scherzo (which Haydn still often called a Minuet). They are the cycles that have developed in classical symphonies and chamber music. So the development lasted about two hundred years, from Bach's predecessors to Beethoven, and led to those classical forms that found their purest expression in Beethoven. They are the forms in which principal subjects are cast. The period derives more from song; so in a sense it derives from what's most generally comprehensible.

Later, after Beethoven, that's to say in Schubert, Mendelssohn, Schumann, Brahms, Bruckner, Mahler, it isn't easy to relate pieces to those formal types, but they are there all the same. The modern symphony makes use of all these forms, and the music of our day, the kind created by Schoenberg, does just the same; so does the opera, insofar as it uses self-contained numbers. It's a fact, and nobody can disprove it, that everything which has happened since then can be traced back to these forms.

But one thing I said the other day must be completed: this new music is the direct result not of the development of the tonal field and its ever-increasing exploitation alone; the other factor was also present, the presentation of ideas, just as it was after the Netherland polyphonic style had passed its climax. That's why it was important for me to concentrate my remarks on these two factors.

* Ger. "Kehraus."
We must be clear about what happened here: the aim was always the presentation of an idea, in the interests of comprehensibility, with the desire for maximum unity. Beethoven concludes the development of these forms in which ideas were presented: the period and the eight-bar phrase. They are the forms in which principal subjects are cast, and they are the forms that occur in opera. The last few years have tried rather to adhere very strictly to these forms, and nobody racks his brains to find anything new.

But one movement is still missing: the first. What, then, is implied in the presentation of an idea? To put it schematically: an upper part and its accompaniment. Forms were the result of this distribution of space. Very soon there was an attempt to derive things and partial forms from the principal theme; the development of the motives contained in the shapes of the upper part was especially expanded. Nothing was to fall from heaven; everything was to be related to what was already present in the main part. And here the classical composers often arrived at forms that recall those of the old Netherlanders in their canon and imitation. I should like to mention some of these. I've spoken of the development section as the part of the work specially created so that the theme could be "treated."

We've also referred to Bach in connection with the enrichment of the tonal field. For everything happens in Bach: the development of cyclic forms, the conquest of the tonal field; and one form of presentation was particularly developed: the fugue. And here we must return to something earlier! It's important that Bach's last work was the "Art of Fugue," a work that goes wholly into the abstract: music lacking all the things usually shown by notation, with no sign whether it's for voices or instruments, no performing indications; almost an abstraction, or, as I prefer to say, the highest reality! (But in fact that has only just become possible again.) All these fugues are based on one single theme, which is constantly transformed: a thick book of musical ideas whose whole content arises from a single idea! What does all this mean? Everything is derived from one basic shape, from the one fugue-theme. Now, how does this happen? By repeating the theme in various combinations, by introducing something that is "thematic"; the theme unfolds not only horizontally but also vertically, that's to say, a reappearance of polyphonic thinking. This is a staggering polyphonic thought: horizontally and vertically, everything is derived from the theme. Here a polyphonic form of musical thought developed quite aside from vocal music, for the fugue derived from instrumental music; so it's very remarkable that what we know as fugue didn't in fact exist at the time of the Netherlanders. It came in Bach's time, rather late.

We have already frequently mentioned the effort to achieve an ever tighter unity. How has this urge made itself felt since the time of the classical composers? Without theoretical ballast we could put it like this: at an early stage composers began to exploit, and extend to the rest of the musical space, the shapes present in the upper part. Classical composers' symphonic form also resorted to this. And now we find this creeping into later forms, in the development section; this now became the arena, as the fugue was earlier. The "thematic" gradually shows itself in the accompaniment, too; an alteration, an extension of the original primitive forms has begun. So we see that this type of thinking, the desire to work "thematically," has been the ideal for composers of all periods. (Wagner's leitmotives are perhaps another matter. For example, the Siegfried motive crops up many times because the drama calls for it; there is unity, but only of a dramatic kind, not musical, thematic. Naturally Wagner often also worked in a strictly thematic way; moreover he, of all composers, played a great part in creating musical unity linked to that of the drama.)

To develop everything else from one principal idea: that's the strongest unity imaginable! And surely the maximum unity is when everybody sings the same thing, as with the Netherlanders, where the theme was introduced by each individual part, varied in every possible way, with different entries and in different registers. But in what form? That's where art comes in! The watchword must always be: "Thematicism, thematicism, thematicism!"

One form in which unity plays a special role is the variation. Think of Beethoven's Diabelli Variations. At times great composers have chosen something quite banal as the basis of variations. Again and again we find the same desire to write music in which the maximum unity is guaranteed. Later, variation found its way into the cyclic form of the sonata, particularly in Beethoven's second movements, but above all in the finale of the Ninth Symphony, where everything can be traced back to the eight-bar period of the main theme. This melody had to be as simple and comprehensible as possible; on its first appearance it's even given out in unison, just as the Netherlanders started off by writing at the top the five notes from which everything was derived. Constant variations of one and the same thing! Let's pursue that: Brahms and Reger took it up. Bach, too, had already written in this way; in fact Bach composed everything, concerned himself with everything that gives food for thought!

But the accompaniment also grew into something else; composers were anxious to give particular significance to the complex that went together with the main idea, to give it more independence than a mere accompaniment.
Here the main impetus was given by Gustav Mahler; this is usually overlooked. In this way accompanying forms became a series of counter-figures to the main theme: that's to say, polyphonic thinking! So the style Schoenberg and his school are seeking is a new interpenetration of music's material in the horizontal and the vertical: polyphony, which has so far reached its climaxes in the Netherlanders and Bach, then later in the classical composers. There's this constant effort to derive as much as possible from one principal idea. It has to be put like this, for we too are writing in classical forms, which haven't vanished. All the ingenious forms discovered by these composers also occur in the new music. It's not a matter of reconquering or reawakening the Netherlanders, but of re-filling their forms by way of the classical masters, of linking these two things. Naturally it isn't purely polyphonic thinking; it's both at once.

So let's hold fast to this: we haven't advanced beyond the classical composers' forms. What happened after them was only alteration, extension, abbreviation; the forms remained, even in Schoenberg! But something has altered, all the same: the effort to produce ever tighter unity, and thus to get back to polyphonic thinking. Brahms is particularly significant in this respect; also, as I said, Gustav Mahler. If you ask, "What about Bruckner and the others?" I should say, "Nobody can do everything at once." In Bruckner it's a matter of conquering the tonal field: he transferred to the symphony Wagner's expansions of the field. For the rest he was certainly not such a pioneer; but Mahler certainly was. With him we reach modern times.

Now I'd like to take a quick look at the other point, the expansion of the tonal field!
Last time I quoted a chorale harmonisation by Bach, to show that something already existed in Bach that wasn't superseded by the later classical composers, nor even by Brahms: it's impossible to imagine anything more meaningful than these constructions of Bach's! Beethoven and Schubert never did it any better; on the contrary, perhaps they found other things more important. What's the point of these chorales? To provide models of musical thinking based on the two genders,* major and minor, which were fully developed by then! Here I have 371 four-part chorales by Bach; there could just as well be 5,000! He never got tired of them. For practical purposes? No, for artistic purposes! He wanted clarity! And yet it was this which sowed the fatal seeds in major and minor. Just as in the church modes the urge to create a pleasanter cadence led to the semitone, the leading-note, and everything else was swept away, so it was here, too: major and minor were torn apart, pitilessly; the fatal seed was there!

Why do I talk about this so much? Because for the last quarter of a century major and minor haven't existed any more! Only most people still don't know. It was so pleasant to fly ever further into the remotest tonal regions, and then to slip back again into the warm nest, the original key! And suddenly one didn't come back; a loose chord like that is so ambiguous! It was a fine feeling to draw in one's wings, but finally one no longer found it so necessary to return to the keynote. Up to Beethoven and Brahms nobody really got any further, but then a composer appeared who blew the whole thing apart: Wagner. And then Bruckner and Hugo Wolf; and Richard Strauss also came and had his turn (very ingenious!), and many others; and that was the end of major and minor. Summing up, I'd say: just as the church modes disappeared and made way for major and minor, so these two have also disappeared and made way for a single series, the chromatic scale. Relation to a keynote, tonality, has been lost.

But this belonged in the other section, on the presentation of ideas. The relationship to a keynote gave those structures an essential foundation; it helped to build their form, and in a certain sense it ensured unity. This relationship to a keynote was the essence of tonality. As a result of all the events mentioned, this relationship first became less necessary and finally disappeared completely; a certain ambiguity on the part of a large number of chords made it superfluous. And since sound is natural law as related to the sense of hearing, and things have happened that were not there in earlier centuries, and since relationships have dropped out without offending the ear, other rules of order must have developed; we can already say a variety of things about them. Harmonic complexes arose, of a kind that made the relationship to a keynote superfluous. This took place via Wagner and then Schoenberg, whose first works were still tonal. But in the harmony he developed, the relationship to a keynote became unnecessary, and this meant the end of something that had been the basis of musical thinking from the days of Bach to our time: major and minor disappeared. Schoenberg expresses this in an analogy: the double gender has given rise to a higher race!

(3rd April, 1933)

VIII

Today we shall follow the final stage of the development, and first we shall revert to the point about the dissolution of major and minor, the disappearance of key. Last time we already looked at some of this when we discussed the starting point of the dissolution; I mentioned that even in Bach's chorale harmonisations tonality was dealt a severe blow. It's very difficult to make the recent final events understandable; but it's important to talk about them, because lately people have tried to make out that this state of affairs is a quite new invention, although it has existed for a quarter of a century.

* or "modes"; see p. 28. L.B.
I don't want a polemic, but just now there's a lot of talk about this, in connection with political developments of course, and things are made to look as if it were all something foreign and repellent to the German soul, as if the whole thing had boiled up overnight. Quite the contrary: it's been stewing for a long time, a quarter of a century already; it's something that's been going on ever so long, so that it's become impossible to put the clock back (and how would one set about it, anyway?). I don't know whether there was the same weeping and wailing over the church modes; anyway, just at the moment there's a frightful hubbub about tonality. We must get this quite clear, so that you know whether to believe me or not! I wanted to show that the process in this case is quite analogous with what happened before. Above all, I say it because recently in the Austrian Radio's weekly, that's to say before the widest possible public, a Mr. Rhialdini (he'd be another of those German composers who are merely re-writing the old music) has written that at present people are squabbling over whether tonality should be given up. However that may be, we see it quite plainly: nobody has gone beyond our style, and we don't need to squabble! As I said last time, major and minor are dissolved too, and there's no need to discuss the others.

"Dissolution of tonality": in connection with Bach's special type of harmonisation, the cadential points were what contained the seeds of destruction. The church modes disappeared in an analogous way, and by analogy major and minor are dissolved too. How did this happen? Music came to use notes foreign to the scale of the key concerned. C major doesn't contain F sharp; so if I use F sharp in C major, perhaps as part of the dominant of the dominant, then I have broken out of the key, but still related to the tonic, so we could still rejoin the key. (In popular usage it's a matter, starting from C major, of using the black keys.) First came the ambiguous chords, then the inter-dominants, with the tendency to introduce other degrees with their dominants. For example, in C major the supertonic could be D or D sharp; suddenly every degree was there twice, and when every degree was doubled, one already had the twelve notes. Later this happened faster; it was part-writing, above all, that led to chords of that kind.

There was also the development of harmony. The original consonances in the triads were developed into seventh-chords, dissonances, which at first appeared only cautiously, in passing or prepared. Ambiguous chords were produced, for example the diminished seventh, and others built out of superimposed thirds, such as the augmented triad. The minor subdominant, F-A flat-C, also plays a part here; the augmented five-six chord belongs here too. Even the so-called "Tristan chord" occurred before Wagner. It plays a great role in Wagner but isn't really anything so terrible: it happens in any minor key as a diatonic chord on the mediant, but only in passing, and not with the significance and the kind of resolution it has in Wagner. With these wandering chords one could get to every possible region. That is a modulation; but I don't want to treat it as that, rather to relate it to the keynote, which is destroyed as a result. The ear gradually became accustomed to these complex sounds; then the chords were still further altered (certain notes in them were sharpened or flattened), and finally all these chords were felt to be natural and agreeable. So we got to a stage where these new chords were almost the only ones used, and, through the ever-increasing conquest of the tonal field and the introduction of the more distant overtones, there might be no consonances for whole stretches at a time. But we still related them to the tonic. Then things of this kind piled up more and more, and finally we came to a situation where the ear no longer found it indispensable to refer to a tonic.

When is one keenest to return to the tonic? At the end: the whole thing, everything that has occurred, is to be understood in this way or that, in this or that key, only when one has ended. It's just in Beethoven that we find this very strongly developed: the tonic is constantly reiterated, especially toward the end. No effort is too great when it's a matter of shaping this ending so that it really strikes home. What happens when I try to express a key strongly? The tonic must be rather over-emphasised, so that listeners notice, in order to make it stand out enough; otherwise it won't be enough to give satisfaction. Only then can one say, "The piece is in this or that key."

But ultimately, since matters had gone so far that the tonic was no longer necessary, the exact opposite became a necessity: we felt the need to prevent one note being over-emphasised, to prevent any note's being repeated. So there came to be music that had no key-signature, and where for long stretches it was not clear what key was meant: "suspended tonality." The ear was satisfied with this suspended state; one felt "still in the air." But there was still a time when one returned to the tonic at the last moment. Then, however, since there was no tonic any more, nothing was missing when one had ended: the flow of the complex as a whole was sufficient and satisfying, for there was nothing consonant there any more. One can also take the view that even with us there is still a tonic present (I certainly think so), but over the course of the whole piece this didn't interest us any more, and one day it was possible to do without the relationship to the tonic.

This moment happened in about the year 1908, now 25 years ago: a jubilee, no less! Arnold Schoenberg was the man responsible, and we took part, so I can speak from personal experience; the links with the past were most intense. You mustn't imagine it was a sudden moment. Now I must carry on the tale from my own experience.

The chromatic scale came to dominate more and more: twelve notes instead of seven. The music used not only the white notes of C major but the black ones as well; there was the dominance of chromatic progressions. The ear found it very satisfying when the course of the melody went from semitone to semitone, or by intervals connected with chromatic progression. Of course, these things happened not by theory but by listening. But it was soon clear that hidden laws were there, bound up with the twelve notes, on the basis of chromaticism, not of the seven-note scale.

Now the need to prevent any note's being repeated, taking advantage of all twelve notes, brought up a particularly tricky point, which is hard to explain. What does one make of it? How are we not to repeat? When is a repetition not disturbing? I said the composition would have to be over when all twelve notes had been there; or, to put it in a more popular way, the work would have to end when all twelve notes had occurred. Of course composition can't go on without note-repetition! So: no note must be repeated during a round of all twelve. But a hundred "rounds" could happen at once; that's all right! "The round of the twelve notes": that really expresses the law. Is that all clear? That's what we sensed; it only emerged by listening. Later, one looked for a particular form of row to be binding for the course of the whole composition, in order to derive everything from one thing. And this followed not only from the fact that we've lost tonality: nature expresses herself in man, too; it's always the same, only its manifestations are different.
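[Editorial illustration, not part of the lectures: the "round of the twelve notes" rule described above, that no note may recur until all twelve have sounded, can be stated as a small check on a sequence of pitches. The function name and the pitch-class representation (integers, C = 0) are modern conventions assumed here, not Webern's.]

```python
# Sketch of Webern's "round of the twelve notes" rule: within each round,
# no pitch class may be repeated; once all twelve have sounded, a new
# round begins and any note may appear again.

def check_rounds(melody):
    """Return True if no pitch class recurs inside any round of twelve."""
    seen = set()
    for pitch in melody:
        pc = pitch % 12          # reduce to a pitch class 0-11
        if pc in seen:
            return False         # repetition before the round was complete
        seen.add(pc)
        if len(seen) == 12:      # the round is complete; start a fresh one
            seen.clear()
    return True

# A full chromatic round followed by a fresh start: allowed.
print(check_rounds(list(range(12)) + [0, 3]))   # True
# A note returning before all twelve have sounded: not allowed.
print(check_rounds([0, 4, 7, 0]))               # False
```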
That's what Goethe says. His "Plant Metamorphosis" clearly shows the idea that everything must be just as in Nature: root, stalk, blossom, each different from the others and yet the same; and that's also the significance of our basic shape. It's Goethe's idea that on this basis one could invent plants ad infinitum. And in Goethe's view the same holds good for the bones of the human body: man has a series of vertebrae. Primeval plant, primeval bone: it's always the same, but always in a particular form.

The same happens in Schoenberg's discovery, but in a quite matter-of-fact way: all twelve notes in a particular order, and they have to unfold time after time in that way! A particular succession of twelve notes, the basic shape, is constantly there, each appearance different from the others and yet similar. This is very akin to Goethe's conception of the rules of order and the significance that are in all natural events. And what is manifest in this view? That everything is the same. Nor need we be afraid that things will manifest themselves with too little variety because the course of the series is fixed.

And now let's switch back to the masters of the second Netherland school! Then a composer would build a melody out of the seven notes, and surely the maximum unity imaginable is when everyone sings the same thing all the time. But why was it interesting to us that "the same thing" was sung all the time? One tried to create unity, more unity, nothing else at all! Some remarkable things were involved; there are relationships between things.

And here we come to the salient point; pay attention, for now you will understand how the style arose: composition with twelve notes related only one to another. One put the twelve notes in a special order, to whose course the composition was tied; one didn't leave the order to chance. And now everything, melody and accompaniment alike, is derived from this chosen succession of twelve notes, without any of them being repeated; once all twelve have been there, one can start again. There can even be a twelve-note chord; once such chords have been written one could start again, and even then something else could be heard at the same time. What's happened? Let's sum up: composers tried to create unity in the growth of melody and of accompaniment, and here unity is completely ensured by the underlying series, since everything is related to this scale. For example, it was found disturbing if a note was repeated during a theme: once a note has sounded, the other notes of the row must follow before it may return. But the great advantage is that I can treat thematic technique much more freely, and thematic technique works as before, from the point of view of unity; so this produces the tightest maximum unity.

Now I'm asked, "How do I arrive at this row?" Not arbitrarily, but according to certain secret laws, perhaps so that as many intervals as possible are provided. I can imagine doing it on purely constructive lines; but speaking from my own experience, I've mostly come to it in association with what in productive people we call "inspiration." (A tie of this kind is very strict, so that one must consider very carefully and seriously, just as one enters into marriage: the choice is hard!)

Now we base our invention on a scale that has not seven notes but twelve. Just as earlier composition was in major, say, and one was obliged to return to the tonic, so now it is composition with twelve notes related only to each other. Earlier, when one wrote in C major, one was tied to the nature of this scale and one also felt "tied" to it, otherwise the result was a mess; but there the ties are only partial. "What we establish is the law."

Naturally all this had its preliminary stages; it didn't all come about in a hurry. Schoenberg, in a work he has still not finished and that nobody has seen, his "Jacob's Ladder" (about 1921), tied himself not to twelve notes but to seven; even in his "Serenade" (Op. 24) the ties are only partial. But finally Schoenberg expressed the law with absolute clarity: the course of the twelve notes. Since that time he's practised this technique of composition himself (with one small exception), and we younger composers have been his disciples.

But then what can one do with these twelve notes? The row can give rise to variants: we also use the twelve notes back to front (that's cancrizans), then inverted (as if we were looking in a mirror), and also in the cancrizans of the inversion. That makes four forms. We can base them on every degree of the scale, 12 x 4 making 48 forms. Enough to choose from! Until now we've found these 48 forms sufficient: 48 forms that are the same thing throughout.

We've reached the end! Ever more complete comprehension of the tonal field, and ever clearer presentation of ideas: I've followed it through the centuries, starting from the Netherlanders, and I've shown here the wholly natural outcome of the ages. To take one more bird's-eye view of it all: if this is the outcome of a natural process, of sound as natural law related to the sense of hearing, what do we see working through this development? I want to end by quoting a saying by one of the most wonderful thinkers of our time. In his book on Virgil,* Theodor Haecker mentions the expression "labor improbus," referring to agriculture: work in the service of the Almighty, "so that a primal blessing shall come to bestow greater blessings!"

(10th April, 1933)

* Theodor Haecker, "Vergil, Vater des Abendlandes" (Virgil, Father of the West), Leipzig, 1931.
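[Editorial illustration, not part of the lectures: the four transformations and twelve transpositions described above (cancrizans, inversion, cancrizans of the inversion, 12 x 4 = 48 forms) can be computed mechanically once the notes are written as pitch classes 0-11. The labels P/R/I/RI are later conventions, not Webern's; the sample row is the one commonly cited from Schoenberg's Suite for Piano, Op. 25.]

```python
# Sketch: generating the 48 forms of a twelve-note row with pitch classes
# 0-11 (C = 0). The four basic shapes are prime (P), retrograde (R, the row
# back to front), inversion (I, intervals mirrored about the first note),
# and retrograde-inversion (RI); each is then transposed to all 12 degrees.

def transpose(form, n):
    """Shift every note up by n semitones, mod 12."""
    return [(p + n) % 12 for p in form]

def inversion(row):
    """Mirror each note about the row's first note: p -> 2*first - p."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

def retrograde(row):
    """The row back to front (the 'cancrizans')."""
    return list(reversed(row))

def all_forms(row):
    """All 48 forms: P, R, I, RI on every degree of the scale."""
    basics = {
        "P": row,
        "R": retrograde(row),
        "I": inversion(row),
        "RI": retrograde(inversion(row)),
    }
    return {(name, n): transpose(form, n)
            for name, form in basics.items() for n in range(12)}

# Row of Schoenberg's Op. 25: E F G Db Gb Eb Ab D B C A Bb
row = [4, 5, 7, 1, 6, 3, 8, 2, 11, 0, 9, 10]
forms = all_forms(row)
print(len(forms))  # 48
```

Note that every one of the 48 forms is still a permutation of all twelve pitch classes, which is exactly the point of the "round" described earlier: the unity lies in the order, not in any key.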
THE PATH TO TWELVE-NOTE COMPOSITION

This year I was to talk in Mondsee on this subject, so I had a brief correspondence with Schoenberg about what such a lecture should be called. He suggested the title you've seen — after all, it's Schoenberg's path: I didn't invent twelve-note composition. Now we shall try to probe deeper into this story, to show how one thing leads to another; it's to some extent historical, looking back at the predecessors. What is this "twelve-note composition," and what preceded it? This music has been given the dreadful name "atonal music." What's meant is music in no definite key; but since "atonal" means "without notes," that's meaningless — Schoenberg gets a lot of fun out of it. Have you ever looked at a work of that kind? Do we know, after all, what it means: "twelve-note composition"?

But I don't want to trust you with these secrets straight away — and they really are secrets! Secret keys. Such keys have probably existed in all ages, and people have unconsciously had more or less of an idea of them. So what has in fact been achieved by this method of composition? What territory, what doors have been opened with this secret key? To be very general: it's a matter of creating a means to express the greatest possible unity in music. There we have a word we could discuss all day: unity. Unity is very general; it is the establishment of the utmost relatedness between all component parts — surely the indispensable thing if meaning is to exist. It's my belief that ever since music has been written, all the great composers have instinctively had this before them as a goal. So in music, as in all other human utterance, the aim is to make as clear as possible the relationships between the parts of the unity.

Perhaps it's important today to talk about these things generally — I mean things so general that everyone can understand them, even those who only want to sit and listen passively. For I don't know what the future has in store.

So: what is music? Music is language. A human being wants to express ideas in this language — but not ideas that can be translated into concepts: musical ideas. What is a musical idea? (whistled) "Kommt ein Vogerl geflogen" — that's a musical idea! Schoenberg went through every dictionary to find a definition of an "idea," but he never found one. I want to say something, and obviously I try to express it so that others understand it; man only exists insofar as he expresses himself, and music does it in musical ideas. Schoenberg uses the wonderful word "comprehensibility" (it constantly occurs in Goethe!). Comprehensibility is the highest law of all. Unity must be there; there must be means of ensuring it; and all the things familiar to us from primitive life must also be used in works of art. So men have looked for means to give a musical idea the most comprehensible shape possible.

Throughout several centuries one of these means was tonality. Since the seventeenth century, major has been distinguished from minor, like genders; this stage was preceded by the church modes — seven keys, in a way — of which only the two keys, major and minor, finally remained. And these two have produced something that's above gender: our new system of twelve notes. What did this tonal unity consist of? Of the fact that a piece was written in a certain key. The piece had a keynote: it was maintained, it was left and returned to, it constantly reappeared. It was the principal key, which was selected, and this made it predominant; it was natural for the composer to be anxious to demonstrate this key very explicitly. A main key crystallised in the exposition; in the development it was left; in the recapitulation it was returned to; and there were codas, to pick out this main key yet more definitely.

Beside that, thematic development can produce many relationships between things, and canonic and contrapuntal forms are further means of producing unity. The most splendid example of this is Johann Sebastian Bach, who wrote the "Art of Fugue" at the end of his life. This work contains a wealth of relationships of a wholly abstract kind; although there's still tonality here, it's the most abstract music known to us. (Perhaps we are all on the way to writing as abstractly.) In it there are things that look forward to the most important point about twelve-note composition: a substitute for tonality.

Returning to tonality: it was an unprecedented means of shaping form, of producing unity. Now that's disappeared! What has been given up? The key has disappeared! Let's try all the same to find unity. Something had to come and restore order, and that's where we must look for the further element in twelve-note composition: it wasn't merely the fact that tonality disappeared and one needed something new to cling to. No! Beside that, there was another very important thing — but for the moment I can't hope to say in one word what it is. I have to keep picking out these things, because there are two paths that led unavoidably to twelve-note composition.

How did tonality disappear? At first one did think: "We don't need these relationships any more." Instead of the chords of the sub-dominant, dominant and tonic, one increasingly used substitutes, and then altered even those. The substitutes became so predominant that the need to return to the main key disappeared; the main key was at times pushed to one side — as in sonata movements, where some other key is often forced into it like a wedge — and this finally led to the break-up of the main key. What is a cadence? The attempt to seal off a key against everything that could prejudice it: "Here I am at home — now I'm going out — I look around me — I can wander off as far as I like while I'm about it — until I'm back home at last!" But composers wanted to give the cadence an ever more individual shape. At first one still landed in the home key at the end; it was possible to go into another tonality here and there; but gradually one went so far that finally there was no longer any feeling that it was necessary really to return to the main key — our ear is satisfied without it. (When one moved from the white to the black keys, one wondered: "Do I really have to come down again?") The fact that cadences were shaped ever more richly led, in the end, to tonality suddenly vanishing. The time was simply ripe for its disappearance.

"Is that possible?" Naturally this was a fierce struggle; inhibitions of the most frightful kind had to be overcome, the panic fear: "Where has one to go, then?" What I'm telling you here is really my life-story: this whole upheaval started just when I began to compose. All the works that Schoenberg, Berg and I wrote before 1908 belong to the stage of tonality. Then, about 1908, Schoenberg's piano pieces Op. 11 appeared: those were the first "atonal" pieces. Since then a quarter of a century has already gone by. From 1908 to 1922 was the interregnum — fourteen years, nearly a decade and a half; the first of Schoenberg's twelve-note works appeared in 1922.

The matter became really relevant at the time when I was Schoenberg's pupil. But already in the spring of 1917 — Schoenberg lived in the Gloriettegasse at the time, and I lived quite near — I went to see him one fine morning, to tell him I had read in some newspaper where a few groceries were to be had. In fact I disturbed him, and he explained to me that he was "on the way to something quite new." He didn't tell me more at the time, and I racked my brains: "For goodness' sake, whatever can it be?" (The first beginnings of this music are to be found in the music of "Jacob's Ladder.") Schoenberg saw by pure intuition how to restore order. So it came about that gradually a piece was written that wasn't in a definite key any more.
It will be very useful to discuss the last stage of tonal music, to find where the breach historically started. This was the point where even classical composers often wandered far from the home key, in sonata movements, and used resources that had a fatal effect on the key at the very place where it was particularly important to let the key emerge clearly. Certain chords and harmonic relationships had a radical, radicalising effect, until finally music quite simply gave up the formal principle of tonality.

Naturally, these things were misunderstood too. Beethoven and Wagner were also important revolutionaries, because they brought about enormous changes in style. Max Reger certainly developed — as a man develops between his fifteenth year and his fortieth — but stylistically there were no changes: he could reel off fifty works in the same style. Look at Schoenberg! Suppose he'd written more works in the same manner — an opera in the style of the "Gurrelieder"? "We find it downright impossible to repeat anything," Schoenberg said. In fact we have to break new ground with each work: each work is something different, something new. How do people hope to follow this? Obviously it's very difficult. And never in the history of music has there been such resistance as there was to these things; naturally it's nonsense to advance "social objections." Why don't people understand? Our push forward had to be made — a push forward such as never was before. These experiences happened to us unselfconsciously and intuitively; they tumbled over one another. You're listening to someone who went through all these things and fought them out. I've tried to make this stage really clear to you, and to convince you that just as a ripe fruit falls from the tree, music has quite simply given up the formal principle of tonality. There are still people who base their composition on tonality — and this is highly revealing.

(15th January, 1932)

II

Let's take another look at what led to the disappearance of tonality. The desire to set up material contradicting the chosen main key — one could say, even in the "harmonic" sense — to limit the district known as the tonic and then to drive in wedges, finally led to these contradictions being shown up in a special light at the very place where one wanted the key to emerge: the cadence. You surely know that the whole system is built on the fact that one regards the different notes of the scale as degrees, and can take the relationships of the individual degrees in various ways. After all, there isn't merely one supertonic but two: in C major, one is D, the other D flat — the flat second of C major, the Neapolitan sixth. Then there is the minor subdominant (F minor in C), and, deriving from this, the sixth above the minor subdominant (in C, the chord F–A flat–D flat). Another means of modulation is the augmented five-six chord (in C major, for example, F sharp–A flat–C–D). I can exploit the double meaning of all these chords so as to move elsewhere as fast as possible. If we do this for each degree of the scale, what emerges? The chromatic scale — and the twelve-note scale is complete. That is the chromatic path: the path where one moves by semitones. This example is itself enough to show clearly the path that could lead to twelve-note composition.

An example you will find very striking is the end of Johannes Brahms' "Parzenlied." The cadences found here are astonishing, and so is the way its really remarkable harmonies, especially at the end of the work, already take it far away from tonality!
Such chords could be used without preparation and without resolution; the clichés simply disappeared. One went on never calling things by their right name, using one substitute after another for the basic chords, preferring to leave open everything that's implied. So a state of suspended tonality was created: the tonic itself was not there — it was suspended in space, invisible, no longer needed. Indeed, it would already have been disturbing actually to introduce the keynote, if one had truly taken one's bearings by the tonic. Relationship to a keynote became ever looser, and this opened the way to a state where one could finally dispense with the keynote altogether. In the end our ears no longer made us feel we had to intervene: all twelve notes came to have equal rights. "Any kind of unity is possible!" Schoenberg said of this way of circling round an invisible centre.

(22nd January, 1932)

III

Brahms is a much more interesting example here than, for example, Wagner. In Wagner harmony is of the greatest importance, but Brahms is in fact richer in harmonic relationships. The possibility of rapid modulation has nothing to do with this development. Still, look how far modulation went: I go out into the hall to knock in a nail; on my way there I decide I'd rather go travelling; I get into a tram, come to a railway station, go on travelling, and finally end up in America! That's modulation! In order to extend tonality we took steps to preserve it — and we broke its neck!

Something else was eating away at the old tonality: the whole-tone scale, which consists of only six notes. It's nonsense to believe this originates in Oriental or Far-Eastern music! Its origin is simply and solely the urge for expressiveness ("Hoiotoho!" in Wagner's "Walküre"); its origin is melodic. Its first use in six-note chords was by Debussy in "Pelléas and Mélisande"; and there are the fourth-chords in Schoenberg's Chamber Symphony. Simply by adding one such six-note chord to another, analogously constructed (for instance F–A–C sharp–G–B–D sharp), we produce a twelve-note chord.

(29th January, 1932)

IV

Today we shall examine tonality in its last throes. I want to prove to you that it's really dead; once that's proved, there's no point in going on dealing with something dead. Schoenberg's "Music for a Film Scene" (Op. 34, written in 1930) will be played — music by Schoenberg that's no longer in any key. Schoenberg's publishing house in Magdeburg had commissioned a number of prominent composers to write music to accompany a film scene; commissions went out to Richard Strauss, to Franz Schreker, and also to Schoenberg. The content is roughly: threatening danger, panic fear, catastrophe — the sense of everything that happens as the music unfolds. As we approach the catastrophe, we see everywhere the unity with what happened earlier.

Now you will have an idea of how we — Berg and I — wrestled with all this; it will get into my biography, for we went through it all personally, amid frightful struggles. In 1906 Schoenberg came back from a stay in the country, bringing the Chamber Symphony. It made a colossal impression, and I immediately felt: "You must write something like that, too!" Under the influence of the work I wrote a sonata movement the very next day. In that movement I reached the farthest limits of tonality, and both of us sensed that in this sonata movement I'd broken through to a material for which the situation wasn't yet ripe. I'd been his pupil for three years. Then I was supposed to write a variation movement, but I thought of a variation theme that wasn't really in a key at all. I finished the movement; it was still related to a key, but in a very remarkable way — so to speak, "suspended tonality!" By pure intuition, Schoenberg's uncanny feeling for form had told him what was wrong; the purely theoretical side had given out, and when Schoenberg called on Zemlinsky for help, he dealt with the matter negatively. Indeed I did go on to write a quartet in C major — but only in passing, precisely because I went on in order to safeguard the keynote. At that time Schoenberg was enormously productive: every time we pupils came to him, something else was there.

Look what else happened! Schoenberg's song Op. 14, "Ich darf nicht dankend an dir niedersinken": the last bar is in B minor, with two sharps in its key-signature, and it does still end in B minor — but in a very remarkable way: degrees II and V, no more return to the tonic by a full cadence. Even if we still have, at the end, to produce a relationship to the tonic, the keynote need hardly be used or emphasised: it is invisible. So there's nothing new here; only the means used are different. (Op. 14 has two songs; the second, "In diesen Wintertagen," is in C major.)

(4th February, 1932)

V

Clearly this period really started with the George songs, Op. 15. In the George songs it would also still be possible to make out a key, but in a very remarkable way. You'll recall the first song of Schoenberg's Op. 15: one could conceivably take it as G major, and add a G major chord at the end in order to produce the tonic — but it doesn't close in any key. The song returns to its opening, and a repetition would sound trivial to anyone of sensitivity; to anyone with a refined sense of form it was all over: "This is the end!" Anyone can tell when a piece is over; everyone feels the end anyway, and no-one knows where the one thing ends and the other begins. Notice, too, the way Schoenberg returns at the end to what happened at the beginning!

Then Schoenberg's Op. 11: three piano pieces (written about 1908). There's hardly a single consonant chord any more. Yet No. 1 ends on E flat: the final bass note is E flat, the fundamental. How does the piece come to have E flat as a tonic? And in No. 2* the bass note E flat comes as early as bar 2, and again in bar 12. But though things had gone so far, we still find the very important factor that governed music for centuries: this exploitation of relationship to a key. This reference to a tonic is meant to show how much all these changes still took place within the bounds of harmonic progression. Why is it still so there — and not so any longer here? What's the explanation? One must try to solve the problem by coming at it from all sides.

* It has been suggested that Webern's "No. 1" and "No. 2" mean that he was making two separate points, both about the second piece; the passage remains obscure.
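Webern's earlier remark — that two analogously built six-note whole-tone chords together produce a twelve-note chord — can be checked with a little pitch-class arithmetic. The following sketch is an editorial illustration, not part of the lecture text; it simply shows that the two whole-tone collections share no notes and between them exhaust the chromatic scale (pitch classes numbered C = 0 ... B = 11).

```python
# Pitch classes: C=0, C#=1, D=2, ..., B=11.
WHOLE_TONE_A = {0, 2, 4, 6, 8, 10}   # C  D  E  F# G# A#
WHOLE_TONE_B = {1, 3, 5, 7, 9, 11}   # C# D# F  G  A  B

# The two whole-tone collections have no note in common...
assert WHOLE_TONE_A & WHOLE_TONE_B == set()

# ...and together they exhaust the chromatic scale:
assert WHOLE_TONE_A | WHOLE_TONE_B == set(range(12))

print("stacking the two six-note chords gives",
      len(WHOLE_TONE_A | WHOLE_TONE_B), "different notes")
```

So any six-note chord drawn entirely from one whole-tone collection, combined with the analogous chord from the other, sounds all twelve notes at once — the "twelve-note chord" of the lecture.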
This question really takes us into the inmost mystery of twelve-note music. In Op. 11, up to bar 13, every note of the chromatic scale occurs except E flat; the B flat in the bass (a B flat triad!) is in fact there, but the E flat never comes. The beginning — D–F–D — shows quite clearly how, through its entire layout, everything is related to the tonic E flat: but this E flat is not introduced as a tonic, either directly or in the course of the piece. The withheld note, in some way, "got its own back." What does this show us once again? One's tonal feeling is aroused (D minor would be quite feasible as a reading; the keynote could also be B flat). This relationship to a key was always there up to now; but here things have asserted themselves that made such a "key" simply impossible. In this musical material, new laws have come into force that make it impossible to describe a piece as being in one key or another.

It isn't easy to talk about all the things we've been through, so I can only relate something from my own experience. About 1911 I wrote the "Bagatelles for String Quartet" (Op. 9), all very short pieces, lasting a couple of minutes — perhaps the shortest music so far. Here I had the feeling: "When all twelve notes have gone by, the piece is over." In my sketch-book I wrote out the chromatic scale and crossed off the individual notes. Why? Because I had convinced myself: "This note has been there already." It sounds grotesque, incomprehensible, and it was incredibly difficult. It was a matter of constant testing: "Are these chordal progressions the right ones? Am I putting down what I mean? Is the right form emerging?" (As Schoenberg said: "The most important thing in composing is an eraser!") But the inner ear decided quite rightly that the man who wrote out the chromatic scale and crossed off individual notes was no fool. Much later I discovered that all this was part of the necessary development. (Josef Matthias Hauer, too, went through and discovered all this in his own way.) My Goethe song "Gleich und Gleich" (Four Songs, Op. 12, No. 4, composed in 1917) begins as follows: G sharp–A–D sharp–G, then F sharp–B–F–C sharp, then a chord E–C–B flat–D. That makes twelve notes: none is repeated.

At that time we were not conscious of the law, but we had been sensing it for a long time. One sensed that the frequent repetition of a note was disturbing: if one note occurred a number of times during some run of all twelve, it would acquire a certain special status — it had to be given its due, and that was still possible at this stage. In short, a rule of law emerged: until all twelve notes have occurred, none of them may occur again. The most important thing is that each "run" of the twelve notes marked a division within the piece, idea or theme. Then one day Schoenberg intuitively discovered the law that underlies twelve-note composition. Individual parts in a polyphonic texture no longer moved in accordance with major and minor: all twelve notes have equal rights, and there's no longer a tonic. An inevitable development of this law was that one gave the succession of the twelve notes a firmly fixed order: the row. In short, the twelve notes have come to power, and the practical need for this law is completely clear to us today. We're at the goal: twelve-note composition is not a "substitute for tonality" but leads much further.
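Webern's sketch-book procedure — writing out the chromatic scale, crossing off each note as it is used, and treating each completed run of all twelve as a division within the piece — can be expressed as a small program. This is an editorial sketch; the helper function and the melody are invented for illustration, not taken from the lectures.

```python
def twelve_note_divisions(melody):
    """Return the indices at which a complete run of all 12 pitch classes ends."""
    divisions, crossed_off = [], set()
    for i, note in enumerate(melody):
        crossed_off.add(note % 12)     # cross this note off the chromatic scale
        if len(crossed_off) == 12:     # every chromatic note has now occurred
            divisions.append(i)        # this "run" marks a division...
            crossed_off = set()        # ...start crossing off a fresh scale
    return divisions

# An invented 24-note melody that runs through the chromatic total twice:
melody = [0, 4, 7, 11, 2, 5, 9, 1, 3, 6, 8, 10] * 2
print(twelve_note_divisions(melody))   # -> [11, 23]
```

The function reports a division exactly where Webern's crossed-off scale would be full — here after the twelfth and the twenty-fourth note.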
This proves that it really did develop quite naturally: we can look back at its development and see no gaps.

(12th February, 1932)

VI

Before we knew about the law, we were obeying it. Great composers have always striven to express unity as clearly as possible. One means of doing it was tonality; another was provided by polyphony. One of the earliest surviving polyphonic pieces is a canon — an English summer canon from the 13th century. What is a canon? A piece of music in which several voices sing the same thing, only at different times; often what is sung occurs in a different order (crab canon, mirror canon, etc.). The crowning glory of polyphonic music was the fugue, based on a fugue theme (answer, stretto, etc.). Unheard-of things happen there: twelve parts, and each of them has begun the series (it isn't sixty parts!) — yet again, and then backwards! You won't notice this when the piece is played, and perhaps it isn't at all important; but it is unity.

Thematic unity came with homophonic music: a theme is given, and all that follows is derived from this idea. It is varied — it was the same thing, but different! Why does this crop up again? This urge leads of its own accord to a form the classical composers often turned to, and which in Beethoven became most important: variation form. An example: Beethoven's six easy variations on a Swiss song. The theme — C–F–G–A–F–C–G–F — is varied throughout, and yet it's constantly the same thing! Another example: Beethoven's Ninth Symphony, the finale theme first in unison. In this sense variation form is a forerunner of twelve-note composition. For what do we do now? The twelve notes — the succession of twelve notes in a particular, firmly fixed order — form the basis of the entire composition; that is, within the order fixed by me for the twelve notes, none may be repeated. If one of them were repeated before the other eleven had occurred, it would acquire a certain special status. Today we've arrived at the end of this path.

Now something very remarkable emerged: an urge to deepen and clarify the unity, an attempt to create some kind of unifying thematic connection between the principal part and the accompaniment. We see an absolute pull from homophonic music back to polyphony. There is a further development of unity in Brahms and in Mahler; in Schoenberg's string quartet (in D minor) the accompanying figure is thematic! This urge to create unity, bound up with the urge toward thematic development, has also been served by older ways of presenting an idea: one such way is inversion; another is mirroring; backwards movement is the cancrizan. The development of tonality meant that these old methods of presentation were pushed into the background, but they still make themselves felt, even in classical times; in earlier music, unity was mostly felt only unconsciously. So an idea should be presented in the most multifarious way possible, seen from all sides.

(19th February, 1932)
VII

Last time, starting from Goethe's "primeval plant," we dealt with the other path: the urge toward thematic development, which led to an ever-increasing refinement of the thematic network. Remember the canon form we mentioned: everyone sings the same thing. If everyone sings "Shut the door" — or, as Schoenberg said about the Sonnet from his "Serenade," "I am an ass" — several times, even quite identically, then unity of that kind is already established. Something that seems quite different is really the same. The same law applies to everything living: "variations on a theme" — that's the primeval form, which is at the bottom of everything. In Goethe's "primeval plant," the root is in fact no different from the stalk, the stalk no different from the leaf, and the leaf no different from the flower: variations of the same idea.

How has such an unusual degree of unity come about in twelve-note music? Through the fact that in the course of the row on which the composition is based, no note may be repeated before all have occurred. (Here too the result can be rubbish, as in tonal composition: nobody blamed major and minor for it!) If an untutored ear can't always follow the course of the row, there's no harm done: something will stick in even the naivest soul, since the row constantly enhances comprehensibility. So there will be a multiplication of all the things that were felt, in a way, in tonality. For the rest, on the basis of this series one will have to invent.

All the works created between the disappearance of tonality and the formulation of the new twelve-note law were short — strikingly short. The longer works written at the time were linked with a text which "carried" them (Schoenberg's "Die glückliche Hand" and "Erwartung," Berg's "Wozzeck") — that's to say, with something extra-musical. With the abandoning of tonality, the most important means of building up longer pieces was lost, for tonality was supremely important in producing self-contained forms: considerations of symmetry, regularity and adherence; the main key is left, so that in a sense the second part is the dominant of the first part (the "tonic"), and the recapitulation will naturally return to it. At the time everything was in a state of flux — uncertain, dark, but very stimulating and exciting, so that there wasn't time to notice the loss. As if the light had been put out! That's how it seemed (at least this is how it strikes us now). Only when Schoenberg gave expression to the law were larger forms again possible. However much the theorists may argue about it, we didn't create the new law ourselves: it forced itself overwhelmingly on us. This compulsion — often burdensome, if you like — is so powerful that one has to consider very carefully and seriously before finally committing oneself to it for a prolonged period; but it's salvation! We couldn't do a thing about the dissolution of tonality.

(26th February, 1932)

VIII

Linking up with my last remarks, I should like to say something today about the purely practical application of the new technique. How is the system now built up? Our inventive resourcefulness discovered the following forms: besides the basic shape there are the cancrizan, the inversion, and the inversion of the cancrizan — four forms altogether; there aren't any others. Each of these four forms can be based on each of the twelve degrees of the scale; bearing these twelve transpositions in mind, each row can manifest itself in 48 different ways.

How does the row come to exist? It's not arbitrary, not the result of chance. Our rows — Schoenberg's, Berg's and my own — mostly came into existence when an idea occurred to us, linked with an intuitive vision of the work as a whole; the idea was then subjected to careful thought, just as one can follow the gradual emergence of themes in Beethoven's sketchbooks. At once re-casting, development starts. Inspiration, if you like — a difficult moment: trust your inspiration! There's no alternative. But I can also work with certain formal considerations in mind: symmetry, regularity, groupings (thrice four, or four times three notes, for instance), certain correspondences within the row; or one aims at as many different intervals as possible. Such points are now to the fore, as against the emphasis formerly laid on the principal intervals — dominant, subdominant, mediant; for this reason the middle of the octave — the diminished fifth — is now most important. For the rest, one works as before. The original form and pitch of the row occupy a position akin to that of the "main key" in earlier music; the recapitulation will naturally return to it, and we end "in the same key!" This analogy with earlier formal construction is quite consciously fostered; here we find the path that will lead us again to extended forms.
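The system just described — four basic shapes, each available on any of the twelve degrees — is easy to state in pitch-class arithmetic (C = 0 ... B = 11). The sketch below is an editorial illustration, not notation from the lectures; for concreteness it uses the row of Schoenberg's Wind Quintet, Op. 26, as spelled elsewhere in the text.

```python
def inversion(row):
    """Mirror every interval of the row around its first note."""
    return [(2 * row[0] - n) % 12 for n in row]

def cancrizan(row):
    """The 'crab': the row back to front."""
    return list(reversed(row))

def transpose(row, t):
    """The row based on another degree of the chromatic scale."""
    return [(n + t) % 12 for n in row]

def all_48_forms(row):
    """Basic shape, cancrizan, inversion, cancrizan of the inversion,
    each in all twelve transpositions: 4 x 12 = 48 forms."""
    shapes = [row, cancrizan(row), inversion(row), cancrizan(inversion(row))]
    return [transpose(shape, t) for shape in shapes for t in range(12)]

# Eb G A B Db C | Bb D E F# Ab F  (Schoenberg, Wind Quintet Op. 26)
row = [3, 7, 9, 11, 1, 0, 10, 2, 4, 6, 8, 5]
forms = all_48_forms(row)
print(len(forms))   # -> 48
```

Every one of the 48 forms still contains each of the twelve notes exactly once — "the same thing throughout," as the lectures put it. (For certain symmetrically built rows some of these forms coincide, a point the text returns to with Webern's own Op. 21.)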
there seven: our adherence to the row is indeed a particularly strict adherence. so that in a sense it's the dominant of the first part (" tonic "). it destroys constantly comprehensibility." But I can also work as a rule. adhering to nothing except the row. Op. Only now is it possible to compose in free fantasy. Bach wanted to show all that could be extracted from one single idea. From bar 8 onward the notes are differently distributed among the individual instruments. (Naturally any note can also occur in whatever octave one pleases. In bar 7 the cancrizan of the row occurs in the flute part. In the third movement the row is at first divided between horn and bassoon. with a certain regularity the horn picks out notes of the row for its melody. Only after the formulation of the law did it again become possible to write longer pieces. We that way. " " In this sense the Art of Fugue is equivalent to what we are writing in our twelve-note composition. Schoenberg's Wind The row is Quintet. " As an example. At least it's impossible to write long stretches of music in The twelve-note row " is. This is how unity is ensured.How is free invention possible when one question put to me by one of you: has to remember to adhere to the order of the series for the work?" Strictly speaking. J. but adherence of this kind has always existed. theme. or a fifth higher if you like. Practically speaking. S. in the strict polyphonic forms such " as canon and fugue. One invents on this new basis. here the chromatic scale. and the second of which lies a fourth lower. " What can I do with these few notes?" There's forever something different yet the same. Here we find that pedal-like repetitions of the same note don't infringe the basic law. What else could this work be but the Fugue answer to the question. which are tied to the chosen theme. because of the unity that's now been achieved in another way . To put it We want to say " " 55 . what has been said before. 
something surely sticks in the ear, even if one's unaware of it; and we've often found that a singer involuntarily continues the row even when for some reason it's been interrupted in the vocal part. The answer might also be this: "Couldn't one ask the same question about the seven-note scale?" Here twelve notes are the basis; in Bach it's the seven notes of the old scale that are the basis. The details of twelve-note music are different, but as a whole it's based on the same way of thinking. Bach's "Art of Fugue" is based on a single theme, and everything has a deeper unity; the row ensures unity, since it enhances comprehensibility. This is a particularly intimate unity, not a music without thematicism. But now in a quite new way I can invent more freely, that's to say much more freely: I know how I invent a fresh idea, and then I look for the right place to fit it in. As we gradually gave up tonality an idea occurred to us: "We don't want to repeat; there must be something new!" Obviously this doesn't work; even the Netherlanders didn't manage it. When this true conception of art is achieved, wherever we cut into the piece the course of the row must always be perceptible. So this is the "primeval plant" we discussed recently! Ever different and yet always the same!

The row of the Quintet, Op. 26, is E flat, G, A, B, D flat, C; B flat, D, E, F sharp, A flat, F. One can see at a glance that the row falls into two parts that are of parallel construction as regards intervals.

In my Symphony the cancrizan appears at the beginning, in the accompaniment to the theme. The first variation is, in the melody, a transposition of the row starting on C. So the entire movement is itself a double canon by retrograde motion! Now I must say this: what you see here is cancrizan and canon, and you must allow that there are indeed many connections here! Finally I must point out to you that this is so not only in music. Now a cancrizan, now an inversion? Naturally that's a matter for reflection and consideration. So here there are only 24 forms.
The reason is that there are a corresponding number of identical pairs. Quite paradoxically, only through these unprecedented fetters has complete freedom become possible! Here I can only stammer.

An example: the second movement of my Symphony (Op. 21, written in 1928). The row is F, A flat, G, F sharp, B flat, A; E flat, E, C, C sharp, D, B. It's peculiar in that the second half is the cancrizan of the first. The accompaniment is a double canon, and in the fourth variation there are constant mirrorings; this variation is itself the midpoint of the whole movement, after which everything goes backwards. Greater unity is impossible. How does a man keep the 48 forms in his head? How is it that he takes now number seven, then number forty-five? Constantly producing the same thing isn't to be regarded as a "tour de force"; that would be ludicrous. Everything is still in a state of flux, and how it continues it's for a later period to discover, along with the closer unifying laws that are already present in the works themselves. The old Netherlanders were similarly unclear about the path they were following. The further one presses forward, the greater becomes the identity of everything, and finally we have the impression of being faced by a work not of man but of Nature. In the end this development led to Schoenberg's "Harmonielehre"; here there's certainly some underlying rule of law, and it's our faith that a true work of art can come about in this way. Then there will no longer be any possible distinction between science and inspired creation.

We find an analogy in language: unity also has to be created there. I was delighted to find that such connections also often occur in Shakespeare, in alliteration and assonance; he even turns a phrase backwards. Karl Kraus's handling of language is also based on this. And I leave you with an old Latin saying: SATOR AREPO TENET OPERA ROTAS.

(2nd March, 1932)
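The arithmetic behind the "48 different ways" and the "only 24 forms" can be checked mechanically. The sketch below is mine, not Webern's: pitch classes are numbered 0 to 11 with C = 0, the four shapes (basic set, inversion, cancrizan, inverted cancrizan) are each taken at twelve transpositions, and the Op. 21 row's tritone symmetry, its second half being the cancrizan of the first, collapses the 48 candidates into identical pairs.

```python
# Sketch (not from the book): counting the distinct forms of a twelve-note row.
# Pitch classes 0-11, with C = 0.

def transpose(row, n):
    return [(p + n) % 12 for p in row]

def inversion(row):
    # Mirror every interval around the first note of the row.
    return [(2 * row[0] - p) % 12 for p in row]

def cancrizan(row):
    # Webern's term for the retrograde: the row read backwards.
    return row[::-1]

def distinct_forms(row):
    """All distinct rows among basic set, inversion, cancrizan and
    inverted cancrizan, each taken at the twelve transpositions."""
    shapes = [row, inversion(row), cancrizan(row), cancrizan(inversion(row))]
    return {tuple(transpose(s, n)) for s in shapes for n in range(12)}

# A row without internal symmetry yields the full 48 forms.
generic_row = [0, 2, 1, 5, 3, 8, 4, 10, 7, 11, 9, 6]
print(len(distinct_forms(generic_row)))  # 48

# The Symphony Op. 21 row: F Ab G F# Bb A / Eb E C C# D B.
# Its second half is the cancrizan of the first at the tritone, so every
# cancrizan duplicates a transposed basic set: only 24 distinct forms.
op21_row = [5, 8, 7, 6, 10, 9, 3, 4, 0, 1, 2, 11]
print(len(distinct_forms(op21_row)))  # 24
print(cancrizan(op21_row) == transpose(op21_row, 6))  # True
```

The identical pairs are exactly what the lecture claims: for the symmetric row, retrograde forms coincide with transposed basic sets (and retrograde inversions with transposed inversions), halving the count.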
POSTSCRIPT

The old Latin saying Sator Arepo Tenet Opera Rotas, with which Webern ended his lecture on March 2nd, 1932, could be translated as (among other things) "The Sower Arepo Keeps the Work Circling." The magic square in which Webern arranged the saying, readable forward and backward, clearly shows the basic principle of twelve-tone technique: the equal status of basic set, cancrizan and inverted cancrizan.

To supplement the lectures I should add a number of notes I made between September 1936 and February 1938, when I was working my way through the theory of form as Webern's private pupil. I used to go once a week to his flat in Maria Enzersdorf. We analysed classical works almost exclusively; only twice did he talk at any length about his own works, about his Symphony Op. 21 and his Quartet Op. 22. He said of the latter, when we were analysing the Scherzo of Beethoven's Piano Sonata Op. 14 No. 2, that during the analysis he had in fact realised that the second movement of his quartet was formally an exact analogy with the Beethoven Scherzo. Of my notes I shall here quote only a few that are of very general importance.

An important saying of Schoenberg's: "Compression always means extension!" A distinction between "unfolding" and "development"* of themes (Bach and Beethoven): Mozart and Haydn develop less than Beethoven; they already create room for all, as the gardener digs a furrow where he buries his shoots. In tonal music, variation is possible by merely altering the inversion or spacing of chords; in Schoenberg they serve to produce relationships of content. Then there is the horizontal presentation of musical ideas, above all in Brahms; but thematic exactness, that happens in sonata form. What has twelve-tone technique to set against this? To develop means "to lead through wide spaces."

* Ger. "Durchführung" ("leading-through"): the "development section" in sonata form.
inversion. and on my way back in the train I always hastened to jot down my experiences with Webern. In his music the independently developed subsidiaiy parts determine the character of the theme. which fill a whole notebook. The primary task of analysis is to show the functions of the individual sections. I had hoped to be able to go through it with him here The piano score of my choral piece ("Das Augenlicht") was published recently (UE). Now indeed I'm eager to know whether the B. More. less. Nobody from here can go. serial technique. 1938 and 29th April. Examining the development of variation technique one has direct access to Relationship to theme or row is quite analogous. .e. About rondo form: original character of a light closing sible. Its future and that of the Association are uncertain for the time being. Festival in from the Austrian section. festival (I. six-part Ricercar its Example: the " from the Musical Offering. Will you be going there? You did once say you meant to. at the moment I've only one pupiL You have to be patient! . chorus will learn it. As a personal July 6th.B. . At the moment I am solely responsible for signing everything . So already I .VI in the first concert of the London). because the whole is more strictly tied to the row. In studying form one ought really to take variation form as early as posSchoenberg thought so too. . friends. Did you hear about the awful thing that happened when my string trio was " performed in London? The cellist got up saying I cannot play this thing!" and walked off the platform! Surely nothing like that has ever happened . i. This time there won't be a ** delegate " is ductor to be Scherchen. from our friends? Do write again very soon.M. the parts to Kolisch in London. Just in the last few weeks I've been hard at work and have completed my string quartet (Op. But the firm- ness of a codetta. 58 . because the row gives fewer possibilities of variation than the theme. 
In Brahms and Bruckner this happens through the introduction of developing (contrapuntal) elements. 1944." development tends to take away from the rondo its movement. . I did receive an invitation but I shall hardly be able to get away.S. recollection of my from the thirty-one letters I received dear master and friend. hence the use of rondo form for middle movements as well. The conPlease send to hear me a from one's lot of news and write often.The contrast between firm first and loose is a fundamental one. subject (presentation of the theme!) is different from that of a Even in Bach's fugues this contrast can be seen in the episodes.C. . In any case it's forbidden (by law) to call itself "Austrian" any more.C. 1938 Now of all times one needs was eagerly expecting your news. in Mahler new ideas are unfolded in the episodes. Performance 17. Now it's off to America. before! What else do you hear from the world. 28). some quotations from him between April 29th. But Schoenberg once said: the row is more and less than a variation-theme. too. It's a business with my teaching. .21th July. 1939 Yes. 2. But they'd just be totally misunderstood. which songs were you thinking of? It's very important to choose the right ones. from Op. Well. It's very hard for performers and listeners to make anything of them. I too believe it would be best for you and yours to stay where you are in the present circumstances. Let's hope.. Very good. 12.II under Erich Schmid (in Winterthur) . cause me constant concern and oppress me beyond measure! But we do have a " foothold " and in my opinion an impregnable one. so I have never for one single moment lost heart (either on my own account or in my worries about others!). 4.M. everything I've mentioned is thirty years old already! And still " have to worry! As if it were a matter of world premieres. dear friend! was very pleased to have your news about the performance of my Passaon the 7. my dear Reich! Thank you very much! 
Anything of the sort did seem quite out of the question for me! I take it as a good omen! cagjia Now." Der Tag ist vergangen and gang. and that perhaps it was just as well that what you once intended didn't come about. (in Basel).e. in fact! certainly work! Look. E..C. W. Rest assured that all these difficult problems are very much on my mind. " So ich traurig bin " (that has never yet been sung!) or Ein" " " Gleich und Gleich. " from Op. that makes all the difference. I * Dr. That would be a group of 5 songs that ought to come in that order! As far as instrumental pieces of mine are concerned. 4 and 5! That would Otherwise the violin pieces would be a better idea than the cello pieces. long time yet (i. So I should set great store by its coming off! I am delighted that you thought of that piece. look.. Seen from " '* " " this foothold the authorities you mention (that's what one has to call " them!) have always looked to me like ghosts!" I Can for what 2Qth October. In certain circumstances my visit could even be of far-reaching importance for me. If an invitation to me could be arranged I should be very glad and should naturally come very gladly \* . if there were a quartet that would 5). So I wish you as long a stay as possible." from Op. As far as I'm concerned." "Kahl reckt der Baum". 3. Dear friend. We spent some memorable hours with Webern in Winterthur and Basel in February 1949. But maybe things I will change again after all. 59 . until you find something more like what you want). then perhaps Nos.R.g." If only I could at last be understood a little! But what you are doing is splendid! So About your lecture: nothing theoretical! keep my suggestions in mind. Definitely not those! Not because I don't think they are good. Werner Reinhart arranged the invitation.. Nothing experimental! Create a favourable atmosphere for the performance of the Passacaglia! play if not all 5 movements (Op. "Dies ist ein Lied. the concert planned for the LS. 
1939 perhaps be of assistance? You surely know you can count on me my feeble powers are worth! Let's hope you'll be able to stay where you are for a long. 9th December. . Already there are all sorts of things to write about it it. now I have to do work for the U. Now I'm preparing the score. thick vocal score.Rather say how you like this music! People makes a good impression . " " " overture is basically an adagio "-form. I was left out in the cold! So I had to take what there was. long time.. . but sometimes with the effect of a sostenuto. I had to put off work on the Cantata (Op. Now. I beg all could be taken of my work ! The third movement. 1939 wanted to reply at once. Beethoven's Prometheus and Brahms* " Tragic" are other overtures in adagio-forms. I settled on a form that amounts to a kind of overture. But now it's ready. . . Yet it's a strict fugue... but U. But the subject and counter-subject are related like antecedent and consequent (period). otherwise might already be finished. cl. trmbn. . tuba. 29) is now complete part double fugue. But I hope it will be possible quite soon. but there were reasons. If only some notice at * dear chap. the post was liquidated. ''Variations for Orchestra" (Op. quickly! It's a devil of a situation.. I haven't written to The piece lasts around quarter of an hour. trp. in September I lost my steady job at the Radio. the Cantata (Op. eel. I think. str. For choir. my of you. The orchestra is small: fl. timps. but the recapitulation of the first subject appears in the form of a development. And now. 30). because I was again so very glad to have your November 2nd). . will surely make it available as quickly as vertical in all other respects. alas.. More about this work next time. I sat there for weeks and weeks. ob.E. a thick. (with double bass). and that's also the title. And I wouldn't and couldn't do anything else. and that Imagine. (I'm not telling more for the time being!) Yes. I think I even said so to you.. bass cl. harp. 
not sonata form! . One could also speak of a scherzo... orchestra. say your piece and exert your influence. in general and in particular. 1941 you for a long. will believe it from you. soprano solo and also of variations! I letter (of . At the moment I haven't a single pupil. In fact there's again the synthesis.. but not that it would need that amount of time. the presentation is horizontal as to form. 29) for a time. but based on variations. something quite simple and perhaps obvious has emerged. but I was quite buried hi It's* constructed as a fourmy work.E. " " so this element is also present. and elements from the other mode of presentation (horizontal) also play a part. hn. So. March I 3rd. My possible. There isn't a copy ready yet. very quick tempo almost through- out. 60 ... Certainly I'd sensed that it would be difficult. except things that absolutely couldn't be postponed. In fact needed that long to get to the end of the score of my orchestral variations. so that you have counter to possible objections and can throw at least a certain amount of light. etc. So. again. but tries on the contrary to continue it into the future. that doesn't reject the development that came then. that came out very number of things are clearer than in my manuscript. Yes. a new one. a let's hope everything should like else will! very briefly to tell you a little about the work. something " Like Josquin orchestrated? The answer would have to be an energetic no **! What. for instance. building a tonality. I should like to talk quite I an effective differently about it to you personally. and whose formal construction relates the two possible types ofpresentation to each other. Anyway it's impossible to ignore them. 1941 The copy of my Variations is ready it's a photocopy. But a few briefly. Why.May 3rd. then? I believe. if there is still to be meaningful expression in sound! But nobody. So point one of the whole affair is approaching completion on time. that's to say. 
So do understand me aright. and doesn't aim to return to the past. is going to assert that we don't want that! So: a style. and tonight Schlee himself is taking it with him to Switzerland a particularly good idea for reasons of safety. then? Nothing like any of that! Now you would have to say unequivocally: this is music (mine) that's in fact based just as much on the laws achieved by musical presentation after the Netherlanders. Correct! But that in fact touches on the first Won't the reaction when they " ' most important point: it would be vital to say that here (in my score) there is indeed a different style. many notes they're used to seeing. but one that uses the possibilities offered by the nature of sound in a different way. whose material is of that kind. really. preceding forms followed tonality. What kind of style. Now I should be glad to explain the piece to you from the score. namely on the basis of a system that does " " relate only to each other (as Arnold has put it) the 12 different notes customary in Western music up to now. well. but doesn't on that account (I should add to clarify things) ignore the rules of order provided by the nature of sound namely the relationship of the overtones to a fundamental. Strauss. important things still. in R. 61 . nor does it look like Bach. but what sort? It doesn't look like a score from before Wagner either Beethoven. when opportunity see the score be offers. there's nothing there '"!!! Because those concerned will miss the many. as the earlier. Exactly following natural law in its material. Is one to go back still further? Yes but then orchestra! scores didn't yet exist! But it should still be possible to find a certain similarity with the type of " " archaistic ? presentation that occurs in the Netherlanders. on the basis of the law of the row. but rhythmically augmented. which unfolds in full. 
But through all possible displacements of the centre of gravity within the two shapes there's forever something new in the way of time-signature. 1941 I'm terribly sorry to be so long answering your long. Six variations follow ceived as a period. etc. it is con" '* in character. But it's reduced still more. Simply compare the first repetition of the first shape with its first form (trombone or double-bass!) And that's how it goes on throughout the whole piece. the fourth the recapitulation of the first subject for it's an andante form! but in a developing manner. the The "theme " fifth.of the Variations extends to the first double bar. so. if at all. the third the second subject. but is introductory one to the next double bar). the two tempi of the piece as well (pay attention to the metronome marks!). I think. In miniature! that's to say the row. Now. a recitative! But this section is constructed in a way that perhaps none of the " Nether" landers ever thought up. this first piece is complete and even written down in score. it But I must stop here! All the same I shall be glad to say more about another time. I've been completely absorbed in my work (2nd Cantata. it was probably the hardest task (in that respect) that I've ever had to fulfil! a four-part canon of the most complicated kind. the first Now But the succession of motives takes part in this cancrizan. the second two notes are the cancrizan of the first two. whose twelve notes. formally it's an introduction. The first piece in a new choral work (with soli and orchestra) that may well go beyond the scope of a cantata at least that's my plan. And I'd like to tell you a little about it straight away. that was quite something. welcome letter. everything that occurs in the piece is based on the two ideas given in and second bars (double bass and oboe!). see. But was only possible. by a repetition of the first shape (double-bass). that's to say motivic variation happens. 31) and still am. but in diminution! 
And in cancrizan as to motives and intervals. August 23rd. The first bringing the first subject (so to (each speak) of the overture (andante-form). That's how my row is constructed it's contained in these thrice four notes. In fact this may well be the first time it's been so completely operative. which is quite particularly in evidence here. They are followed. leads to the Coda. repeating the manner of the introduction and bridge-passage. Don't be offended. though with the use of augmentation and diminution! These two kinds of variation now lead almost exclusively to the various variation ideas. only within these limits. contain its entire content in embryo! With bars one and two. the basis is it's You the way carried out 62 . on the trombone. since the second shape (oboe) is itself retrograde. the second the bridge-passage. character. sixth variation. Op. one can straightway invent plants ad infmitum to apply to all other living matter at its deepest? " ! Isn't that be found the meaning of our law of the row. The foundations of our technique hi general are there. 30 in Winterthur 63 ." But again those relationships orchestra.I read in Plato that " Nomos " (law) is also the word for " melody. conceived rather as a * The planned performance of the finally took place on March 3rd. it. and that on December 9th. but I think I'm returning to them in a quite special sense. When one's faced.E. then you do right! Revel in sounds. 1942 I can report that I've made another fair step forward. conductors. I'd love to believe that things will stay Variations (Op. . 1942 " " So now a the way Scherchen performed news!* positive success is in sight. the melody "! But since in my case it in fact is. . It's a soprano aria with chorus oratorio and orchestra. one's thoughts are mainly (and naively) how will it sound? And one enjoys it in advance. then there must also be the right sensory impression. It's to form the first part of the planned "oratorio. 
voice gives out the law in this case the soprano soloist " " that's to say the but the Greeks had the same word for that as melody " Nomos. have already started preparing the material. The same law will July 31st." together with the preceding ones. now the Collegium Musicum should order it from them. another piece of the planned " is all in order and down on paper. truly the Nomos!" But agreed on in advance on the basis of canon! row hi itself constitutes a law. It's for choir and " chorale. the row takes on a quite special importance. equally naively! But when one actually performs. nothing happens any more unless it's agreed on in advance " " according to this melody "I It's the law. but at least I've recognised what's involved! In my case. only always been in music by the masters! Whether I shall bring God knows. melody the soprano soloist sings in my piece may be the law (Nomos) for all that follows! " " As with Goethe's " the as the introduction (recitative) primeval plant with this model. My " A That's it how it's off as they did. Everything going as well and pleasantly as last time dear fellow ! ! cheering thought ! it's a very My ! The U. time has been wholly taken up with it lately. " 4th September. by a first performance. and the key to . . especially orchestral." for law: So the " melody " has to " lay down the law." Now. " Variations for Orchestra 1943. you Meanwhile I've completed another piece. on a higher level so to speak. rather like the chorale melodies in Bach's arrangements. . " Op. but it needn't also be the Naturally. 30) will really be told you." About my work. You can imagine how pleased I was about your my . for which many thanks again! But this is how it was. I think the look of the score will amaze you. Whichever suit Frau Gradmann best. so. three and four (cancrizan). 5* " But formally the Art of Fugue. but once again I've hardly taken my told eyes off my work. It's all even stricter. 27) between them. 
I think it would be best to put these songs in the middle of the programme. 23 and 25). Because it was very important for me to check personally what it proves and I believe I was right. Before and after this group. It's a three-part chorus for women's voices with soprano solo and orchestra. I really didn't mean to make you wait so long for an answer to your letter of August 30th. etc. with a c. 4 and 5 of Op. giving me tremendously difficult problems to solve. 1943 I'm very sorry to be so overdue. and to play the piano variations (Op. 1. a double interlinking. one and four. . a bass aria. which have just appeared (in print). you directly I wanted to say a number of things to on my return from Winterthur. " Die Stille um den Bienenkorb in der Heimat" still sorry that hardly able to talk to each other alone! It did me a lot of good to be able to hear my piece. once again. 4 and 12. Hymn-like character. I move with complete freedom on ** the basis of an endless canon by inversion. a selection from the songs with piano Op. What you say about my orchestral variations gave me very genuine and equally so your plans for getting my music performed . I'm this time we were public also proved this! October 23rd. rather as Bach does with his theme in the aria is ternary. 6 in all (3 in each of Op. 3 (as far as Fm concerned those are 64 . even the most fragmented sounds must have a completely coherent effect. Another piece will soon be my finished. I've completed another piece as part of the plan I've you of several times. pleasure. but moreover sings the notes of the third backwards! So. the third (soprano) has the inversion of the second. and the fourth (bass) is the inversion of the first. Long note-values but very flowing tempo. and leave hardly anything to be desired as far as " " is concerned. I'm buried in work. a very close combination of the two types of presentation. 1943 Dear friend.the second part (alto) sings the notes of the first (tenor) backwards. 
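The canonic interlinking described in this letter can be checked mechanically for any row. The sketch below is mine, not Webern's: it uses an arbitrary illustrative line and a fixed inversion axis (pitch classes 0 to 11), builds the four parts as the letter prescribes, and confirms that the stated pairings by cancrizan and by inversion then hold of themselves.

```python
# Sketch with an arbitrary line (not Webern's actual pitches): the four-part
# canon. Alto sings the tenor backwards; soprano is the inversion of the alto;
# bass is the inversion of the tenor.

def invert(line, axis=0):
    # Reflect every pitch class around a fixed axis.
    return [(2 * axis - p) % 12 for p in line]

def backwards(line):
    # "Cancrizan": the line read in retrograde.
    return line[::-1]

tenor = [0, 2, 1, 5, 3, 8, 4, 10, 7, 11, 9, 6]  # illustrative twelve-note line
alto = backwards(tenor)
soprano = invert(alto)
bass = invert(tenor)

# The "double interlinking": the remaining pairings follow automatically,
# because note-by-note inversion commutes with reading a line backwards.
print(alto == backwards(tenor))    # True: one and two, cancrizan
print(bass == backwards(soprano))  # True: three and four, cancrizan
print(bass == invert(tenor))       # True: one and four, by inversion
print(soprano == invert(alto))     # True: two and three, by inversion
```

In other words, once parts two, three and four are derived as the letter says, the further cancrizan relation between parts three and four is not an extra constraint but a consequence.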
also one and two. diminution.32-bar theme of periodic structure. 3. 23. and for that reason it's also become still freer. namely that when that kind of unity is the basis. two and three (by inversion). Perhaps begin with Nos. That's to say. Isn't that so? I believe the effect on the comprehensibility But I really must revert to the subject of Winterthur. I'm all for the idea of giving the first performance of these songs (even they are nearly ten years old) at the concert you plan in Basel ." By variation. When I was with you in March. . Frau Gradmann already had the 3 Songs Op. as you know. August 6th. for goodness' sake! Please do fall in with this request! January 10/A. which I organised. You're anxious to know what happened here for the 3rd ML. your kind telegram. your letter. the piano variations Op. 2 and 4 of Op. how irrelevant. it would then last about an hour. whichever of Op. So for once on a Friday (course day) evening we had something rather more enjoyable than the usual intellectual refreshment. 12. 4 she prefers. I'm glad to tell you that for a long time I've been I was very glad full * Webern's 60th birthday.). which gave me so much pleasure. 1943. courageous. how utterly unimportant. and such a magnificent effort that I can't hope to say what I feel about it! So. as has already been seen. 1944 Dear friend. one with consequences). "Du". but I don't think Frail Gradmann has ever sung them). my heart overflowing with the finest feelings! And straight I should also like to express them by calling you it. and that would be quite adequate. the pieces for violin and piano Op. And that could make up the whole programme.. and that your initiative never flags! But above Holderlin all I was pleased on your account.C. now at last I can send my very heartfelt thanks for everything. 27. and perhaps 1. because it again reminded me in the best possible way of something I ought to thank you for once again and very specially. and she came on with us to Rate'. 
my dear friend.S. self-sacrificing loyalty! This 5th of December in Basel (an afternoon concert of the Basel section of the I. I shall go on and use for it makes things much more friendly. your unflinching. 11 and a brief address by myself. the Apostels and this gave me particular pleasure Frau part Helene (Alban Berg's widow.R.R. no: a performance! Don't even mention it that . I embrace you.) its success. with the first performances of the Songs Op. So two groups of 4-5 songs. December 3rd. we met at " " course those taking Rate' in the evening. 65 . That's how it was! February 13rd. it was in fact the day for the in the course.R. that was yet another gracious deed on your part (and.M. W. to live means to defend a form puts it in some such way. 7. who was all ready with a splendid buffet. only one thing: don't tie yourself to the date mentioned!* Don't make it a direct birthday celebration no. my dear Reich! No longer! As for the date. W. 23. 1944 to have your letter of February 1st! It again showed me how of splendid plans you are. We my wife and I had already been to her in Hietzing in the afternoon. the pieces for violoncello and piano Op.the ones I'd like. W. W. i. compared with those of judged by the impressions they make. 1944 Naturally you can keep the score of my 2nd Cantata to study as long as you need it! What will you say about it? If. a ** concerto " (in several movements for a number of instruments). and I'm particularly . Webera's last completed work. .f for soprano and bass soli. Sehet. how well it suits the structure of the text. other works lack passage the Greeks. I'd already started to me (I'd sensed it " cantata ": Cantata No.. halves. die Farben stehen aufl" The poetic form will be matched by something correspondingly long and unified. What a lot those conductor gentlemen gards the sound! I can only say miss!" I'm very glad that you're now taking up the cudgels on their behalf! I think people will be amazed! . 
If I come to write another vocal work it will be quite different! At the moment I'm writing a purely instrumental piece. at least until now they've been infallibility. 2. W. Imagine the effect " in his notes on the translation of Oedipus: Again. . so that the constant regroupings (tutti. possible. choir and orchestra. soli) stand out in a As reIt's indeed turned into something quite new! clearly audible way. on a seventh piece when it became clear already) that the six pieces I'd completed made a musical whole. interested in solving this.R. too. Duration half an hour.on me when I found this intensely interested in this poet. 5* will be sent to you It should be played by as large a body of strings as as soon as possible. either as part of a larger work or on their own. ** Schoenberg's 70th birthday. rather than by their ordered calculus and 9 Need I even say why all the other procedures by which beauty is produced: I was so struck by the passage? The score of my string-orchestral arrangement of Op.e. You'll see. for example. you show them what the score of the sixth piece looks like? The sketches I made for an instrumental piece I wrote to you about them have turned into a setting of a very long poem by Hildegarde Jone: " Das Sonnenlicht spricht. my unspeakable longing! But also my unwearying hopes for a happy future! * 5 Pieces for String Quartet. W. dating t from 1909. How will you celebrate September 1 3th ?** Pass on my deepest remembrances. . May 6th. which possess me night and day. . I made some minor changes of order and grouped the six pieces as a " As for my work. 66 . arranged in 1930.R.R. Again for soli and choir (with orchestra). I decided on the latter. Webern hoped for was denied him in the flesh by his premature. tragic end. lofty works and his effect on the younger generation have fulfilled his hopes in a higher sense. The " happy " Zurich. end of March 1960 67 . but the present triumph of his uncompromising. Willi Reich. 
which he foresaw in his humble self-abnegation and proud future assurance. Printed in England by JOHN BLACKBURN LTD. LEEDS LDC/66 ..
How to create a Real-Time Twitter Stream with SignalR

With Twitter as popular as ever, today's post shows you how to create a real-time Twitter stream with SignalR. SignalR has become a breakthrough technology that all ASP.NET developers want to include in their web applications. It blurs the line between a web application and a desktop application, and makes your users sit up and take notice of a dynamic web page that shows movement instead of static text and images. I've always found SignalR a welcome addition to my tool belt for building real-time applications, and I like to include a lot of SignalR demos to show what can be done with the technology, like the WebGrid editing or the real-time Like button. So today, I wanted to get into building an asynchronous, real-time Twitter stream that runs on your web site. This specific stream is meant to grab the tweets on your timeline and display them as they come in.

Setup

Initially, I went with the standard MVC application option (File -> New Project -> ASP.NET Web Application -> ASP.NET 4.5.2 Templates -> MVC). After running an update-package, I installed the following packages:

After those packages were installed, we are ready to lay the groundwork for our Twitter stream. For SignalR to work, an app.MapSignalR() call is required in the Startup.cs file in the root of your project, as mentioned in the readme.txt shown when you install-package SignalR.

Startup.cs

using Microsoft.Owin;
using Owin;

[assembly: OwinStartupAttribute(typeof(SignalRTwitterDemo.Startup))]
namespace SignalRTwitterDemo
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
            ConfigureAuth(app);
        }
    }
}

Once the line (app.MapSignalR();) was included in the Startup, we can move towards the UI.

The Home Index

The user interface isn't anything to get excited about. I stripped off the extra DIVs and kept the menu at the top.
The screen has two buttons on it: one to activate the Twitter stream and one to turn it off.

/Views/Home/Index.cshtml

```html
@{
    ViewBag.Title = "Home Page";
}
<button id="firehoseOn" class="btn btn-success">Turn on the firehose</button>
<button id="firehoseOff" class="btn btn-danger">Turn it off! Turn it off!</button>
<label id="firehoseStatus"></label>
<div class="tweets"></div>
@section scripts {
    <script src="/Scripts/jquery-2.2.3.min.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.signalR-2.2.0.min.js" type="text/javascript"></script>
    <script src="/signalr/hubs" type="text/javascript"></script>
    <script src="/Scripts/twitterLive.js" type="text/javascript"></script>
}
```

I also added a "firehoseStatus" label to show whether the stream is running, and the div.tweets will contain all of our tweets happening in real time. The first three scripts are standard and required for SignalR to run properly. The fourth file, twitterLive.js, is our own homebrew JavaScript. We'll get back to that in a minute. For now, let's focus on the server-side functionality.

## Stream and Data

One of the issues I ran into with this particular technique was keeping track of a Twitter stream and knowing how to shut it off. Since SignalR is asynchronous and we were making asynchronous calls to Twitter, this caused a bit of a problem. After doing some research, I found the ConcurrentDictionary class and decided to use it to keep track of the Twitter data along with cancellation tokens. A CancellationToken is a notification that propagates to other operations, letting them know that they should be canceled.

So we need to contain the data in a class of some kind.

Hubs\TwitterTaskData.cs

```csharp
public class TwitterTaskData
{
    public string Id { get; set; }
    public string Status { get; set; }
    public IOEmbedTweet Tweet { get; set; }

    [JsonIgnore]
    public CancellationTokenSource CancelToken { get; set; }
}
```

The IOEmbedTweet is a bonus from TweetInvi.
When a tweet is received, TweetInvi creates a Twitter card and sends the HTML as an IOEmbedTweet. Once we create it, we just send it over to the client and have the client display it. This leads us to our TwitterStream class.

Hubs\TwitterStream.cs

```csharp
public static class TwitterStream
{
    private static IUserStream _stream;
    private static readonly IHubContext _context =
        GlobalHost.ConnectionManager.GetHubContext<TwitterHub>();

    public static async Task StartStream(CancellationToken token)
    {
        // Go to apps.twitter.com to get your own tokens.
        Auth.SetUserCredentials("CONSUMER_KEY", "CONSUMER_SECRET",
            "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");

        if (_stream == null)
        {
            _stream = Stream.CreateUserStream();

            // Other events can be used. This one fires on YOUR twitter feed.
            _stream.TweetCreatedByAnyone += async (sender, args) =>
            {
                if (token.IsCancellationRequested)
                {
                    _stream.StopStream();
                    token.ThrowIfCancellationRequested();
                }

                // Let's use the embedded tweet from TweetInvi.
                var embedTweet = Tweet.GenerateOEmbedTweet(args.Tweet);
                await _context.Clients.All.updateTweet(embedTweet);
            };

            // If anything changes the state, update the UI.
            _stream.StreamPaused += async (sender, args) =>
            {
                await _context.Clients.All.updateStatus("Paused.");
            };
            _stream.StreamResumed += async (sender, args) =>
            {
                await _context.Clients.All.updateStatus("Streaming...");
            };
            _stream.StreamStarted += async (sender, args) =>
            {
                await _context.Clients.All.updateStatus("Started.");
            };
            _stream.StreamStopped += async (sender, args) =>
            {
                await _context.Clients.All.updateStatus("Stopped (event)");
            };

            await _stream.StartStreamAsync();
        }
        else
        {
            _stream.ResumeStream();
        }

        await _context.Clients.All.updateStatus("Started.");
    }
}
```

At the beginning of this method, you'll notice a placeholder for your Twitter credentials. Set your credentials using the tokens from apps.twitter.com and place those in the code (no, I'm not giving you mine). ;-)

Next, we set up an event to receive a tweet when created by anyone.
However, if we receive a cancellation, we need to stop processing immediately. When we receive the tweet, we create an embedded tweet and send it to the client's updateTweet method. We also want our users to have a good user experience, so we add events for when the Twitter stream is paused, resumed, started, or stopped. Finally, we kick off an asynchronous stream and send a status of "Started." to the client through the updateStatus JavaScript.

## Making the Hub

The hub is the most important piece and also the easiest once everything is in place. All that's needed for this to function is the ConcurrentDictionary defined at the top and our Twitter start and stop methods.

Hubs\TwitterHub.cs

```csharp
[HubName("twitterHub")]
public class TwitterHub : Hub
{
    private static ConcurrentDictionary<string, TwitterTaskData> _currentTasks;

    private ConcurrentDictionary<string, TwitterTaskData> CurrentTasks
    {
        get
        {
            return _currentTasks ??
                (_currentTasks = new ConcurrentDictionary<string, TwitterTaskData>());
        }
    }

    public async Task StartTwitterLive()
    {
        var tokenSource = new CancellationTokenSource();
        var taskId = string.Format("T-{0}", Guid.NewGuid());

        CurrentTasks.TryAdd(taskId, new TwitterTaskData
        {
            CancelToken = tokenSource,
            Id = taskId,
            Status = "Started."
        });

        await Clients.Caller.setTaskId(taskId);

        var task = TwitterStream.StartStream(tokenSource.Token);
        await task;
    }

    public async Task StopTwitterLive(string taskId)
    {
        if (CurrentTasks.ContainsKey(taskId))
        {
            CurrentTasks[taskId].CancelToken.Cancel();
        }

        await Clients.Caller.updateStatus("Stopped.");
    }
}
```

You may be wondering why we send the taskId to the client. This is to make sure we can match a taskId to a running Twitter stream in case the user wants to turn it off.

## Finally, the JavaScript

The SignalR JavaScript is just a collection of client-side counterparts to the server-side C# methods.
Scripts\twitterLive.js

```javascript
$(function () {
    var twitterHub = $.connection.twitterHub;

    twitterHub.client.setTaskId = function (id) {
        $("#firehoseOff").attr("data-id", id);
    };

    twitterHub.client.updateStatus = function (status) {
        $("#firehoseStatus").html(status);
    };

    twitterHub.client.updateTweet = function (tweet) {
        $(tweet.HTML)
            .hide()
            .prependTo(".tweets")
            .fadeIn("slow");
    };

    $("#firehoseOn").on("click", function () {
        twitterHub.server.startTwitterLive();
    });

    $("#firehoseOff").on("click", function () {
        var id = $(this).attr("data-id");
        twitterHub.server.stopTwitterLive(id);
    });

    $.connection.hub.start();
});
```

As you can see, setTaskId targets the #firehoseOff button and sets its data-id attribute to the taskId, and updateStatus sets the status label via the html function. Remember the OEmbedTweet? updateTweet receives that object, whose HTML property we turn into a jQuery object, hide, prepend to div.tweets, and fade in slowly.

The firehoseOn button calls the C# method StartTwitterLive and kicks off the Twitter stream, while the firehoseOff button grabs the taskId off its data-id attribute and sends it over to the C# StopTwitterLive method.

## Conclusion

Today, I presented a way to make asynchronous calls from SignalR to show a real-time Twitter stream. You can easily adapt the code to drive a progress bar, a file upload, or another task that works in the background. While threading is not my strong suit, I was happy to get this working to share with my readers.

Did you follow this project? Would you have done it differently? Post your comments below. Always love a good discussion.
https://www.danylkoweb.com/Blog/how-to-create-a-real-time-twitter-stream-with-signalr-FB
In the grand old tradition of C programming books, I'll begin by showing you how to display "Hello, World!" from a Java program. However, before I proceed, I should point out the difference between three types of Java programs:

- Standalone Java application: a Java program that can be executed by a Java interpreter. A standalone Java application is just like the standalone C or C++ programs that you might know. The application has a main method, and the java interpreter can run the application. General-purpose Java applications take the form of standalone applications.
- Java applet: a Java class that can be loaded and executed by the appletviewer program or by a Java-capable Web browser. You have to first embed the applet inside an HTML document using the <applet> tag and then load that HTML document to activate the applet. As a Webmaster, you'll mostly write Java applets.
- Java servlet: a Java class loaded and executed by a Web server (or a server that helps the Web server, such as the Apache Tomcat server). To develop servlets, you need the Java 2 Enterprise Edition.

Whether you write a standalone application, an applet, or a servlet, the basic steps are similar except for the last step, when you execute the program. You use the java interpreter to run standalone programs and appletviewer to test applets. For servlets, you simply place them in a specific directory and insert HTML code to refer to them. The development steps are as follows:

1. Use a text editor to create the Java source file. A Java application is a class (a type of object: a collection of data and methods). The source file's name must be the class name with a .java extension.
2. Process the source file with javac (the Java compiler) to generate the class file with a .class extension.
3. If it's a standalone application, run the class with the java interpreter using the following command:

       java classname

   If the program is an applet, create an HTML document and embed the applet using the <applet> tag (see 'The <applet> Tag' section for the syntax). Then, load that HTML document using appletviewer or a Java-capable Web browser. When the browser loads the HTML document, it also activates all embedded Java applets.

The classic C-style "Hello, World!" application is easy to write in Java, because many of Java's syntactical details are similar to those of C and C++. The following code implements a simple "Hello, World!" program in Java (you have seen this in an earlier section, but it's so short that I'll just show it again):

```java
public class HelloWorld
{
    public static void main(String[] args)
    {
        System.out.println("Hello, World!");
    }
}
```

Even this simple Java program illustrates some key features of Java:

- Every Java program is a public class definition.
- A standalone application contains a main method that must be declared public static void. The interpreter starts execution at the main method.

Save this program in a file named HelloWorld.java. Then, compile it with the following command:

    javac HelloWorld.java

The javac compiler creates a class file named HelloWorld.class. To run the application, use the Java interpreter and specify the class name as a parameter, like this:

    java HelloWorld

The program prints the following output:

    Hello, World!

The other model of a Java program is the applet, which runs inside the appletviewer program or a Java-capable Web browser. Specifically, a Java applet is a subclass of the Applet class, which builds on the Java Abstract Windowing Toolkit (AWT). In an applet, you do not have to provide a main method. Instead, you provide a paint method where you place code to draw in an area of a window. You can use the applet model to implement GUIs and other graphical programs. For a "Hello, World!"
applet, I'll do the following:

- Instead of displaying the message in a default font, pick a specific font to display the message.
- Use the information about the font sizes to center the message within the area where the applet is required to display its output.
- Draw the text in red instead of the default black color.

Listing 26-1 shows the Java code that implements the "Hello, World!" applet.

```java
//---------------------------------------------------------------
// File: HelloWorld.java
//
// Displays "Hello, World!" in Helvetica font and in red color.
//---------------------------------------------------------------
import java.applet.*;
import java.awt.*;

//---------------------------------------------------------------
// H e l l o W o r l d
//
// Applet class to display "Hello, World!"
public class HelloWorld extends java.applet.Applet
{
    String hellomsg = "Hello, World!";

    //-----------------------------------------------------------
    // p a i n t
    //
    // Method that paints the output
    public void paint(java.awt.Graphics gc)
    {
        // Draw a rectangle around the applet's bounding box
        // so we can see the box.
        gc.drawRect(0, 0, getSize().width - 1, getSize().height - 1);

        // Create the font to be used for the message.
        Font helv = new Font("Helvetica", Font.BOLD, 24);

        // Select the font into the Graphics object.
        gc.setFont(helv);

        // Get the font metrics (details of the font size).
        FontMetrics fm = gc.getFontMetrics();
        int mwidth = fm.stringWidth(hellomsg);
        int ascent = fm.getAscent();
        int descent = fm.getDescent();

        // Compute the starting (x,y) position to center the string.
        // The getSize() method returns the size of the applet's
        // bounding box.
        int xstart = getSize().width / 2 - mwidth / 2;
        int ystart = getSize().height / 2 + ascent / 2 - descent / 2;

        // Set the color to red.
        gc.setColor(Color.red);

        // Now draw the string.
        gc.drawString(hellomsg, xstart, ystart);
    }
}
```

By browsing through this code, you can learn a lot about how to display graphics output in Java.
Here are the key points to note:

- The import statement lists external classes that this program uses. The name that follows the import statement can be the name of a class or a name with a wildcard (*), which tells the Java compiler to import all the classes in a package. This example uses the Applet class as well as a number of graphics classes in the java.awt package.
- The HelloWorld applet is defined as an extension of the Applet class. That's what the statement public class HelloWorld extends java.applet.Applet means.
- An applet's paint method contains the code that draws the output. The paint method receives a Graphics object as an argument. You have to call methods of the Graphics object to display output.
- The getSize method returns the size of the applet's drawing area.
- To use a font, you have to first create a Font object and then call the setFont method of the Graphics object.
- To draw text in a specific color, invoke the setColor method of the Graphics object with an appropriate Color object as an argument.

If you know C++, you'll notice that Java's method invocation is similar to the way you call a member function of a C++ object. Indeed, there are many similarities between C++ and Java.

Save the listing in a file named HelloWorld.java. Then, compile it with the command:

    javac HelloWorld.java

This step creates the applet class file: HelloWorld.class. To test the applet, you have to create an HTML document and embed the applet in that document, as shown in the following example:

```html
<html>
<head>
<title>Hello, World! from Java</title>
</head>
<body>
<h3>"Hello, World!" from Java</h3>
A Java applet is given an area where it displays its output.
In this example, the applet draws a border around its assigned
area and then displays the "Hello, World!" message centered in
that box.
<br>
<applet code=HelloWorld width=200 height=60>
If you see this message, then you do not have a Java-capable browser.
</applet>
Here is the applet!
</body>
</html>
```

As this HTML source shows, you have to use the <applet> tag to insert an applet in an HTML document. In the 'Using the <applet> Tag' section, you'll learn the detailed syntax of the <applet> tag.

You can use two tools to test the applet. The first one is appletviewer, which comes with the JDK. To view the HelloWorld applet, run the appletviewer program and provide the name of the HTML document that includes the applet. Suppose that the HTML document is in the file named hello.html. Then, you'd run appletviewer with the following command:

    appletviewer hello.html

Figure 26-1 shows how appletviewer displays the applet. Notice that appletviewer displays only the applet; the rest of the text in the HTML document is ignored.

However, the appearance is quite different in a Java-capable Web browser. To view the applet in a Web browser, start the Mozilla Web browser and select File > Open File from the menus. From the Open File dialog, go to the directory where you have the hello.html file and the HelloWorld.class file (for the applet). Then, select the hello.html file, and click Open. Mozilla then renders the HTML document containing the HelloWorld applet. Unlike appletviewer, Mozilla displays the entire HTML document, and the Java applet appears embedded in the document just like an image. The applet draws inside the rectangle assigned to it through the width and height attributes of the <applet> tag.
http://etutorials.org/Linux+systems/red+hat+linux+9+professional+secrets/Part+V+Programming+Red+Hat+Linux/Chapter+26+Java+Programming/Writing+Your+First+Java+Program/
Is it possible to start a Node.js app from within a Python script on a Raspberry Pi? On the command line I run:

    sudo node myscript.js

The first line of the file should be:

    #!/usr/bin/python

You can call the command with subprocess.call:

```python
from subprocess import call

# Note that you have to specify the path to the script
call(["node", "path_to_script.js"])
```

Then you have to set +x permissions for the file to be executable:

    chmod +x filename.py

Now you are ready to go:

    ./filename.py

Note: check out the Raspberry Pi Stack Exchange; you can find a lot of useful info there.
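The subprocess.call approach above blocks until the Node script exits and throws its output away. If you also want to capture what the script prints, a small sketch along the same lines (assuming Python 3.7+ for capture_output, `node` on the PATH, and a hypothetical helper/script name):

```python
import shutil
import subprocess


def run_node_script(script_path):
    """Run a Node.js script and return its captured stdout.

    Assumes the `node` executable is on the PATH; `script_path` is
    whatever .js file you want to launch (hypothetical here).
    """
    node = shutil.which("node")
    if node is None:
        raise RuntimeError("node executable not found on PATH")
    # check=True raises CalledProcessError if the script exits non-zero,
    # so a crashing Node script surfaces as a Python exception.
    result = subprocess.run(
        [node, script_path],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

If the script really needs root (as with `sudo node myscript.js`), you would either prepend "sudo" to the argument list or run the Python script itself with sudo.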
https://codedump.io/share/Pipew69GPNZ2/1/start-node-app-from-python-script
Why are there two types of multidimensional arrays? What is the difference between the (x)(y) and the (x,y) notation?

```vbscript
dim arrBar
arrBar = Array(Array(1,2), Array(3,4))
response.write arrBar(0)(0) & "<BR>"
response.write arrBar(0)(1) & "<BR>"
response.write arrBar(1)(0) & "<BR>"
response.write arrBar(1)(1) & "<BR>"

dim arrFoo(1,1)
arrFoo(0,0) = 1
arrFoo(0,1) = 2
arrFoo(1,0) = 3
arrFoo(1,1) = 4
response.write arrFoo(0,0) & "<BR>"
response.write arrFoo(0,1) & "<BR>"
response.write arrFoo(1,0) & "<BR>"
response.write arrFoo(1,1) & "<BR>"

' These don't work:
response.write arrFoo(0)(0)
response.write arrBar(0,0)
```

Google really brought me here; lots of the answers are explained in a simple way. Thanks, your website helped.

I understand the issues with converting an existing JScript array to a VBArray. But would it be possible to include within JScript a straightforward way to create a new VBArray? For example, I am trying to execute IADs::GetInfoEx( <vbarray>, 0 ). I have tried every which way to create a valid VBArray this method will actually accept, such as using the Scripting.Dictionary Items() method. No such luck. In the end I had to convert the script to WSF and write a VB function to handle this part, while the rest is in JScript. It would be nice to natively and easily create these types of objects. Thanks, Daren

Yes, it would be nice. Unfortunately, I never did get around to implementing that in JScript Classic. Sorry about that. I did add much better support for arrays in JScript .NET, though. (Small consolation, I know.)

I'm one of those crazed users who's trying to figure out how to pass a JScript array argument to C# code. If I cast the argument as an object, its type is System.__ComObject, so I assume I need to marshal it somehow.
I currently am exposing JScript methods to my C# code by implementing a subclass of IDispatch, called ICustomMethods, and registering my C# application:

```csharp
public void GetExternal([MarshalAs(UnmanagedType.IDispatch)] out object ppDispatch)
{
    ppDispatch = this as ICustomMethods;
}
```

From JScript, it just requires calling window.external.MyCustomMethod(...). So, the problem is that one of these JScript methods I need to support has a JScript array argument, and I don't know how to marshal it. Any ideas? Thanks much! David

A JScript array is a dispatch object which, when invoked on its indexes, gets/sets the results. In C++, what you'd do to simulate arr[123], for instance, is get the dispid for "123" and invoke on that dispid. How you're going to do that in C# beats the heck out of me. We designed JScript arrays years before C# was a gleam in Anders' eye. If you figure something out, let me know.

Something like this works for me. In the HTML file's JavaScript:

```javascript
window.external.ArrayReceivingFunction(['Arg1', 'Arg2', 'Arg3' /*, etc. */]);
```

My C# "window.external" class:

```csharp
public class WindowExternal
{
    public void ArrayReceivingFunction(object argArray)
    {
        object[] objArr = DispatchHelpers.GetObjectArrayFrom__COMObjectArray(argArray);
    }
}
```

My C# DispatchHelpers class:

```csharp
internal class DispatchHelpers
{
    internal static object[] GetObjectArrayFrom__COMObjectArray(object comObject)
    {
        int length = Convert.ToInt32(GetPropertyValue(comObject, "length"));
        object[] objArr = new object[length];
        for (int idx = 0; idx < length; idx++)
        {
            objArr[idx] = GetPropertyValue(comObject, idx.ToString());
        }
        return objArr;
    }

    internal static object GetPropertyValue(object dispObject, string property)
    {
        object propValueRef = null;
        try
        {
            Type type = dispObject.GetType();
            propValueRef = type.InvokeMember(property, BindingFlags.GetProperty,
                null, dispObject, null);
        }
        catch (COMException) { /* Throws COMException on invalid property */ }
        return propValueRef;
    }
}
```

By the way, Eric, your info "A JScript array is a dispatch
object which when invoked on its indexes, gets/sets the results" was invaluable assistance.

Hey, so I think I managed to hack together an okay solution. It requires a bit of juggling, but it manages to preserve elements' datatypes (for primitives only). Basically, you create VBScript functions for manipulating the VBScript array, and then a JScript function which calls these manipulation functions. Here's the VBScript code:

```vbscript
' Return a blank dynamic array
function getBlankArray()
    getBlankArray = Array()
end function

' Append arr with the specified val
function appendArray(arr, val)
    redim preserve arr(ubound(arr) + 1)
    arr(ubound(arr)) = val
    appendArray = arr
end function
```

And here's the JScript portion. I extended the Array class, but you could use a separate function.

```javascript
Array.prototype.toVBArray = function () {
    // Call our VBScript function to give us a blank VBArray
    var vbarray = getBlankArray();
    for (var i = 0; i < this.length; i++) {
        // Use our VBScript manipulation function to append the ith item
        vbarray = appendArray(vbarray, this[i]);
    }
    return vbarray;
};
```

With that in place, you should be able to convert simple JScript arrays to VBArrays, a la:

```javascript
var tmp = [1, 2, 3.14, 'hi mom'];
tmp.toVBArray();
```

Playing with JScript here, and wondering if there's a way to get at the KEYS of a JScript associative array from VBScript. MSE7 seems to know what the keys are, and using For Each I seem to be able to iterate the VALUES in the array, but I can't seem to hit the keys, though the IDE seems to know they're there. Am I out of luck with this one? I know it's probably a long shot...

You are out of luck. There is no equivalent to the JScript for-in in VBScript. Sorry!

The method from Bim Jim seems good; however, I received objects as System.__ComObject by reference and I don't know how to get their value (as a string, for example). Does someone have an idea?
Adding to agent009's JScript-array-to-VBArray function: I just access JScript arrays using dot notation from VBScript.

```vbscript
x.[0]  ' Access the first element of a JScript array within VBScript
```

Works on XP SP3 / IE 8, anyway.
https://blogs.msdn.microsoft.com/ericlippert/2003/09/22/jscript-and-vbscript-arrays/