My last React post was a while back; in my day job I spend most of my time writing React applications, so let's go back to React to have some fun. D3 is actually really nice, and React itself works really well with SVG on its own. However, when you combine React and D3 you run into the question of "who is managing the DOM?" Since both React and D3 manipulate the DOM, if we are not careful we can lose all the benefits of React (its virtual DOM and diffing). So we have two powerful libraries fighting over control of UI state and the DOM.
The solution to this problem lies in allowing React + Redux to manage the overall DOM + state of the application while allowing D3 components to manage their own little area of the DOM and their own state (like a tiny stateful component).
To explain this we will create a React component, that can be used in any React application and within the component we will allow D3 to create and animate the underlying SVG. So let's build a simple animated loading indicator. Here is the code -
import * as React from 'react';
import * as d3 from 'd3';

interface Props {
  visible: boolean;
}

interface StateProps {
  x: number;
  timer: number;
}

class LoadingSpinner extends React.Component<Props> {
  // tslint:disable-next-line:no-any
  svgRef: React.RefObject<any>;
  state: StateProps;

  constructor(props: Props) {
    super(props);
    this.svgRef = React.createRef();
    this.state = {
      x: 1,
      timer: window.setInterval(() => props.visible && this.updateState(), 100)
    };
  }

  updateState() { // ***** line 28 *****
    this.setState({ x: this.state.x <= 6 ? this.state.x + 1 : 1 });
    d3.select(this.svgRef.current)
      .select('#loading-spinner-rotator path')
      .transition()
      .duration(100)
      .attr('transform', 'translate(100, 100)')
      .attr('transform', `rotate(${45 * this.state.x})`);
  }

  componentDidMount() { // ***** line 41 *****
    const svgWidth = 35, svgHeight = 35;
    const svg = d3
      .select(this.svgRef.current)
      .attr('width', svgWidth)
      .attr('height', svgHeight);
    const arcBase = d3.arc()
      .innerRadius(10)
      .outerRadius(15)
      .startAngle(0)
      .endAngle(2 * Math.PI);
    const arcRotator = d3.arc()
      .innerRadius(10)
      .outerRadius(15)
      .startAngle(0)
      .endAngle(0.25 * 2 * Math.PI);
    svg
      .append('g')
      .attr('transform', 'translate(20, 20)')
      .append('path')
      .attr('d', arcBase)
      .attr('fill', '#ccc');
    svg
      .append('g')
      .attr('id', 'loading-spinner-rotator')
      .attr('transform', 'translate(20, 20)')
      .append('path')
      .attr('d', arcRotator)
      .attr('fill', '#F76560');
  }

  componentWillUnmount() {
    clearInterval(this.state.timer);
  }

  render() {
    if (!this.props.visible) {
      return <span />;
    }
    return ( // ***** line 85 *****
      <svg id="loading-spinner" ref={this.svgRef} />
    );
  }
}

export default LoadingSpinner;
The core logic here is -
- Line 85: With React we create a placeholder SVG.
- Line 41: With the component mounted, we allow d3 to take over, we use a React ref to get hold of the SVG DOM and start creating arcs etc.
- Line 28: Using localized state and a timer, we update the SVG periodically with d3 and add some simple transitions.
I do not recommend connecting the local state of the d3 component to Redux, since this state is pretty isolated and only manages a small part of the DOM for animation (and it has no business logic either). The end result is a nice "Ironman" like animated SVG loader, written in pure JavaScript, that can be used as a React component anywhere.
You can also check out the working code here.
What is cool about this approach is that we can create highly reusable & customizable D3 components, e.g. a graph component that takes in data as React props and outputs a clean SVG bar graph built with d3.
For example, I have a handler:
@Component
public class MyHandler {
    @Autowired
    private MyDependency myDependency;

    public int someMethod() {
        ...
        return anotherMethod();
    }

    public int anotherMethod() { ... }
}
To test it, I want to write something like this:
@RunWith(MockitoJUnitRunner.class)
public class MyHandlerTest {
    @InjectMocks
    private MyHandler myHandler;

    @Mock
    private MyDependency myDependency;

    @Test
    public void testSomeMethod() {
        when(myHandler.anotherMethod()).thenReturn(1);
        assertEquals(1, myHandler.someMethod());
    }
}
But it actually calls anotherMethod() whenever I try to mock it. What should I do with myHandler to mock its methods?
First of all, the reason for mocking MyHandler's methods can be the following: we already test anotherMethod() and it has complex logic, so why do we need to test it again (as a part of someMethod()) if we can just verify that it is called?
We can do it through:
@RunWith(MockitoJUnitRunner.class)
public class MyHandlerTest {
    @Spy
    @InjectMocks
    private MyHandler myHandler;

    @Mock
    private MyDependency myDependency;

    @Test
    public void testSomeMethod() {
        doReturn(1).when(myHandler).anotherMethod();
        assertEquals(1, myHandler.someMethod());
        verify(myHandler, times(1)).anotherMethod();
    }
}
In your code, you are not testing MyHandler at all. You don't want to mock what you are testing, you want to call its actual methods. If MyHandler has dependencies, you mock them.
Something like this:
public interface MyDependency {
    public int otherMethod();
}

public class MyHandler {
    @Autowired
    private MyDependency myDependency;

    public void someMethod() {
        myDependency.otherMethod();
    }
}
And in test:
private MyDependency mockDependency;
private MyHandler realHandler;

@Before
public void setup() {
    mockDependency = Mockito.mock(MyDependency.class);
    realHandler = new MyHandler();
    realHandler.setDependency(mockDependency); // but you might Springify this
}

@Test
public void testSomeMethod() {
    // specify behaviour of mock
    when(mockDependency.otherMethod()).thenReturn(1);
    // really call the method under test
    realHandler.someMethod();
}
The point is to really call the method under test, but mock any dependencies it may have (e.g. calls to methods of other classes).
If those other classes are part of your application, then they'd have their own unit tests.
NOTE the above code could be shortened with more annotations, but I wanted to make it more explicit for the sake of explanation (and also I can't remember what the annotations are :) ) | http://m.dlxedu.com/m/askdetail/3/f94d3728685f35e7e8da18af805cfc32.html | CC-MAIN-2018-30 | en | refinedweb |
The queries outstanding for the libunbound resolver.
#include <context.h>
The queries outstanding for the libunbound resolver.
These are outstanding for async resolution, but also for sync resolution by one of the threads that has joined the thread pool.
answer message, result from resolver lookup.
Referenced by add_bg_result(), context_query_delete(), libworker_do_cmd(), process_answer_detail(), ub_cancel(), ub_resolve(), ub_resolve_async(), and ub_stop_bg().
resulting message length.
Referenced by add_bg_result(), process_answer_detail(), and ub_resolve().
result structure, also contains original query, type, class.
malloced ptr ready to hand to the client.
Referenced by add_bg_result(), context_query_delete(), context_serialize_answer(), context_serialize_new_query(), libworker_fillup_fg(), process_answer_detail(), setup_qinfo_edns(), and ub_resolve(). | http://www.unbound.net/documentation/doxygen/structctx__query.html | CC-MAIN-2018-30 | en | refinedweb |
Basically what this means is you are missing project templates for the project type you are trying to load. This could be the notorious Web Application Project type that was not a default project type in Visual Studio 2005 or it could be something like the Visual Studio extensions for .NET Framework 3.0. Either way you need to find out what project type you are missing and then get it installed.
Check your csproj file
- Open the csproj file in Notepad.
- Find the <ProjectTypeGuids> node. It is under <PropertyGroup>.
- This is the GUID that corresponds to a project template type.
- If you are fortunate enough to have access to the machine where the project was created, you can open the registry and then find what type of project template it is. Otherwise you can try Google or consult my ongoing list below.
Check your Registry
- Open Start - Run, type regedit, and click OK.
- Navigate to: HKLM\SOFTWARE\Microsoft\VisualStudio\8.0\Projects
- Here you will find a list of all your project templates that are installed. (Note: These are all installed in "[Install Directory]\Program Files\Microsoft Visual Studio 8\Common7\IDE\ProjectTemplates")
- The Default string value data is the name of the project you need.
Alternate Resolution
If you have access to both machines you are trying to synchronize, one alternative is to open Visual Studio and click on About Microsoft Visual Studio from the Help menu. From here you can compare versions and installed products. However, you may end up spending a lot of time installing various products when you only need one thing installed.
Get List of Installed Visual Studio Project Templates using C#
This is a little C# code snippet you can use to get a list of installed project templates:
using Microsoft.Win32;

private string GetProjectTemplateTypes()
{
    string projectTemplateRegDir = @"SOFTWARE\Microsoft\VisualStudio\8.0\Projects";
    string[] projectGuids =
        Registry.LocalMachine.OpenSubKey(projectTemplateRegDir).GetSubKeyNames();
    string projectTemplateList = "";
    foreach (string projectGuid in projectGuids)
    {
        projectTemplateList +=
            projectGuid + "\t" +
            Registry.LocalMachine.OpenSubKey(projectTemplateRegDir +
                "\\" + projectGuid).GetValue("") + "\r\n";
    }
    return projectTemplateList;
}
Visual Studio Project Template GUID List
This is a list of project templates I have installed on my machine. This is not a complete list, but gives you all of the basic ones plus some of extras like .NET 3.0 templates. If you have any new ones, please send them and I will add them.
You are right that the "project type is not supported" error provides insufficient information. I always felt like banging my head whenever this error was shown, but to no avail. Your blog is a real help with this.
Hi,
I'm creating a stack to save math operations that have occurred so that I can use an undo button to remove those operations. I followed this implementation: tionscript/#comment-71
I created two actionscript class files Node.as and Stack.as like what was done in the link but I do not know how to create a variable of that class in the Script area of my .mxml file. My code looks like this:
<fx:Script>
<![CDATA[
import dataStore.Node;
import Stack;
var storage:dataStore.Stack;
]]>
</fx:Script>
I was trying to create a variable called storage of type Stack but I get an error saying "1046: Type was not found or was not a compile-time constant: Stack."
Any information on how to correctly create a variable of a stack class in the Script of my main MXML file would be very helpful.
Thanks
According to your code, "Stack" is in your "src" (root) project folder, not the "dataStore" folder (the package name is incorrect). So you'd use "var storage:Stack;". | https://forums.adobe.com/thread/822096 | CC-MAIN-2018-30 | en | refinedweb |
import java.util.*;

interface Pet {
  void speak();
}

class Rat implements Pet {
  public void speak() { System.out.println("Squeak!"); }
}

class Frog implements Pet {
  public void speak() { System.out.println("Ribbit!"); }
}

public class InternalVsExternalIteration {
  public static void main(String[] args) {
    List<Pet> pets = Arrays.asList(new Rat(), new Frog());
    for(Pet p : pets) // External iteration
      p.speak();
    pets.forEach(Pet::speak); // Internal iteration
  }
}
The for loop represents external iteration and specifies exactly how it is done. This kind of code is redundant, and duplicated throughout our programs. With the forEach, however, we tell it to call speak() (here, using a method reference, which is more succinct than a lambda) for each element, but we don't have to specify how the loop works. The iteration is handled internally, inside the forEach.
This "what, not how" is the basic motivation for lambdas. But to understand closures, we must look more deeply into the motivation for functional programming itself.

Functional Programming
Lambdas/Closures are there to aid functional programming. Java 8 is not suddenly a functional programming language, but (like Python) now has some support for functional programming on top of its basic object-oriented paradigm.
The core idea of functional programming is that you can create and manipulate functions, including creating functions at runtime. Thus, functions become another thing that your programs can manipulate (instead of just data). This adds a lot of power to programming.
A pure functional programming language includes other restrictions, notably data invariance. That is, you don't have variables, only unchangeable values. This sounds overly constraining at first (how can you get anything done without variables?) but it turns out that you can actually accomplish everything with values that you can with variables (you can prove this to yourself using Scala, which is itself not a pure functional language but has the option to use values everywhere). Invariant functions take arguments and produce results without modifying their environment, and thus are much easier to use for parallel programming because an invariant function doesn't have to lock shared resources.
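As a small aside (my sketch, not from the article), here is what getting something done with values instead of variables can look like in Python:

```python
from functools import reduce

# With a variable: total is reassigned on every iteration.
def sum_with_variable(xs):
    total = 0
    for x in xs:
        total += x
    return total

# With values only: no name is ever rebound; each recursive call
# receives fresh, unchanged inputs, so nothing needs locking.
def sum_with_values(xs):
    return xs[0] + sum_with_values(xs[1:]) if xs else 0

# reduce expresses the same fold without any explicit mutation.
print(sum_with_variable([1, 2, 3]),
      sum_with_values([1, 2, 3]),
      reduce(lambda a, b: a + b, [1, 2, 3], 0))  # 6 6 6
```

All three compute the same result; the value-only versions simply never modify anything they can see.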
Before Java 8, the only way to create functions at runtime was through bytecode generation and loading (which was quite messy and complex).
Lambdas provide two basic features:
More succinct function-creation syntax.
The ability to create functions at runtime, which can then be passed/manipulated by other code.
Closures concern this second issue.

What is a Closure?
A closure uses variables that are outside of the function scope. This is not a problem in traditional procedural programming – you just use the variable – but when you start producing functions at runtime it does become a problem. To see the issue, I'll start with a Python example (Closures.py), where make_fun() creates and returns a function called func_to_return, which is then used by the rest of the program.
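A minimal sketch of such a Closures.py, keeping the make_fun and func_to_return names from the text (the function bodies are my assumption, since the original listing did not survive in this copy):

```python
# Closures.py (sketch): func_to_return keeps using a name from
# make_fun's scope even after make_fun has returned.

def make_fun():
    numbers = []  # local to make_fun, but captured by the closure below

    def func_to_return(arg):
        numbers.append(arg)  # uses the captured list
        return numbers

    return func_to_return

x = make_fun()
print(x(1))  # [1]
print(x(2))  # [1, 2]: the captured state persists between calls
```

The returned function "closes over" numbers; that captured binding is exactly what makes it a closure.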
I asked why the feature wasn’t just called “closures” instead of “lambdas,” since it has the characteristics of a closure? The answer I got was that closure is a loaded and ill defined term, and was likely to create more heat than light. When someone says “real closures,” it too often means “what closure meant in the first language I encountered with something called closures.”
I don’t see an OO versus FP (functional programming) debate here; that is not my intention. Indeed, I don’t really see a “versus” issue. OO is good for abstracting over data (and just because Java forces objects on you doesn’t mean that objects are the answer to every problem), while FP is good for abstracting over behavior. Both paradigms are useful, and mixing them together has been even more useful for me, both in Python and now in Java 8. (I have also recently been using Pandoc, written in the pure FP Haskell language, and I’ve been extremely impressed with that, so it seems there is a valuable place for pure FP languages as well). | http://m.dlxedu.com/m/detail/5/433932.html | CC-MAIN-2018-30 | en | refinedweb |
UPDATE! Even better than listing out the individual errors, Phil Gyford has posted his working code for the examples in James Bennet’s “Practical Django Projects 2nd Edition“. You can find it on bitbucket here. Thanks Phil!
Since a quick Google search failed to turn up these e-book errate for James Bennets informative “Practical Django Projects 2nd Edition”, I’ll compile my own list. Hopefully my frustration in overcoming these errors will save you from the same.
Chapter 4, page 66:
(r'^weblog/(?P<year>\d{4})/(?P<month>\w{3})/(?P<day>\d{2})/(P?<slug>[-\w]+)/$', 'coltrane.views.entry_detail'),
should be:
(r'^weblog/(?P<year>\d{4})/(?P<month>\w{3})/(?P<day>\d{2})/(?P<slug>[-\w]+)/$', 'coltrane.views.entry_detail'),
(note the “P?” vs. “?P” before <slug>)
Chapter 4, page 70
Author should mention that the following must be added to the top of urls.py once you switch to generic views:
from coltrane.models import Entry
Chapter 4, page 71 and 73
Each of the four urlpatterns which include:
weblog/(?P<year>\d{4}/
Should actually be:
weblog/(?P<year>\d{4})/
(note the “)” after the {4})
7 Responses to “Errata: Practical Django Projects 2nd Edition (PDF)”
A big thank you for the error on the pattern on chapter 4 , page 66. Was struggling with that.
Just beginning Django , you saved me quite some time !!!
Christophe
Chapter 4 page 69:
(r’^weblog/(?P\d{4})/(?P\w{3})/(?P\d{2})/(?[-\w]+)/$,
‘django.views.generic.date_based.object_detail’, entry_info_dict)
should be
(r’^weblog/(?P\d{4})/(?P\w{3})/(?P\d{2})/(?P[-\w]+)/$,
‘django.views.generic.date_based.object_detail’, entry_info_dict)
Note the ?P instead of ?
for the slug group
Chapter 4, page 71
(r’^admin/’, include(admin.site_urls)),
should be
(r’^admin/’, include(admin.site.urls)),
Thank you so much for posting this up, I was pulling my hair over this and finally you solved it for me thank you again
chapter 6 page 118 and page 120, the
Entry.objects.all()[:5]
instead by
Entry.live.all()[:5] I can’t find any documents about models.live from docs.djangoproject.com. How about that?
Hey Mitch, chapter 4 page 66 error just saved a lot of hair on my head. Thanks ! | http://mitchfournier.com/2010/03/08/errata-practical-django-projects-2nd-edition-pdf/ | CC-MAIN-2018-30 | en | refinedweb |
Instructions for Adding PDF Bookmarks Using Word
- Blaise Dorsey
These instructions show how to set up a Word document so that PDF bookmarks are automatically created when the document is converted to a PDF. PDF bookmarks can be automatically created in Word by using Styles. Word has multiple preformatted styles that can be applied to a document. A style is a set of formatting characteristics, such as font name, size, color, paragraph alignment and spacing, that can be quickly and easily applied to a section of a document, or to the whole document. The preformatted styles can also be modified to a user's preference.

To create PDF bookmarks, you will need to use the Heading formats in the Styles menu, which is explained below. By applying the Heading styles to the headings and subheadings in your brief, you will be able to automatically create PDF bookmarks when the Word document is converted to PDF. Using the Heading styles will also allow you to easily create a table of contents, which will be covered in a separate document.

If you have not used Word Styles before, begin this process with the final version of your brief with your preferred formatting. In the future, you will be able to use the Styles you create to format the headings and subheadings in your brief as you are drafting the document. These instructions were created using Microsoft Word.

Table of Contents
Instructions for Adding PDF Bookmarks Using Word
Marking the Headings and Subheadings
Marking Text that is Not a Heading or Subheading
Publish to PDF

Marking the Headings and Subheadings

Highlight the first heading in your brief. This example will use the Table of Authorities. Use your cursor to highlight the Table of Authorities text. Go to the Home tab in the menu at the top of the page. You will be working with the Styles section on the right-hand side of the page.

With the Table of Authorities heading still highlighted, hold the cursor over Heading 1. If you have not used Styles before, Heading 1 is probably not the format (font, size, color, spacing, etc.) that you want to use for the main headings in your brief. You can update Heading 1 to match the formatting that you created for the main headings in your brief by right-clicking your mouse while the cursor is over Heading 1. A menu should appear, and the first item on the list should state "Update Heading 1 to Match Selection." Select this option. The Heading 1 option in the Styles menu will be updated to match your existing formatting.

Now you can mark the rest of the main headings in the brief as Heading 1. Go to each main heading in the brief (Table of Contents, Table of Authorities, Statement of Appealability, Statement of Case, Argument, etc.), select the text, and click on Heading 1 in the Styles menu. The formatting for these main headings will now be the same.

You can use the same process for the subheadings in your brief. Go to the first one of the second level headings in your brief and select the text with your cursor. This time, place your cursor over Heading 2 in the Styles menu (you may need to use the arrows on the right-hand side to scroll down). Update Heading 2 to match the formatting that you created for the second level subheadings in your brief by right-clicking your mouse while the cursor is over Heading 2 and selecting "Update Heading 2 to Match Selection." Go through your brief and mark all your second level subheadings as Heading 2. For additional levels of subheadings, continue this process using Heading 3, Heading 4, Heading 5, etc.

Saving your formatting selections for future documents: If you want the changes that you have made to the heading styles to become the default for future Word documents, follow these steps. In the Styles menu, highlight the Heading with the format you wish to save for future use. Right-click and select Modify from the menu. In the window that appears, select "New documents based on this template" and click on OK. The formatting you created for the heading should now be saved for use in new documents. You will need to repeat this step for each heading level (Heading 1, Heading 2, Heading 3, etc.) that you created.

Marking Text that is Not a Heading or Subheading

As you work your way through your brief, you may find that there is text that should be included as a PDF bookmark, but that it should not be formatted the same as the heading or subheadings at its level in the table of contents (for example, the word count certificate and the proof of service). You can still mark this text with the appropriate heading level. After you have marked the text, you can change the format by using the formatting menu under the Home tab.

Note: If you are marking an entry that is on two lines (such as the word count certificate), make sure that you have used a soft return (Enter + Shift) to move text to the next line, not a hard return (Enter only). If you use a hard return, the heading will show up as two separate bookmarks.

Publish to PDF

When you have finished marking all the entries that you want to be included as bookmarks in the PDF, and when your brief is in its final form, convert the brief into a PDF using the following instructions.

Under the File tab in the top menu, select Save As. Under "Save as type:" select PDF from the dropdown menu. A new Options button should appear. Click on the button. In the Options window that appears, make sure that the "Create bookmarks using:" box is checked and that Headings is selected. Click OK, then click on Save. You should now have a bookmarked PDF.
Microsoft Word 2011: Create a Table of Contents
Microsoft Word 2011: Create a Table of Contents Creating a Table of Contents for a document can be updated quickly any time you need to add or remove details for it will update page numbers for you.
Using Microsoft Word to Create Your Theses or Dissertation
Overview Using Microsoft Word to Create Your Theses or Dissertation MsWord s style feature provides you with several options for managing the creation of your theses or dissertation. Using the style
Adobe Acrobat X Pro Creating & Working with PDF Documents
Adobe Acrobat X Pro Creating & Working with PDF Documents Overview Creating PDF documents is useful when you want to maintain the format of your document(s). As a PDF document, your file maintains its
Using Styles in Word to Make Documents Accessible and Formatting Easier
Using Styles in Word to Make Documents Accessible and Formatting Easier This document provides instructions for using styles in Microsoft Word. Styles allow you to easily apply consistent formatting to
Step 2: Headings and Subheadings
Step 2: Headings and Subheadings This PDF explains Step 2 of the step-by-step instructions that will help you correctly format your ETD to meet UCF formatting requirements. Step 2 shows you how to set
MICROSOFT ACCESS 2007 BOOK 2
MICROSOFT ACCESS 2007 BOOK 2 4.1 INTRODUCTION TO ACCESS FIRST ENCOUNTER WITH ACCESS 2007 P 205 Access is activated by means of Start, Programs, Microsoft Access or clicking on the icon. The window opened
Specialized Numbering
Specialized Numbering Specialized numbering is used to assign a chapter-based numbering scheme to subheadings (e.g. 2.1, 2.1.1) as well as figure and table captions (e.g. Figure 3.5) within Microsoft Word.
Microsoft Word 2007 Module 1
Microsoft Word 2007 Module 1 Microsoft Word 2007: Module 1 July, 2007 2007 Hillsborough Community College - Professional Development and Web Services Hillsborough Community College
Microsoft Word For Windows
Microsoft Word For Windows The Word Window The Microsoft Word for Windows screen consists of two main parts, the text area and the elements surrounding the text area. The diagram below shows a typical...
Drip Marketing Campaign Manual
Drip Marketing Campaign Manual Released May 2006 Manual for Drip Marketing Campaign: Getting Started 1. Log into. 2. Hold cursor over the Tools tab. 3. Click on Drip Marketing Campaign.
Microsoft Office Publisher 2010
1 Microsoft Office Publisher 2010 Microsoft Publisher is a desktop publishing application which allows you to create artistic documents as brochures, flyers, and newsletters. To open Microsoft Office Publisher: Build a SharePoint Website
How to Build a SharePoint Website Beginners Guide to SharePoint Overview: 1. Introduction 2. Access your SharePoint Site 3. Edit Your Home Page 4. Working With Text 5. Inserting Pictures 6. Making Tables
ITCS QUICK REFERENCE GUIDE: EXPRESSION WEB SITE
Create a One-Page Website Using Microsoft Expression Web This tutorial uses Microsoft Expression Web 3 Part 1. Create the Site on your computer Create a folder in My Documents to house the Web files. Save
Maximizing the Use of Slide Masters to Make Global Changes in PowerPoint
Maximizing the Use of Slide Masters to Make Global Changes in PowerPoint This document provides instructions for using slide masters in Microsoft PowerPoint. Slide masters allow you to make a change just
Word Lesson 4. Formatting Text. Microsoft Office 2010 Introductory. Pasewark & Pasewark
Formatting Text Microsoft Office 2010 Introductory 1 Objectives Change the font. Change the size, color, and style of text. Use different underline styles and font effects and highlight text. Copy formatting
Process Document Campus Community: Create Communication Template. Document Generation Date 7/8/2009 Last Changed by Status
Document Generation Date 7/8/2009 Last Changed by Status Final System Office Create Communication Template Concept If you frequently send the same Message Center communication to selected students, you
Overview of PDF Bookmarks
Overview of PDF Bookmarks Quick Tips: -PDF Bookmarks: Bookmarks are used in Adobe Acrobat to link a particular page or section of a PDF file. They allow you to quickly jump to that portion of the document
National RTAP Marketing Transit Toolkit Customizing Templates in Microsoft Publisher
National RTAP Marketing Transit Toolkit Customizing Templates in Microsoft Publisher Customizing the Templates in Microsoft Publisher Microsoft Publisher is part of the Microsoft Office Suite, so
Handout: Word 2010 Tips and Shortcuts
Word 2010: Tips and Shortcuts Table of Contents EXPORT A CUSTOMIZED QUICK ACCESS TOOLBAR... 2 IMPORT A CUSTOMIZED QUICK ACCESS TOOLBAR... 2 USE THE FORMAT PAINTER... 3 REPEAT THE LAST ACTION... 3 SHOW
Creating and Using Links and Bookmarks in PDF Documents
Creating and Using Links and Bookmarks in PDF Documents After making a document into a PDF, there may be times when you will need to make links or bookmarks within that PDF to aid navigation through
Creating a Table of Contents in Microsoft Word 2011
1 Creating a Table of Contents in Microsoft Word 2011 Sections and Pagination in Long Documents When creating a long document like a dissertation, which requires specific formatting for pagination, there. Quick Reference Guide. Union Institute & University
Microsoft Word 2010 Quick Reference Guide Union Institute & University Contents Using Word Help (F1)... 4 Window Contents:... 4 File tab... 4 Quick Access Toolbar... 5 Backstage View... 5 The Ribbon...
Microsoft Word: Upgrade Summary Anatomy of Microsoft Word 2007
Microsoft Word: Upgrade Summary Anatomy of Microsoft Word 2007 Office Button Quick Access Toolbar Menu Tabs Dialogue Boxs Menu Groups Page Formats Zoom Starting a Document New Document New Ctrl + N Opening
Editing the Slide Master
Editing the Slide Master Note: this document was designed for Microsoft Office PowerPoint 2003. Other versions may differ and require different procedures. The Slide Master allows you to set the background,
Creating trouble-free numbering in Microsoft Word
Creating trouble-free numbering in Microsoft Word This note shows you how to create trouble-free chapter, section and paragraph numbering, as well as bulleted and numbered lists that look the way you want
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project.
Hello, I have some questions about the divergence from BSD sources. The background is that <sys/queue.h> is broken in Newlib since __offsetof() is undefined. I wanted to update the file in Newlib with the latest version from FreeBSD, but now some problems arise. Newlib currently defines this macro in <sys/features.h>:
/* Macro to test version of GCC.  Returns 0 for non-GCC or too old GCC. */
#ifndef __GNUC_PREREQ
# if defined __GNUC__ && defined __GNUC_MINOR__
#  define __GNUC_PREREQ(maj, min) \
        ((__GNUC__ << 16) + __GNUC_MINOR__ >= ((maj) << 16) + (min))
# else
#  define __GNUC_PREREQ(maj, min) 0
# endif
#endif /* __GNUC_PREREQ */

In FreeBSD we have a similar macro in <sys/cdefs.h>:

/*
 * Macro to test if we're using a specific version of gcc or later.
 */
#if defined(__GNUC__) && !defined(__INTEL_COMPILER)
#define __GNUC_PREREQ__(ma, mi) \
        (__GNUC__ > (ma) || __GNUC__ == (ma) && __GNUC_MINOR__ >= (mi))
#else
#define __GNUC_PREREQ__(ma, mi) 0
#endif

Why don't we follow the BSD development more closely to simplify code
re-use?  The latest version of FreeBSD <sys/queue.h> uses
__containerof(), which is defined in <sys/cdefs.h>:

/*
 * Given the pointer x to the member m of the struct s, return
 * a pointer to the containing structure.  When using GCC, we first
 * assign pointer x to a local variable, to check that its type is
 * compatible with member m.
 */
#if __GNUC_PREREQ__(3, 1)
#define __containerof(x, s, m) ({                                       \
        const volatile __typeof(((s *)0)->m) *__x = (x);                \
        __DEQUALIFY(s *, (const volatile char *)__x - __offsetof(s, m));\
})
#else
#define __containerof(x, s, m)                                          \
        __DEQUALIFY(s *, (const volatile char *)(x) - __offsetof(s, m))
#endif

Any objections to:

1. Remove __GNUC_PREREQ in <sys/features.h>.
2. Add __GNUC_PREREQ__ in <sys/cdefs.h> like in FreeBSD.
3. Add __containerof() like in FreeBSD.
4. Update <sys/queue.h> with the latest FreeBSD version.

--
First, go to Azure Active Directory, click on App Registrations, and finally on the "New application registration" button.
Now we need to provide some info.
I'll call the web app "DevProtocol.Giraffe.AuthDemo.Web". To make the debugging process easier, I'll put a localhost address in the sign-on URL. .NET Core uses the signin-oidc path by default to handle the flow.
Take a note of the ApplicationID as we'll need it later on.
On the settings page you can go to keys to create a new secret key.
If you only need to authorize your user for your web app (without any api involved), you don't need to have a key.
In the Reply URLs section, you can verify that our signin-oidc path is added to the list of return URLs.
We're all set to start with our Giraffe web app.
Create a new giraffe web app.
dotnet new giraffe -V razor -lang F# --name DevProtocol.Giraffe.AuthDemo.Web
cd src
dotnet new sln --name DevProtocol.Giraffe.AuthDemo.Web
dotnet sln add DevProtocol.Giraffe.AuthDemo.Web/DevProtocol.Giraffe.AuthDemo.Web.fsproj
Configure giraffe to use oidc
You'll need to add some NuGet packages:
<PackageReference Include="Microsoft.AspNetCore.Authentication.OpenIdConnect" Version="2.1.1" />
<PackageReference Include="Microsoft.Extensions.Configuration" Version="2.1.1" />
<PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="2.1.1" />
<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="2.1.1" />
We'll store our oidc settings in the appsettings.json file. A full description can be read in a previous post.
Add a configureAppConfiguration method to tell Giraffe that you'll be using appsettings files:
let configureAppConfiguration (context: WebHostBuilderContext) (config: IConfigurationBuilder) =
    config
        .AddJsonFile("appsettings.json", false, true)
        .AddJsonFile(sprintf "appsettings.%s.json" context.HostingEnvironment.EnvironmentName, true)
        .AddEnvironmentVariables()
    |> ignore
Add it to your WebHostBUilder in the main function:
WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(contentRoot)
    .UseIISIntegration()
    .UseWebRoot(webRoot)
    .ConfigureAppConfiguration(configureAppConfiguration)
    // Rest of the code
Now we need to add authentication to the services of our web app. First open the namespaces we'll need:
open Microsoft.AspNetCore.Authentication
open Microsoft.AspNetCore.Authentication.Cookies
open Microsoft.AspNetCore.Authentication.OpenIdConnect
In our configureServices method we need to grab our configuration in order to read the oidc settings from our appsettings.json file:
let configureServices (services: IServiceCollection) =
    let sp = services.BuildServiceProvider()
    let env = sp.GetService<IHostingEnvironment>()
    let config = sp.GetService<IConfiguration>()
Next we need to add authentication to the services:
services.AddAuthentication(
    Action<AuthenticationOptions>(fun auth ->
        auth.DefaultAuthenticateScheme <- CookieAuthenticationDefaults.AuthenticationScheme
        auth.DefaultChallengeScheme <- OpenIdConnectDefaults.AuthenticationScheme
        auth.DefaultSignInScheme <- CookieAuthenticationDefaults.AuthenticationScheme))
    .AddCookie()
    .AddOpenIdConnect(
        Action<OpenIdConnectOptions>(fun oid ->
            config.GetSection("OpenIdConnect").Bind(oid)))
|> ignore
Note that we bind our OpenIdConnect settings of our appsettings.json file to the OpenIdConnectOptions. The "OpenIdConnect" string must match the json setting that we'll setup in the next section.
Next we'll need to plug in our authentication middleware into the request pipeline. It's as simple as adding the UseAuthentication before UseGiraffe in the configureApp method:
let configureApp (app: IApplicationBuilder) =
    let env = app.ApplicationServices.GetService<IHostingEnvironment>()
    (match env.IsDevelopment() with
     | true -> app.UseDeveloperExceptionPage()
     | false -> app.UseGiraffeErrorHandler errorHandler)
        .UseCors(configureCors)
        .UseStaticFiles()
        .UseAuthentication()
        .UseGiraffe(routes)
The OpenIdConnect settings
Now add an appsettings.json file to your app and make sure it's copied to the output directory.
Your OpenIdConnect setting can be configured as follows:
"OpenIdConnect": { "ClientId": "<YOUR-APPLICATION-ID-FROM-AZURE-AD>", "Authority": "<YOUR-AZURE-TENANT-ID>/", "CallbackPath": "/signin-oidc", "ResponseType": "id_token"
- ClienId: this corresponds with the ApplicationID you received when you registered your app in AzureAD
- Authority: Tells our app who is responsible for handling the authorization.
- CallbackPath: This is the path you configured in the return URL of your app registration. The oidc module will prefix the path with the URL your app is running on.
- ResponseType: More info about the available ResponseType values can be found in the OpenID Connect documentation. In our case an id_token is enough, as we don't need an access token for an API.
Secure an endpoint
The last thing we need to do is tell Giraffe which endpoint needs to be secured. Create a new method authorize that will challenge our OpenIdConnect configuration:
let authorize = requiresAuthentication(challenge OpenIdConnectDefaults.AuthenticationScheme)
Now you can use the authorize function in your routes:
let routes: HttpFunc -> HttpFunc =
    choose [
        GET >=> choose [
            route "/" >=> indexHandler "test"
            route "/secure" >=> authorize >=> handleGetSecure
        ]
        setStatusCode 404 >=> text "Not Found"
    ]
Conclusion
If you understand how Azure AD and OpenID Connect work in a C# .NET Core application, it isn't that difficult to map everything onto a Giraffe web app.
Download
Download the code from my GitHub.
pthread_mutexattr_setprioceiling(3THR) sets the priority ceiling attribute of a mutex attribute object.
#include <pthread.h>

int pthread_mutexattr_setprioceiling(pthread_mutexattr_t *attr,
        int prioceiling, int *oldceiling);
attr points to a mutex attribute object created by an earlier call to pthread_mutexattr_init().
prioceiling specifies the priority ceiling of initialized mutexes. The ceiling defines the minimum priority level at which the critical section guarded by the mutex is executed. prioceiling will be within the maximum range of priorities defined by SCHED_FIFO. To avoid priority inversion, prioceiling will be set to a priority higher than or equal to the highest priority of all the threads that might lock the particular mutex.
oldceiling contains the old priority ceiling value.
On successful completion, pthread_mutexattr_setprioceiling() returns 0. Any other return value indicates that an error occurred.
If any of the following conditions occurs, pthread_mutexattr_setprioceiling() fails and returns the corresponding value.

ENOSYS
The option _POSIX_THREAD_PRIO_PROTECT is not defined and the implementation does not support the function.

If either of the following conditions occurs, pthread_mutexattr_setprioceiling() might fail and return the corresponding value.

EINVAL
The value specified by attr or prioceiling is invalid.

EPERM
The caller does not have the privilege to perform the operation.
QTreeView initiates editing due to other triggers than the one configured, why?
I'm having the problem that the edit() method of QTreeView is getting called due to other triggers than the one I have enabled, which is SelectedClicked. Why does this happen?

Below is an example of what I'm doing; if you try it, you should see that edit() gets called due to different triggers:
@
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *


class TreeView(QTreeView):
    def __init__(self):
        super().__init__()
        self.setEditTriggers(self.SelectedClicked)

        self.__model = QStandardItemModel()
        self.__model.appendRow([QStandardItem('Item 1')])
        self.__model.appendRow([QStandardItem('Item 2')])
        self.setModel(self.__model)

    def edit(self, index, trigger, event):
        print('Edit index {},{}, trigger: {}'.format(index.row(), index.column(), trigger))
        return False


app = QApplication([])
w = TreeView()
w.show()
app.exec_()
@
See also my SO question on this topic.

Never mind, I got an elucidating answer there.
# Python3 program to calculate the
# sum of nodes at the maximum depth
# of a binary tree

# Helper class that allocates a
# new node with the given data and
# None left and right pointers.
class newNode:

    # Constructor to create a new node
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Function to return the sum
def SumAtMaxLevel(root):

    # Map to store level wise sum.
    mp = {}

    # Queue for performing Level Order
    # Traversal. First entry is the node
    # and second entry is the level of
    # this node.
    q = []

    # Root has level 0.
    q.append([root, 0])

    while (len(q)):

        # Get the node from front
        # of Queue.
        temp = q[0]
        q.pop(0)

        # Get the depth of current node.
        depth = temp[1]

        # Add the value of this node in map.
        if depth not in mp:
            mp[depth] = 0
        mp[depth] += temp[0].data

        # Append children of this node,
        # with increasing the depth.
        if (temp[0].left):
            q.append([temp[0].left, depth + 1])
        if (temp[0].right):
            q.append([temp[0].right, depth + 1])

    # Return the max depth sum.
    return list(mp.values())[-1]

# Driver Code
if __name__ == '__main__':

    # Let us construct the Tree
    # shown in the above figure
    root = newNode(1)
    root.left = newNode(2)
    root.right = newNode(3)
    root.left.left = newNode(4)
    root.left.right = newNode(5)
    root.right.left = newNode(6)
    root.right.right = newNode(7)

    print(SumAtMaxLevel(root))

# This code is contributed by
# Shubham Singh (SHUBHAMSINGH10)

Output:
22
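A side note on efficiency: q.pop(0) on a Python list is O(n), so the loop above is quadratic in the worst case. A minimal alternative sketch using collections.deque keeps each dequeue O(1); the class and function names below are my own, not from the original article:

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def sum_at_max_level(root):
    # Standard BFS, but popleft() on a deque is O(1).
    q = deque([(root, 0)])
    level_sum = {}
    while q:
        node, depth = q.popleft()
        level_sum[depth] = level_sum.get(depth, 0) + node.data
        if node.left:
            q.append((node.left, depth + 1))
        if node.right:
            q.append((node.right, depth + 1))
    # BFS reaches depths in increasing order, so the largest key
    # is the maximum depth of the tree.
    return level_sum[max(level_sum)]
```

On the example tree above this returns the same result as the article's version.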
Music as Discourse
Semiotic Adventures in Romantic Music
KOFI AGAWU
Oxford University Press, 2009
PREFACE
Acknowledgments
I am grateful to Dániel Péter Biró for comments on a draft of this book, Christopher Matthay for corrections and numerous helpful suggestions, Guillermo Brachetta for preparing the music examples, and Suzanne Ryan for advice on content and organization. It goes without saying that I alone am responsible for what is printed here.
Some of this material has been seen in other contexts. The comparison between music and language in chapter 1 is drawn from my article "The Challenge of Semiotics," which is included in the collection Rethinking Music, edited by Nicholas Cook and Mark Everist (Oxford University Press, 1999). The analysis of Beethoven's op. 18, no. 3, in chapter 5 appears in expanded form in Communication in Eighteenth-Century Music, edited by Danuta Mirka and myself (Cambridge University Press, 2008). The analysis of the slow movement of Mozart's Piano Sonata in A Minor, K. 310, originated in a paper presented to a plenary session at the Society for Music Theory's 2006 annual meeting in Los Angeles. And some of the material in chapter 4, "Bridges to Free Composition," began life as a keynote address delivered to the International Conference on 19th-Century Music in Manchester, England, in 2005. I'm grateful for this opportunity to recontextualize these writings and talks.
antinarrative tendency in Stravinsky. For such exercises, you will find several leads in this book. In place of models to be emulated directly or mechanically, I offer suggestions and (deliberately) partial readings designed to stimulate your own fantasies.

Reading a book on music analysis is not exactly like reading a novel. You will need access to several scores and parallel recordings and a keyboard to try out certain imagined patterns. For the shorter analyses in part I (chapters 3 and 5), you will need, among others, scores of Schubert's "Im Dorfe" from Die Winterreise, Schumann's "Ich grolle nicht" from Dichterliebe, the C-major Prelude from Book 1 of J. S. Bach's The Well-Tempered Clavier, the slow movement of Mozart's Piano Sonata in A Minor, K. 310, Brahms's song "Die Mainacht," and the first movement of Beethoven's String Quartet in D Major, op. 18, no. 3. For the longer analyses in part II (chapters 6 through 9), you will need scores of Liszt's Orpheus; Brahms's First Symphony (second movement); his Intermezzo in E Minor, op. 119, no. 2; Mahler's Ninth Symphony (first movement); Beethoven's String Quartet, op. 130 (first movement); and Stravinsky's Symphonies of Wind Instruments (1920, though my analysis will use the 1947 version). Because the analyses constantly refer to places in the score, it would be almost pointless to attempt to read these chapters without the relevant scores at hand.
CONTENTS

Introduction

PART I: Theory
1. Music as Language
2. Criteria for Analysis I
3. Criteria for Analysis II
4. Bridges to Free Composition
5. Paradigmatic Analysis

PART II: Analyses
6. Liszt, Orpheus (1853-1854)
7. Brahms, Intermezzo in E Minor, op. 119, no. 2 (1893), and Symphony no. 1/ii (1872-1876)
8. Mahler, Symphony no. 9/i (1908-1909)
9. Beethoven, String Quartet, op. 130/i (1825-1826), and Stravinsky, Symphonies of Wind Instruments (1920)

Epilogue
Bibliography
Index
Introduction
This book is an exercise in music analysis. I explore the nature of musical meaning from within the disciplinary perspective of music theory and propose a view of music as discourse. I do not claim to offer a new theory of meaning; rather, drawing on a handful of existing analytical theories and adding some insights of my own, I seek to illuminate core aspects of a small group of well-known compositions chosen from the vast repertoires produced in nineteenth- and early twentieth-century Western Europe. Beethoven, Schubert, Mendelssohn, Schumann, Liszt, Brahms, Mahler, Strauss, Bartók, and Stravinsky – household names for aficionados of European classical music – are among the composers whose works I discuss; theoretically, I draw on Schenker, Ratner, Adorno, and the general field of musical semiotics. My perspective is resolutely that of the listener, not a mere listener but one for whom acts of composing and performing, be they real or imagined, necessarily inform engaged listening. I'd like to think that my ultimate commitments are to the compositions themselves rather than to theories about them, but the distinction is fragile and should not be made dogmatically or piously. The book is aimed at those who are fascinated by the inner workings of music and enjoy taking individual compositions apart and speculating on how (or, occasionally, whether) they cohere.1
Does music have meaning? This question has been debated ever since music became a subject of discourse. Aestheticians, philosophers, historians, semioticians, and sociologists of music have had their say; in our own day, musicologists of a certain persuasion have been exercised by it. Some think that musical meaning is intrinsic while others argue for extrinsic meanings. Some believe that music is autonomous or relatively autonomous while others insist on permanent social or historical traces on all musical products. Some are surprised to find that the associations they prefer to make while listening to masterworks – associations that seem self-evident to them – are not necessarily shared by others: heroism or aggressiveness in Beethoven, domination in Wagner, ambivalence in Tchaikovsky, or femininity and worldliness in Schubert. No doubt, these debates will continue into the future. And this is as it should be, for as long as music is made, as long as it retains its essence as a performed art, its significance is unlikely to ever crystallize into a stable set of meanings that can be frozen, packaged, and preserved for later generations. Indeed, it would be a profoundly sad occasion if our ideologies became aligned in such a way that they produced identical narratives about musical works. One mark of the endurance of strong works of art is that they make possible a diversity of responses – a diversity regulated by a few shared (social or institutional) values.

1. In the terms of an ancient classification scheme, the discipline represented in this book is musica theorica (theoretical speculation) as distinct from musica poetica (composition) or musica practica (performance). These spheres are only notionally distinct, however; in analytical practice, they overlap and inform each other in significant and productive ways. See Manfred Bukofzer, Music in the Baroque Era, from Monteverdi to Bach (New York: Norton, 1947), 370-371.
Meanings are contingent. They emerge at the site of performance and are constituted critically by historically informed individuals in specific cultural situations. Basic questions about music's ontology have never received definitive answers. What music is, what and how it means, what meaning is, and why we are interested in musical meaning in the first place: these questions are not meant to be answered definitively nor with a commanding transhistorical attribution but posed periodically to keep us alert and honest. Then there are those considerations that arise when history, culture, and convention inflect the search for meaning – whether a work embodies a late style, conveys subjectivity, or reproduces the dynamics of the society in which the composer lived. While interpretation can be framed dialogically to ensure that original meanings and subsequent accretions are neither ignored nor left uninterrogated, the final authority for any interpretation rests on present understanding. Today's listener rules.
The issues raised by musical meaning are complex and best approached within carefully circumscribed contexts. For although no one doubts that music making is or can be meaningful and satisfying, or that the resultant processes and products have significance for those involved, be they composers, performers, or listeners, the nonverbal essence of music has proved resistant to facile domestication within a verbal economy. My own curiosity about the subject stems in part from an early interest in the confluence of composition, performance, and analysis and from a sociological circumstance: but for a handful of exceptions, card-carrying music theorists have been generally reticent about confronting the subject of musical meaning.2 This does not mean that ideas of meaning do not surface in their work from time to time, nor that, in producing voice-leading graphs, metric reductions, paradigmatic charts, set-class taxonomies, and Tonnetz

2. Significant exceptions include Nicholas Cook, Analysing Musical Multimedia (Oxford: Oxford University Press, 2001); Cook, "Review Essay: Putting the Meaning Back into Music; or, Semiotics Revisited," Music Theory Spectrum 18 (1996): 106-123; Robert Hatten, Musical Meaning in Beethoven: Markedness, Correlation, and Interpretation (Bloomington: Indiana University Press, 1994); Michael L. Klein, Intertextuality in Western Art Music (Bloomington: Indiana University Press, 2005); and the collection of essays in Approaches to Meaning in Music, ed. Byron Almén and Edward Pearsall (Bloomington: Indiana University Press, 2006).
its starting point rather than external factors.3 The operative phrase here is not the polemical "the music itself" but "starting point." To proclaim this other contingency is to promote an open-ended view of analysis; it is to encourage rigorous speculation about musical meaning that takes certain core features as its point of departure and that terminates on a path toward an enriched perspective.4

However elusive they may be, music's meanings are unlikely to be accessible to those who refuse to engage with the musical code or those who deal with only the most general and superficial aspects of that code. Defining a musical code comprehensively presents its own challenges, but there is by now a body of systematic and historical data to facilitate such a task. The theories of Schenker, for example – to choose one body of texts that is indispensable for work on tonal music – make possible a range of explorations of everything from the imaginative expansion of simple counterpoint in works by Bach, Beethoven, and Schubert through aesthetic speculation about value and ideology in composers of different national provenance to the place of diminution in twentieth-century composers like Richard Strauss, Mahler, and Stravinsky. Adhering to the code would ensure, for example, that harmony is not ignored in the construction of a theory of meaning for Chopin and Wagner, that melody is given due attention in Mendelssohn analysis, and that the rhythmic narratives of Brahms or Stravinsky are duly acknowledged. And in examining these aspects of the code, one is wise to seek the sharpest, most sensitive, and most sophisticated tools. Analysis may bear a complex relationship to technology, but to ignore technological advancement in analytic representation is to subscribe to a form of irrationality, even a form of mysticism, perhaps.
The defensive and at the same time aggressive tone in that last remark is forced upon me by a strange circumstance. Despite the fact that music theory is one of the oldest of humanistic disciplines, and despite the enviably precise vocabulary with which the elements of music have been described and categorized, their essences captured, recent attacks on formalism have sought to belittle this commanding legacy with the claim that theorists do not deal with music's meaning and significance. This extraordinary claim is surprisingly deaf to the numerous meanings enshrined in theory's technical processes and language. True, the meanings of a motivic parallelism, or a middleground arpeggiation, or a modulation to a tritone-related key are not always discussed in affective or expressive terms, but this does not mean that the economy of relations from which they stem is pure, or based solely on abstract, logical relations, or lacking semantic or affective residue. Since the acquisition of such technical language still provides the basic elements of literacy for today's academic musician, there is work to be done in making explicit what has remained implicit and in seeking to extend theory's domain without undermining its commitment to the music itself. In short, it is not clear how a theory of musical meaning that engages the musical language can be anything other than a study of the musical code.
3. Ian Bent and Anthony Pople, "Analysis," The New Grove Dictionary of Music and Musicians, 2nd ed., ed. Stanley Sadie (London: Macmillan, 2001).
4. For an elaboration of this view of analysis, see Agawu, "How We Got Out of Analysis and How to Get Back In Again," Music Analysis 23 (2004): 267–286.
sonata, but a more elusive quality in which the elements of a piece (events and periods) are heard working together and evincing a distinctive profile.
Theories of form abound. From the compositional or pedagogical theories of Koch, Czerny, and Schoenberg through the aesthetic and analytical approaches of Schenker, Tovey, Caplin, Rothstein, and Hepokoski and Darcy, scholars have sought to distribute the reality of musical compositions into various categories, explaining adherence as well as anomaly in reference to constructed norms, some based on the symbolic value of an individual work (like the Eroica Symphony),7 others based on statistics of usage. Yet one cannot help feeling that every time we consign a work to a category such as "sonata form," we lie. For reasons that will emerge in the course of this book, I will pay less attention to such standard forms than to the events or groups of events that comprise them. Is there a difference between the two approaches? Yes, and a significant one at that. Placing the emphasis on events promotes a processual and phenomenological view of the work; it recognizes moment-by-moment succession but worries not at all about an overall or resultant profile that can be named and held up as an archetype. Without denying the historical significance of archetypes or outer forms (such as ABA' schemata) or their practical value for teachers of courses in music appreciation, I will argue that, from a listener's point of view, such forms are often overdetermined, inscribed too rigidly; as such, they often block access to the rich experience of musical meaning. The complex and often contradictory tendencies of musical materials are undervalued when we consign them to boxes marked "first theme," "second theme," and "recapitulation." The ability to distribute the elements of a Brahms symphony into sonata form categories is an ability of doubtful utility or relevance, and it is a profound shame that musicology has devoted pages upon pages to erecting these schemes as important mediators of musical meaning. At best, they possess low-level value; at worst, they are distractions.8
Discourse thus embraces events ordered in a coherent fashion, which may operate in turn in a larger-than-the-sentence domain. There is a third sense of the term that has emerged in recent years, inspired in part by poststructuralist thinking. This is discourse as disciplinary talk, including the philosophical and linguistic props that enable the very formulations we make about our objects of study. Discourse about music in this sense encompasses the things one says about a specific composition, as Nattiez's Music and Discourse makes clear.9 In this third sense, discourse entails acts of metacriticism. The musical composition comments on itself at the same time that it is being constituted in the discourse of the analyst. Both the work's internal commentary (represented as acts of inscription attributable
7. See Scott Burnham, Beethoven Hero (Princeton, NJ: Princeton University Press, 1995).
8. According to Jankélévitch, sonata form is "a something conceived, and not at all something heard, not time subjectively experienced." Music and the Ineffable, trans. Carolyn Abbate (Princeton, NJ: Princeton University Press, 2003), 17. Julian Horton notes that any study defining sonata practice in relation to a norm "must confront the problem that there is no single work which could necessarily be described as normative. The idea exists only as an abstraction." Review of Bruckner Studies, ed. Paul Hawkshaw and Timothy L. Jackson, Music Analysis 18 (1999): 161.
9. Jean-Jacques Nattiez, Music and Discourse: Toward a Semiology of Music, trans. Carolyn Abbate
(Princeton, NJ: Princeton University Press, 1990).
to the composer) and the analyst's external commentary feed into an analysis of discourse. The internal commentary is built on observations about processes of repetition and variation, while the external confronts the very props of insight formation. One sign of this metacritical awareness should be evident in the refusal to take for granted any of the enabling constructs of our analyses. To analyze in this sense is necessarily to reflect simultaneously upon the process of analysis.10
I use the phrase "semiotic adventures" in the subtitle in order to signal this book's conceptual debt to certain basic concepts borrowed from musical semiotics. Semiotics is a plural and irreducibly interdisciplinary field, and it provides, in my view, the most felicitous framework (among contemporary competing analytical frameworks) for rendering music as structure and style. Writings by Nattiez, Dougherty, Tarasti, Lidov, Hatten, Dunsby, Grabócz, Spitzer, Monelle, and others exemplify what is possible without limiting the domain of the possible. I should add quickly, however, that my aim is not to address an interdisciplinary readership but to speak within the narrower discourses of music analysis, discourses that emanate from the normal, everyday activities of teachers and students of undergraduate courses in music theory and analysis.
What, then, are the central concerns of the book's nine chapters? I have arranged the material in two broad parts. The first, dubbed "Theory," seeks to orient the reader to perspectives that might facilitate an appreciation of the practical nature of music as discourse. The second, "Analyses," presents case studies encompassing works by composers from Beethoven to Stravinsky. Specifically, chapter 1, "Music as Language," places music directly next to language in order to point to similarities and differences in sociology, ontology, psychology, structure, reception, and metalanguage. The metaphor of music as language is old (very old), and although its meaning has changed from age to age, and despite the many limitations that have been pointed out by numerous writers, it remains, in my view, a useful foil for music analysis. Indeed, a greater awareness of music's linguistic nature may improve some of the technical analyses that music theorists offer. The aim here, then, is to revisit an old and familiar issue and offer a series of generalized claims that might stimulate classroom discussion of the nature of musical language.
Chapters 2 and 3, "Criteria for Analysis I and II," also retain a broad perspective in isolating certain key features of Romantic music, but instead of addressing the system of music as such, I turn to selected compositional features that embrace elements of style as well as structure. Few would disagree with Charles Rosen that it is disquieting when an analysis, no matter how cogent, minimizes the most salient features of a work. This is a failure of critical decorum.11 These two chapters (which belong together as a pair but are separated for practical reasons) are precisely an attempt not to minimize such features. But salience is a contested category. Salience is not given, not naturally occurring; it is constructed. Which is
10. For other invocations of music as discourse, see David Lidov, Is Language a Music? Writings on Musical Form and Signification (Bloomington: Indiana University Press, 2005), 10–12, 70–77, 138–144; and Michael Spitzer, Metaphor and Musical Thought (Chicago: University of Chicago Press, 2004), 107–125. Discourse is, of course, a widely used term in the literature on musical semiotics.
11. Charles Rosen, The Classical Style: Haydn, Mozart, Beethoven (New York: Norton, 1972), 3.
why musical features that strike one listener as salient are judged to be peripheral by another. I recall once listening to a recording of Schubert's Winterreise with an admired teacher when he asked me what I thought was the most salient moment in the beginning of "Der greise Kopf" ("The Hoary Head") (example I.1). I pointed to the high A-flat in the third bar, the high point of the opening phrase, noting that it was loud and dissonant, marked a turning point in the melodic contour, and was therefore rhetorically significant. He, on the other hand, heard something less obvious: the change of harmony at the beginning of bar 11, the moment at which the unyielding C pedal that had been in effect from the very first sound of the song finally drops by a half-step. This surprised me initially, but I later came to admire the qualitative difference between his construction of this particular salience and mine. His took in the harmonic stream within the larger syntactical dimension; mine was a statistical, surface event. The construction of salience in tonal music is especially challenging because tonal expression relies as much on what is sounded as on what is not sounded. The stated and the implied are equally functional. Inexperienced or downright insensitive analysts who confine their interpretations to what is directly observable in scores often draw their patterns of salience from what is stated rather than what is implied; the result is a dull, impoverished, or untrue analysis.
Example I.1. Schubert, "Der greise Kopf," from Winterreise, bars 1–16.
notional purity and incorporating such things as motivic content, topical profile, periodicity, and rhythm can strict counterpoint be brought into effective alliance with free composition.
Chapters 2 and 3, on one hand, and 4, on the other, would therefore seem to argue conflicting positions: whereas chapters 2 and 3 acknowledge the value of constructing a first-level salience (based on stylistic features that are heard more or less immediately), chapter 4 claims a more sophisticated relational approach by peering into the subsurface. Chapter 5, "Paradigmatic Analysis," adds to these competing views a semiological approach that attempts to wipe the slate clean in minimizing, but never completely eliminating, the amount of (musical) baggage that the analyst brings to the task. The approach takes repetition, the most indigenous of all musical attributes, as a guide to the selection of a composition's meaningful units and speculates on the narrative path cut by the succession of repeating units. The paradigmatic analyst in effect adopts a studiedly naïve stance in dispensing with the numerous a priori considerations that have come to burden a repertoire like that of the Romantic period. Pretense succeeds only in a limited way, however, but enough to draw attention to some of the factors we take for granted when we analyze music and to impel a more direct encounter with musical form.
With these five chapters, the main theoretical exposition is over. Next follow a number of case studies designed partly to exemplify the network of ideas exposed in the theoretical chapters, partly to push beyond their frontiers, and partly to engage in dialogue with other analytical approaches. First comes a study of the narrative thread in Liszt's symphonic poem Orpheus (chapter 6), followed by a study of phrase discourse in two works by Brahms: the second movement of his First Symphony and the Intermezzo for Piano, op. 119, no. 2 (chapter 7). The last case studies are of narratives of continuity and discontinuity in the first movement of Mahler's Ninth (chapter 8) and in the first movement of Beethoven's String Quartet, op. 130, juxtaposed with Stravinsky's Symphonies of Wind Instruments (chapter 9). The inclusion of Stravinsky (and an austere work from 1920 at that) may seem strange at first, but I hope to show continuities with the Romantic tradition without discounting the obvious discontinuities.
An analyst's fondest hope is that something he or she says sends the reader/listener back to a particular composition or to a particular moment within it. Our theoretical scaffoldings are useless abstractions if they do not achieve something like this; they may be good theory but lousy analysis. Indeed, music was not made to be talked about, according to Jankélévitch.13 Talking, however, can help to reinforce the point that talking is unnecessary, while at the same time reinforcing the belief that music was made to be (re)made: repeatedly, sometimes. Analysis, in turn, leads us through inquiry back to the site of remaking. Therefore, I retain some hope in the possibility that the analytical fantasies gathered here will inspire some readers to reach for the works again; to see if their previous hearings have been altered, enhanced, or challenged in any way; and, if they have, to seek to incorporate some of these insights into subsequent hearings. If this happens, my purpose will have been achieved.
13. Jankélévitch, Music and the Ineffable, 79.
PART I
Theory
CHAPTER One
Music as Language
1. Roland Barthes, Mythologies, trans. Annette Lavers (New York: Hill and Wang, 1972), 115.
2. John Neubauer, The Emancipation of Music from Language: Departure from Mimesis in Eighteenth-Century Aesthetics (New Haven, CT: Yale University Press, 1986), 22–23.
3. Neubauer, The Emancipation of Music from Language, 40.
through the nineteenth century alongside a huge supplemental increase in word-dominated (or at least word-inflected) genres like operas, tone poems, and lieder, plus a variety of compositional experiments with language as sound, material, and sense in the works of a number of twentieth-century composers (Stravinsky, Berio, and Lansky): all of these provide further indication of the close rapport between music and language.
The prevalence of language models for analysis of European music is the central concern of a 1980 article by Harold Powers, in which he cites the then recent work of Fred Lerdahl and Ray Jackendoff as an exemplary attempt to model a grammar of tonal music.4 Reviewing antecedents for such efforts, Powers mentions two medieval sources, the anonymous ninth-century Musica Enchiriadis and the treatise of Johannes, circa 1100; he also mentions various discussions of musical grammar in German theory (Dressler 1563; Burmeister 1606; Mattheson 1737, 1739; Koch 1787; Reicha 1818; Riemann 1903) and David Lidov's 1975 study of segmentation, On Musical Phrase.5 On a metalinguistic level, Powers shows how musical analysis has borrowed from semantics (as in Deryck Cooke's The Language of Music, 1959), phonology (as in theories of South Indian classical music), the making of propositional statements (as in Charles Boilès's semiological study of Tepehua thought songs), and, perhaps most significantly, grammar and syntax (as in recent semiological applications by Ruwet, Nattiez, and Lidov). In the quarter century since Powers's magisterial article appeared, research into semiology, which typically indexes the linguisticity of music, has grown by leaps and bounds, extending into areas of musical semantics, phonology, and pragmatics; embracing traditional studies that do not claim a semiotic orientation; and expanding the repertorial base to include various non-Western musics. All of this research tacitly affirms the pertinence of linguistic analogies for music.6
The decentering of previously canonical repertoires is one of the more dramatic outcomes of recent efforts to understand the relations between music and language. The resulting geocultural depth is readily seen in ethnomusicological work, which, because it is tasked with inventing other world cultures and traditions, disinherits some of the more obnoxious priorities enshrined in the discourse about Western (musical) culture. Powers's own article, for example, unusual and impressive in its movement across Western and non-Western repertoires (a throwback to the comparative musicology of Marius Schneider, Robert Lach, and Erich von Hornbostel), includes a discussion of improvisation and the mechanics of text underlay in South Asian music. A year earlier, Judith Becker and Alton Becker had constructed a strict grammar for Javanese srepegan, a grammar that was later revisited by David Hughes.7 The literature on African music, too, includes several studies of the intriguing phenomenon of speech tone and its relationship
4. Harold Powers, "Language Models and Music Analysis," Ethnomusicology 24 (1980): 1–60.
5. Powers, "Language Models," 48–54.
6. See Raymond Monelle's Linguistics and Semiotics in Music for a valuable introduction to the field.
7. Judith Becker and Alton Becker, "A Grammar of the Musical Genre Srepegan," Journal of Music Theory 24 (1979): 1–43; and David W. Hughes, "Deep Structure and Surface Structure in Javanese Music: A Grammar of Gendhing Lampah," Ethnomusicology 32 (1988): 23–74.
to melody. There are studies of talking drums, including ways in which drums and other speech surrogates reproduce the tonal and accentual elements of spoken language. And perhaps most basic and universal are studies of song, a genre in which words and music coexist, sometimes vying for prominence, mutually transforming each other, complementing each other, and often leaving a conceptually or phenomenally dissonant residue. The practices associated with lamentation, for example, explore techniques and territories of vocalization, from the syllabic through the melismatic to the use of vocables as articulatory vehicles. And by including various "icons of crying" (Greg Urban's phrase), laments (or dirges or wails) open up other dimensions of expressive behavior beyond, but organically linked to, music and language.8
There is further evidence, albeit of an informal sort, of the music-language association. In aesthetic and evaluative discourses responding to performing and composing, one sometimes encounters phrases like "It doesn't speak to me" or "S/he is not saying anything."9 Metaphors of translation are also prominent. We imagine music translated into visual symbols or images, or into words, language, or literary expression. In the nineteenth century, the common practice of paraphrasing existing works suggested transformative rendition (saying something differently), such as is evident in Liszt's or Paganini's paraphrases of music by Beethoven and Schubert. Ornamentation, likewise, involves the imaginative recasting of existing ideas, a process that resonates with certain oratorical functions. John Spitzer studied Jean Rousseau's 1687 viol treatise and concluded, "Rousseau's grammar of ornamentation corresponds in many respects to the so-called morphophonemic component of Chomskian grammars."10 Even the genre of theme and variations, whose normative protocol prescribes a conscious commentary on an existing theme, may be understood within the critical economy of explanation, criticism, and metacommentary. Finally, improvisation or composing in the moment presupposes competence in the speaking of a musical language. Powers likens one sense of improvisation to "an extempore oratorical discourse" while Lidov notes that musical improvisation "may be closest to spontaneous speech function."11
This highly abbreviated mapping – the fuller story may be read in, among other places, articles by Powers (1980) and Feld and Fox (1994, which includes a magnificent bibliography), and books by Neubauer (1986) and Monelle (1992) – should be enough to indicate that the music-language alliance is unavoidable as a creative challenge (composition), as a framework for reception (listening), and as a mechanism for understanding (analysis). Observe a certain asymmetry in the relationship,
8. Steven Feld and Aaron Fox, "Music and Language," Annual Review of Anthropology 23 (1994): 25–53.
9. See Ingrid Monson, Saying Something: Jazz Improvisation and Interaction (Chicago: University of Chicago Press, 1996), 73–96, for a discussion of "Music, Language and Cultural Styles: Improvisation as Conversation."
10. John Spitzer, "Grammar of Improvised Ornamentation: Jean Rousseau's Viol Treatise of 1687," Journal of Music Theory 33(2) (1989): 305.
11. Powers, "Language Models," 42; Lidov, On Musical Phrase (Montreal: Groupe de recherches en sémiologie musicale, Music Faculty, University of Montreal, 1975), 9, quoted in Powers, "Language Models," 42.
however. Measured in terms of the critical work that either term does, language looms larger than music. Indeed, in the twentieth century, language, broadly construed, came to assume a position of unprecedented privilege (according to poststructuralist accounts) among the discourses of the human sciences. Salutary reminders by Carolyn Abbate that the poststructuralist privileging of language needs to be tempered when music is the object of analysis,12 and by anthropologist Johannes Fabian that the aural mode needs to be elevated against the predominant visual mode if we are not to forgo certain insights that come from contemplating sound,13 have so far not succeeded in stemming the ascendancy of verbal domination. As Roland Barthes implies, to think and talk about music is (necessarily, it would appear) inevitably to fall back on the individuation of a language.14
Why should the music-as-language metaphor matter to music analysts? Quite simply because, to put it somewhat paradoxically, language and music are as alike as they are unlike. No two systems (semiotic or expressive) set against one another are as thoroughly imbricated in each other's practices and yet remain ultimately separate and distinct. More important, the role of language as a metalanguage for music remains essential and is in no way undermined by the development of symbologies such as Schenker's graphic analysis or Hans Keller's notated (and therefore performable) functional analyses. The most imaginative music analysts are not those who treat language as a transparent window onto a given musical reality but those who, whether explicitly or implicitly, reflect on language's limitations even as they use it to convey insights about music. Language's persistence and domination at the conceptual level is therefore never a mere given in music analysis, demanding that music surrender, so to speak; on the contrary, it encourages acts of critical resistance which, whatever their outcome, speak to the condition of music as an art of tone.
A few of these efforts at resistance are worth recalling. Adorno, looking beyond material to aesthetic value, truth content, and psychological depth, has this to say:15
12. Carolyn Abbate, Unsung Voices: Opera and Musical Narrative in the Nineteenth Century (Princeton,
NJ: Princeton University Press, 1991), 329.
13. Johannes Fabian, Out of Our Minds: Reason and Madness in the Exploration of Central Africa
(Berkeley: University of California Press, 2000).
14. Roland Barthes, Elements of Semiology, trans. Annette Lavers and Colin Smith (New York: Hill and
Wang, 1967), 10.
15. Theodor Adorno, "Music and Language: A Fragment," in Quasi una Fantasia: Essays on Modern Music, trans. Rodney Livingstone (London: Verso, 1992), 1.
This string of negatives probably overstates an essential point; some will argue that
it errs in denying the possibilities set in motion by an impossible alliance. But the
work of music theory in which this statement appears is concerned with what is
specifiable, not with what occupies interstices. It comes as no surprise, then, that
the tenor of this statement notwithstanding, the authors later invoke a most valuable distinction between well-formedness and preference in order to register one
aspect of the music-language alliance. Criteria of well-formedness, which play an
essential role in linguistic grammar, are less crucial in musical grammar than preference rules. This is another way of stacking the musical deck in favor of the aesthetic. To put it simply: music is less about right and wrong (although, as Adorno says, these are important) than about liking something more or less.
Jean Molino links music, language, and religion in arguing their resistance to
definition and in recognizing their symbolic residue:
16. Fred Lerdahl and Ray Jackendoff, A Generative Theory of Tonal Music (Cambridge, MA: MIT Press,
1983), 56.
The phenomenon of music, like that of language or that of religion, cannot be defined or described correctly unless we take account of its threefold mode of existence: as an arbitrarily isolated object, as something produced, and as something perceived. It is on these three dimensions that the specificity of the symbolic largely rests.17
These and numerous other claims form the basis of literally thousands of assertions about music as language. On one hand, they betray an interest in isomorphisms or formal parallelisms between the two systems; on the other, they point to
areas of inexactness, to the complexities and paradoxes that emerge at the site of
their cohabitation. They testify to the continuing vitality and utility of the music
and language metaphor, while reminding us that only in carefully circumscribed
contexts, rather than at a gross level, is it fruitful to continue to entertain the prospect of a deep linkage.
Accordingly, I plead the reader's indulgence in setting forth a set of simple propositions that might form the basis of debate or discussion in a music theory class. Culled from diverse sources and originally formulated to guide a discussion of the challenge of musical semiotics, they speak to aspects of the phenomenon that have been touched upon by Powers, Adorno, Lerdahl and Jackendoff, and Molino. I have attempted to mold them into capsule, generalizable form without, I hope, oversimplifying the phenomena they depict.18 (By "music" in the specific context of the discussion that follows, I refer to a literature, to compositions of the common practice era which form the object of analytical attention in this book. While a wider purview of the term is conceivable, and not just in the domain of European music, it seems prudent to confine the reach of these claims in order to avoid confusion.)
17. Jean Molino, "Musical Fact and the Semiology of Music," trans. J. A. Underwood, Music Analysis 9(2) (1990): 114.
18. Agawu, "The Challenge of Semiotics," in Rethinking Music, ed. Nicholas Cook and Mark Everist (Oxford: Oxford University Press, 1999), 138–160.
presence in such societies of a species of rhythmic and tonal behavior that we may characterize as music making is rarely in serious contention. Music, in short, is necessary to us.19
There are, however, striking and sometimes irreconcilable differences in the materials, media, modes of production and consumption, and significance of music. Indeed, there appear to be greater differences among the world's musics than among its languages. Nicolas Ruwet says that all human languages are apparently of the same order of complexity, but that is not the case for all musical systems.20 And Powers comments that "the linguisticity of languages is the same from language to language, but the linguisticity of musics is not the same from music to music."21 David Lidov "see[s] no variation among languages so extreme as those among musical styles."22 It appears that music is more radically constructed, more artificial, and depends more crucially on context for validation and meaning. So whereas the phrase "natural language" seems appropriate, "natural music" requires some elaboration. While linguistic competence is more or less easily assessed, assessing normal musical competence is a rather more elusive enterprise. Speaking a mother tongue does not appear to have a perfect correlative in musical practice: is it the ability to improvise competently within a given style, to harmonize a hymn tune or folk melody in a native idiom, to add to the repertoire of natural songs when given a text or a situation that brings on a text, to complete a composition whose second half is withheld, or to predict the nature and size of a gesture in a particular moment in a particular composition? Is it, in other words, a creative or compositional ability, a discriminatory or perceptual ability, or a performative capability? So, beyond their mutual occurrence in human society as conventional and symbolic media, the practices associated with language and music signal significant differences.23
2. Unlike language, which is both a medium of communication (ordinary language) and a vehicle for artistic expression (poetic language), musical language exists primarily in the poetic realm, although it can be used for purely communicative purposes. "Please pass the marmalade" uttered at the breakfast table has a direct communicative purpose. It is a form of ordinary language that
19. Gayle A. Henrotte, "Music as Language: A Semiotic Paradigm?" in Semiotics 1984, ed. John Deely (Lanham, MD: University Press of America, 1985), 163–170.
20. Nicolas Ruwet, "Théorie et méthodes dans les études musicales: Quelques remarques rétrospectives et préliminaires," Musique en jeu 17 (1975): 19, quoted in Powers, "Language Models," 38.
21. Powers, "Language Models," 38. Perhaps "the same" is too strong and liable to be undermined by new anthropological findings. See, for example, Daniel L. Everett, "Cultural Constraints on Grammar and Cognition in Pirahã: Another Look at the Design Features of Human Language," Current Anthropology 46 (2005): 621–646.
22. Lidov, Is Language a Music? 4.
23. The place of music and language in human evolution has been suggestively explored in a number of publications by Ian Cross. See, for example, "Music and Biocultural Evolution," in The Cultural Study of Music: A Critical Introduction, ed. Martin Clayton, Trevor Herbert, and Richard Middleton (New York: Routledge, 2003), 19–30. Also of interest is Paul Richards's adaptation of some of Cross's ideas in "The Emotions at War: Atrocity as Piacular Rite in Sierra Leone," in Public Emotions, ed. Perri 6, Susannah Radstone, Corinne Squire, and Amal Treacher (London: Palgrave Macmillan, 2006), 62–84.
would normally elicit an action from ones companion at the table. Let me not
to the marriage of true minds admit impediments, by contrast, departs from the
ordinary realm and enters another in which language is self-consciously ordered
to draw attention to itself. This is poetic language. Whereas ordinary language is
unmarked, poetic language is marked. Like all such binary distinctions, however,
that between ordinary and poetic is not always firm. There are levels of ordinariness in language use; certain ostensibly linguistic formulations pass into the realm
of the poetic by opportunistic acts of framing (as in the poetry of William Carlos
Williams) while the poetic may inflect what one says in everyday parlance (the Ewe
greet each other daily by asking, "Are you well with life?"). So although the distinction is not categorical, it is nevertheless useful at low levels of characterization.
The situation is more ambiguous in music, for despite the sporadic evidence
that music may function as an ordinary medium of communication – as in the
talking drums of West and Central Africa, or in the thought-songs recorded by
Charles Boilès among the Tepehua of Mexico – music's discursive communicative
capacity is inferior to that of language. In a magisterial survey of the music-language phenomenon, Steve Feld and Aaron Fox refer to the "informational redundancy" of musical structures, thus echoing observations made by aestheticians
like Leonard Meyer and others.24 And several writers, conscious of the apparently
asemantic nature of musical art, speak only with hesitation about music's communicative capability. It appears that the predominantly aesthetic function of music
compares only with certain heightened or designated uses of language.
Music's ordinary language is thus available only as a speculative projection.
One might think, for example, of an overt transition in, say, a Haydn sonata, which
in the moment suggests a shifting of gears, a lifting of the action onto a different
plane, an intrusion of craft, an exposure of seams – that perhaps such transitions
index a level of ordinary usage in music. They command attention as mobile rather
than presentational moments. They are means to other, presumably more poetic,
ends. But one must not underestimate the extent to which a poetic impetus infuses
the transition function. The work of a transition may be the moment in which
music comes into its own, needing to express a credible and indigenously musical
function. Such a moment may be suffused with poetry. Attending to a functional
imperative does not take the composer out of his contemplative realm.
A similar attempt to hear ordinariness and poetry in certain operatic functions
conveys the complexity of the application. Secco recitative in conventional understanding facilitates the quick delivery of words, thus speeding up the dramatic
action, while aria slows things down and enables the beauty of music – which is
inseparable from the beauty of voice – to come to the fore. Recitative may thus
be said to perform the function of ordinary language while aria does the work
of poetic language. This alignment is already complicated. The ordinary would
seem to be musically dispensable but dramatically necessary, while the poetic is
not dispensable in either sense. This is another way of restating the musical basis
of the genre.
CHAPTER 1
Music as Language
23
Think, also, of functional music in opera. Think of the moment toward the end
of act 2 of Puccini's Tosca when Scarpia, in return for a carnal favor, consents to
give Tosca safe conduct so that she and Cavaradossi can leave the country. While
Scarpia sits at his desk writing the note, the musical narrative must continue.
Puccini provides functional, time-killing music, music which serves as a form of
ordinary language in contrast to the more elevated utterances sung by Tosca and
Scarpia both before and after this moment. Opera, after all, is music drama; the
foundation of operatic discourse is music, so the functional aspect of this moment
can never eclipse its contemplative dimension. The music that serves as time-killing music is highly charged, beautiful, and burdened with significance. It is as
poetic – if not more so, given its isolation – as any music in Tosca. So while Puccini
may be heard speaking (or singing, or communicating) in ordinary language,
his utterance is fully poetic. The fact that music is not a system of communication should not discourage us from exploring the messages that music sometimes
(intermittently) communicates. It is precisely in the tension between the aesthetic
and communicative functions that music analysis finds an essential challenge to its
purpose, its reason for being.25
3. Unlike language, music exists only in performance (actual, idealized, imagined, remembered). The claim that language as social expression is ever conceivable outside of the context of performance may appear counterintuitive at first.
When I greet you, or say a prayer in the mosque, or take an oath, am I not performing a text? And when I read a poem or novel to myself, am I not similarly engaged
in a performance, albeit a silent one? The system of language and the system of
music exist in a synchronous state, harboring potential relationships, relationships
waiting to be released in actual verbal or musical compositions. As soon as they
are concretized in specific compositions, they inevitably enshrine directions for
performance. However, partly because of language's communicative functions – as
contrasted with music's aesthetic function – partly because it is possible to make
true or false propositional statements in language, and partly because of its domination of our conceptual apparatus, language appears to display a wider range of
articulatory possibility than does music, from performed or heightened to marked,
ordinary, or unmarked.
Music, by contrast, is more restricted in its social tendency, more marked when
it is rendered, and possibly silent when it is not being performed or remembered.
"A musical work does not exist except in the time of its playing," writes Jankélévitch.26 It is true that some musicians have music on the brain all the time, and it
is also true that some trained musicians can hear notated music in their heads –
although this hearing is surely a hearing of something imagined, itself possible
only against a background of a prior (remembered) hearing if not of the particular
composition then of other compositions. It appears that the constraints placed on
25. I have elsewhere used this same example to undermine the distinction, sometimes drawn by
ethnomusicologists writing about African music, between functional music and contemplative
music. See Agawu, Representing African Music: Postcolonial Notes, Queries, Positions (New York:
Routledge, 2003), 98–107.
26. Jankélévitch, Music and the Ineffable, 70.
music making are always severe, presumably because the ontological modes of
verbal behavior are more diffuse. The difference in performability between music
and language is finally relative, however, not absolute.
4. Like language (in its manifestation as speech), music is organized into
temporally bounded or acoustically closed texts. Whether they be oral texts (like
greetings, oaths, prayers) or written texts (poems, novels, speeches, letters), verbal
texts share with musical texts a comparable internal mode of existence. Certain
otherwise pertinent questions about the identity of a musical work are, at this level,
rendered irrelevant by the undeniable fact that a text, work, or composition has,
in principle, a beginning, a middle, and an ending. At this level of material and
sonic specificity, the beginning-middle-ending scheme represents only the order
in which events unfold, not the more qualitative measure of the function of those
parts. The work of interpretation demands, however, that once we move beyond this
material level, compositions be reconfigured as open texts, conceptually boundless
fields, texts whose necessary but in a sense mundane temporal boundaries are not
necessarily coterminous with their sense boundaries.27
5. A musical composition, like a verbal composition, is organized into discrete
units or segments. Music is, in this sense, segmentable. Understanding a temporal
phenomenon is only possible if the whole, however imagined or conceptualized,
is grasped in terms of its constituent parts, units, or segments. Many writings in
music theory, especially the great seventeenth- and eighteenth-century treatises on
rhetoric and music (Burmeister, Bernhard, Mattheson) are either premised upon
or develop an explicit view of segments. And many later theories, be they prescriptive and compositional (Koch and Czerny) or descriptive and synthetic (Tovey and
Schenker), lay great store by building blocks, minimal units, basic elements, motives,
phrases, periods – in short, constitutive segments. In thus accounting for or prescribing the discourse of a composition, theorists rely on a conception of segmentation as an index of musical sense or meaning. Now, the issue of music's physical
segmentability is less interesting than what might be called its cultural or psychological segmentability. The former is a quantitative or objective measure, the latter a
heavily mediated qualitative or subjective one. Culturally conditioned segmentation
draws on historical and cultural data in a variety of formal and informal discourses
to determine a works significant sense units and their mode of succession. The
nature of the units often betrays allegiance to some metalanguage or other, as when
we speak of "musical logic," "developing variation," "mixture," or "octatonic collection."
6. Although segmentable, the musical composition is more continuous in its
real-time unfolding than is a verbal composition. The articulatory vehicles of verbal and musical composition differ, the former marked by virtual or physical rests
and silences, the latter by real or imagined continuities. The issue of continuity is
only partly acoustical. More important are the psychological and semantic sources
of continuity. Lacking an apparent semantic dimension that can activate certain
27. On open texts, see Umberto Eco, "The Poetics of the Open Work," in Eco, The Role of the Reader:
Explorations in the Semiotics of Texts (Bloomington: Indiana University Press, 1979), 47–66. For an
incisive discussion, see Nattiez, Music and Discourse, 69–101.
30. Cooke, The Language of Music (Oxford: Oxford University Press, 1959).
31. Cooke, The Language of Music, 115, 140.
32. Hans von Bülow, Preface to C. P. E. Bach, Sechs Klavier Sonaten (Leipzig: Peters, 1862), 3.
In painting words, the composer finds – often invents – an iconic sign for a nonmusical reality. Relying upon these "musical-verbal dictionaries,"37 composers and
listeners constrain musical elements in specified ways in order to hear them as one
thing or another. While such prescription does not eliminate alternative meanings
for the listener, it has a way of reducing the potential multiplicity of meanings and
directing the willing listener to the relevant image, narrative, or idea. Extrinsic
meaning therefore depends on layers of conventional signification.
Intrinsic meaning, too, depends on an awareness of convention, but because
we often take for granted our awareness of conventions, we tend to think of intrinsic meanings as internally directed and of immediate significance. A dominant-seventh chord indexing an immediate or postponed tonic, a rising melodic gap
filled by a complementary stepwise descent, an opening ritornello promising a
contrasting solo, or an inaugural chromatic pitch bearing the full potential of later
enharmonic reinterpretation – these are examples of intrinsic meaning, meaning
that is apparently grasped without recourse to external, nonmusical, knowledge.
The extrinsic-intrinsic dichotomy is ultimately false, however, for not only
do intrinsic meanings rely on certain conventional constructs, but their status as
intrinsic is continually transformed in the very moment that we apprehend their
signifying work. It requires external knowledge – or, at least, conventional knowledge – to expect a dominant-seventh to move to the tonic; it could just as readily
move to the submediant; also, depending on its position and the local voice-leading situation, its behavior may be modified accordingly. Although some theories
claim nature as the origin of some of their central constructs – such as the major
triad or the notion of consonance – not until there has been cultural intervention
is the natural made meaningful. The extrinsic-intrinsic dichotomy, then, enshrines
an opposition that is only apparent, not real. Indeed, oppositions like this are common throughout the literature, including subjective-objective, semantic-syntactic,
extramusical-(intra)musical, extroversive-introversive, extrageneric-congeneric, exosemantic-endosemantic, expression-structure, and hermeneutics-analysis. As points
of departure for the exploration of musical meaning, as tools for developing provisional taxonomies, such dichotomies may be helpful in distributing the basic features
of a given composition. But beyond this initial stage of the analysis, they must be
used cautiously, for the crucial issue is not whether a given composition has meaning
extrinsically or intrinsically but in what sense one or the other term applies.
10. Whereas language interprets itself, music cannot interpret itself. "Language
is the interpreting system of music."38 If musical units have no fixed meanings, if
the semantic element in music is merely "intermittent,"39 if it is not possible to
make a propositional statement in music, and if music is ultimately untranslatable,
then music cannot interpret itself. There are, to be sure, intertextual resonances
in music that might be described in terms of interpretive actions. For example, a
variation set displays different, sometimes progressively elaborate and affectively
37. Powers, "Language Models," 2.
38. Benveniste, "The Semiology of Language," 235.
39. Carl Dahlhaus, "Fragments of a Musical Hermeneutics," trans. Karen Painter, Current Musicology
50 (1991): 5–20.
order to simplify aspects of the analysis. The choice of instrumental (or untexted)
music should require no further justification at this point, except to say that if
musical analysis is obliged to deal with musical problems, then dispensing – if only
temporarily – with the influence of words, drama, or dance may be advantageous.
The purpose of the analysis is to pinpoint a few salient features of each composition and to suggest some of the meanings to which they give rise. It will be obvious,
I trust, that although no attempt is made to apply the principles enshrined in the
ten propositions discussed earlier, the following observations about structure are
largely consistent with the ontology of music implicit in the propositions.43
Example 1.1. Schubert, Piano Sonata in C Minor, D. 958, Adagio, bars 1–18.
[Music notation; marked Adagio, sempre ligato, pp.]
musicians, meaning in this Schubert work is intimately tied to the tonal tendencies
in each phrase, and the sense of each phrase is, in turn, conveyed by the degree of
closure it exhibits. The first of these punctuation marks occurs in bars 3–4, where
the half-cadence pronounces the first 4 bars unfinished, incomplete, open – a
promissory note. The status of the dominant chord in bar 4 is elevated in bars 7–8
through tonicization. By acquiring its own dominant in bars 6–7, the dominant
on the downbeat of bar 8 displays an "egotistical" tendency, as Schenker would say,
demanding recognition for itself rather than as a mere accessory to the reigning
tonic. But this moment in the sun is short-lived as D-natural is replaced by D-flat
in bar 8 to transform the E-flat major chord into a dominant-seventh of A-flat. The
join between bars 8 and 9 is smoothed over, and in no time we are back where we
started: bar 9 is the same as bar 1. Notice how the bass progression by fifths in
5–8 (C–F–B♭–E♭) offers us a heightened view of the tonicized dominant in bar 8.
Thus, what was said at the conclusion of bars 1–4 is intensified in the course of
bars 5–8.
At 9, the motion begins again. By this act of beginning again, Schubert lets us
understand that the destination of the next 8 bars will be different, that whereas
the two previous 4-bar phrases (1–4 and 5–8) remained open, closure is now a
definite possibility. But for the registrally elevated lower voices, all is as before
from the downbeat of bar 9 through the third eighth-note of bar 11, after which
point Schubert slips into the subdominant (bar 12, downbeat), extending the
chord through its own minor subdominant. This is not a destination we would
have inferred from the beginning of the piece, and the manner of its attainment
tells us that the D-flat major chord is on its way somewhere, that it is not a goal
but a station. Schubert confirms the transitory nature of bar 12 by composing past
it into the cadence at bars 13–14. In retrospect, we might hear the pause on the
subdominant in 12 as a pause on the predominant sonority within a larger cadential group, IV–V7–I across bars 12–14. That the motion would end eventually in
a satisfying close we took for granted from the beginning; but with what imagination and play this would be accomplished, we could not have predicted. With the
cadence in 13–14, the composition reaches an end, a definitive end, perhaps. Alas,
Schubert is not done yet. He opens up a new, upper register, repeats the progression of bars 11–12, leading to the predominant at 15–16, abandons that register,
and returns to the lower register in order to close, using the cadence figure from
13–14 in 17–18 to mark the end of the composition as a whole.44
With this first pass through the composition under the guide of its cadences,
we have begun to broach the nature of Schubert's tonal imagination and the nature
of musical meaning. There is more to the tonal life than the sense conveyed by
cadences, however. The golden rule for exploring tonal meaning is to be mindful
of the origin and destination of every event, to understand that no moment stands
in isolation. Tonal composing is premised on an always-connected ideology that
governs the community of tones. This is not to deny that some connections can be
interpreted in terms of contextual discontinuities; it is only to claim a premise of
44. The play with register hinted at here is brought to a spectacular finish at the end of the movement.
connectedness. A chosen event comes from somewhere and leads elsewhere. Within
this broader goal-directedness, an engaging network of motions unfolds at more
local levels. Structural entities are elaborated, decorated, or extended in time. Some
events emerge as hierarchically superior within individual prolongational contexts.
Subsequent chapters in this book will explore the nature of tonal meaning, but
we can make a second start on the terrain of Schuberts sonata by observing the
process by which structural entities are elaborated. Example 1.2 rewrites Schubert's
18-bar composition as a series of 11 building blocks or units.45 Each expresses a
tonal-contrapuntal motion; therefore, each is akin to an item of vocabulary. My
presentation of each fragment begins with its putative structural underpinning
and then hypothesizes a series of transformations that set Schuberts surface into
relief. In this way, tonal meaning is understood in reference to tonal composition,
not external association. The best way to appreciate example 1.2 is to play through
it at the piano and observe what Schubert does with simple processes like extension of a chord (units 1, 6), voice exchange (units 2, 7), half-cadence (unit 3), full
cadence (units 5, 9, and 11), progression from tonic to subdominant (on its way to
a cadential dominant; units 8, 10), and circle-of-fifths progression (unit 4).
With these speculative derivations, we can claim to have accounted in one way
or another for all of the local harmonic and voice-leading motions in the composition. Here is a summary:46
Bars 1–2¹ = extension of tonic, I–I6
Bars 2¹–3¹ = extension of tonic by voice exchange
Bars 3¹–4 = progression from I–V
Bars 5–8 = cadence in V
Bars 8–9¹ = V7–I progression
Bars 9–10¹ = bars 1–2¹
Bars 10¹–11¹ = bars 2¹–3¹
Bars 11¹–12 = progression from I–IV, on its way to V (double meaning)
Bars 13–14 = cadence in I
Bars 15–16 = bars 11–12
Bars 17–18 = bars 13–14
Understanding these ways of proceeding forms the basis of tonal understanding.
The analyst imagines Schuberts decision-making process from the point of view of
the finished work. There is no recourse to biography or other external information
45. Here and in subsequent examples, the units of a composition are numbered using a simple ordinal
scheme: 1, 2, 3, etc. The main advantage of this practice is the neutrality it confers on the succession
of units. This is part of a larger strategy to escape the overdetermination of a more conventional
apparatus. For an important (but by no means isolated) precedent, see Derrick Puffett, "Bruckner's
Way: The Adagio of the Ninth Symphony," Music Analysis 18 (1999): 5–100. Puffett's units are designated as "periods," and there are 33 of them in this movement. He also cites an earlier instance of
"period" analysis in Hugo Leichtentritt's Musical Form (Cambridge, MA: Harvard University Press,
1951; orig. 1911).
46. Superscripts are used to locate a specific beat within the bar. Thus, 2¹ denotes beat 1 of bar 2, 2² is
beat 2 of bar 2, and so on.
[Example 1.2: music notation. Each unit appears as a structural underpinning followed by one or more "becomes" transformations leading to Schubert's surface; paired labels (units 1 and 6; 2 and 7; 8 and 10; 9 and 11) mark units sharing a derivation.]
at this level of the analysis, only an understanding of the possibilities of manipulating the language.
When we first noted the different registral placement of the nearly identical
material in bars 11–12 and 15–16, we retreated from assigning structural function
to registral differentiation. Register as a dependent dimension is not thought to have
a syntax. But without contesting this theoretical fact, we can see from the ways in
which Schubert manipulates register during the return of the opening (example 1.3)
that speaking in terms of a registral discourse may not be hyperbolic. It is the destination of the tonal narrative in association with register that is of interest here.
Example 1.3. Schubert, Piano Sonata in C Minor, D. 958, Adagio, bars 102–115.
[Music notation; dynamics ppp and pp, with un poco cresc.]
progression ii6–V6/4–5/3. When juxtaposed, the two cadences carry the respective senses of question and answer, open utterance followed by closed one. Indeed,
from the beginning of the composition, we have been offered a dualistic gesture as
rhetorical figure and premise. The unison hunt-style figure that suggested trumpets or horns (bars 1–2) is answered by the more delicate figures of bars 3–4, perhaps a hint at the Empfindsamer style. Similarly, the sequential repetition of bars
1–2 as 5–6 poses another question that is answered immediately by the equivalent of bars 3–4, namely, bars 7–8. In terms of textural design, then, the period is
symmetrical: 4 + 4 subdivided into (2 + 2) + (2 + 2). Phrase division of this sort is
part of the unmarked or ordinary syntax of the classical style. From this point of
view, the period is regular and unproblematic.
Example 1.4. Mozart, Piano Sonata in D Major, K. 576, first movement, bars 1–8.
[Music notation; marked Allegro.]
Beneath this trim, balanced, indeed classical exterior, however, there lurk a
number of meaningful tensions that grow out of the closing tendencies of Mozart's
materials. Three such tensions may be identified. The first resides in the play of
register. In the initial 2 + 2 pairing, the second pair answers the first in a higher
but contiguous register. The effect of registral change is readily felt if we rewrite the
4-bar phrase within a single register (example 1.5). Although it preserves the question-and-answer pattern together with topical and textural contrasts, this hypothetical version mutes the sense of a new plane of activity in bars 3–4. The second
4-bar phrase rehearses the registral patterning in the first, so we hear the contrast
of register as an emerging contextual norm. The norm, in other words, enshrines
a residual registral dissonance: the question posed at home, the answer delivered
away from home. When aligned with the syntactically normative, repeated I–V
harmonic progression, an interdimensional dissonance results.
Example 1.5. Registral modification of bars 3–4 in Mozart, Piano Sonata in D
Major, K. 576, first movement.
But this is only the most superficial of animating tensions, indeed one that
might be dismissed entirely by those who regard register as a secondary rather
than primary parameter. (According to Leonard Meyer, primary parameters in the
common practice era are harmony, melody, and rhythm, while dynamics, register,
and timbre are secondary.)47 Register apparently resists a syntactical reading. Consider, though, a second source of tension, namely, the events in bar 7. Had Mozart
continued the phrase mechanically, he would have written something like what is
shown in example 1.6, thereby preserving the rhythmic profile of bars 3–4. But he
eschews a mechanical answer in favor of a little display of invention via his variation technique.48 The charming figure in bar 7 seems literally to escape from the
phrase and to run toward a little infinity; it suggests an improvisation, an utterance
with an otherworldly aura. This heightened sense is set into relief by the decidedly
ordinary cadential chords in bars 7–8, which remind us of where we are in the
formal process. The cadence is unmarked, dutifully executed in fulfillment of a
syntactic obligation; it is part of Mozart's ordinary language.
Cross-referencing seems as significant as driving to a cadence. To speak implicitly of discontinuity between, say, bars 1–2 and 3–4 may seem exaggerated, but it
would be an interpretive deficit to discount resistance to continuity by acknowledging only our privileged lines and voice-leading paths.
There is yet another source of tension, one that takes us into the realm of tonal
tendency. Bar 5, too, is marked rather than unmarked. This is not the consequent
we would normally expect to the 4-bar antecedent. Since bars 1–4 began on I and
finished on V, bars 5–8 might begin on V and return to I, or begin on I and return to
it via V. To begin on ii, as Mozart does here, is to create a sense of a different – perhaps more extended – temporal and tonal trajectory. It is as if we were beginning
a sequential motion in 4-bar units, one that would presumably result in a 12-bar
period. Thus, the initiating A–D motion (bar 1, including upbeat) is followed by
B–E (bar 5, including upbeat), and would presumably continue as C♯–F♯; eventually, of course, the pattern will have to be broken. The promise of a sequence is not
fulfilled, however. Mozart turns the phrase inward and releases an ordinary functional cadence in bars 7–8. The effect of bars 5–6, then, is to reorient the overall
harmonic trajectory. The clear I–V progression that shapes bars 1–4 is answered
by an expanded ii–V–I cadence in bars 5–8, with the ii occupying three-quarters
of the complementary phrase. It is as if the motion were begun and suspended at V
(bars 1–4) and then resumed and brought to completion in the complementary ii–
V–I progression (bars 5–8). To subsume this quite palpable drama under a simple
I–V–I global progression, as Felix Salzer's Schenkerian reading does,49 would be
canonically correct, of course, but it would fail to convey the sharply etched shapes
that give Mozarts composition its distinct rhetorical character and meaning.
More could be said about these two compositions by Schubert and Mozart,
but what has been said should suffice in this introductory context. I hope that the
chapter has, first, provided a general orientation to the differences between music
and language, and second, through these preliminary analytical ventures, indicated
some of the issues raised in an analysis of musical meaning. Some readers may
sense a gap – a phenomenological gap, perhaps – between the more general discussion of music as language and the specific analytical discussion that followed.
This is partly a question of metalanguage. The language of music analysis often
incorporates nomenclature different from ordinary language, and this, in turn,
is motivated in part by the need to refer to specific portions of the musical text,
the object of attention. The score, however, is not a simple and stable object but a
nexus of possibilities. Analytical language carries a host of assumptions about the
analyst's conception of musical language – how notes connect, how tonal meanings are implied, how a sense of ending is executed, and so on. The point to be
emphasized here is that the search for meaning and truth content in any composition is not productive without some engagement with the musical material and its
technical structure. It should be clear, too, that probing the technical structure of
a composition is a potentially never-ending process. The answers obtained from
49. Felix Salzer, Structural Hearing: Tonal Coherence in Music, vol. 2 (New York: Dover, 1952), 95.
such an exercise are provisional, never final. Ideally, such answers should engender
other, more complex questions, the process continuing ad infinitum. Final-state
declarations about the meaning of this or that composition should be approached
with the greatest care so as not to flatten, cheapen, or undercomplicate what the
work of art makes possible. Readers may wish to keep this ideal in mind while
reading the analyses in subsequent chapters, where the imperatives of theory-building sometimes enjoin us to curtail exploration in order to frame the (necessarily provisional) outcomes for critical assessment.
CHAPTER
Two
Criteria for Analysis I
If music is like language but not identical to it, how might we formulate a description of its material content and modes of organization that captures its essence as
an art of tone within circumscribed historical and stylistic contexts? The purpose
of this chapter and the next is to provide a framework for answering this question.
To this end, I have devised six rubrics for distributing the reality of Romantic music:
topics or topoi; beginnings, middles, and endings; high points; periodicity (including discontinuity and parentheses); three modes of enunciation, namely, speech
mode, song mode, and dance mode; and narrative. The first three are discussed in
this chapter, the other three in chapter 3. Together, they facilitate an exploration of
the immediately perceptible dimensions of Romantic compositions. Not every criterion is pertinent to every analytical situation, nor are the six categories nonoverlapping. But for each compositional situation, one, two, or some combination of the
six can help to convey salient aspects of expression and structure. It is best, then, to
think of the criteria variously as enabling mechanisms, as schemes for organizing
intuited insights, and as points of departure for further exploration. Romantic repertoires are of course vast and diverse, but to claim that there is no consistent principle of structure that governs Romantic music may be to undervalue a number of
recurring strategies.1 We need to find ways to manage and characterize heterogeneity, not to contain, mute, or erase it. We need, in short, to establish some conditions
of possibility by which individual students can pursue in greater analytical detail
the effervescent, evanescent, and ultimately plural signification of Romantic music.
Topics
Setting out the compositional and stylistic premises of music in the classic era,
Leonard Ratner draws attention to its mimetic qualities:
1. Leonard G. Ratner, Music: The Listener's Art, 2nd ed. (New York: McGraw-Hill, 1966), 314.
From its contacts with worship, poetry, drama, entertainment, dance, ceremony, the military, the hunt, and the life of the lower classes, music in the early 18th century developed a thesaurus of characteristic figures, which formed a rich legacy for classic composers. Some of these figures were associated with various feelings and affections; others had a picturesque flavor. They are designated here as topics – subjects for musical discourse.2
Ratner's topics include dances like minuet, contredanse, and gavotte, as well as styles like hunt, singing, fantasia, and Sturm und Drang. Although the universe of eighteenth-century topics is yet to be formulated definitively as a fixed, closed category with an attendant set of explicit discovery procedures, many of Ratner's
core topics and his ways of reading individual compositions have served in recent
years to enliven interpretations of music by Mozart, Haydn, Beethoven, and their
contemporaries. The concept of topic provides us with a (speculative) tool for the
imaginative description of texture, affective stance, and social sediment in classic
music.3
A comparable exercise of establishing the compositional and stylistic premises
of Romantic music, although challenging in view of the greater heterogeneity of
compositional ideals in the latter repertoire, would nonetheless confirm the historical persistence of topoi. Chorales, marches, horn calls, and various figures of
sighing, weeping, or lamenting saturate music of this era. There is, then, a level of
continuity between eighteenth- and nineteenth-century styles that would undermine historical narratives posited on the existence of a categorical distinction
between them. It would be equally problematic, however, to assert a straightforward historical continuity in the way topics are used. On one hand, the largely
public-oriented and conventional topics of the eighteenth century often exhibit a
similar orientation in the nineteenth century. For example, the communal ethos
or sense of unanimity inscribed in a topic like march remains largely invariant.
The slow movement of Beethoven's Eroica Symphony; Mendelssohn's Song without Words in E minor, op. 62, no. 3; the little march with which the protagonist of Schumann's "Help me, sisters," from Frauenliebe und Leben, no. 6, projects the joy of a coming wedding; Liszt's Rákóczy March; the Pilgrims' march from Berlioz's Harold in Italy; and the determined opening movement of Mahler's Sixth Symphony – all are united by a mode of utterance that is irreducibly social and communal, a mode opposed to aloneness. On the other hand, the ascendancy in the nineteenth century of figures born of a private realm, figures that bear the marks
2. Leonard G. Ratner, Classic Music: Expression, Form, and Style (New York: Schirmer, 1980), 9.
3. On topics in classic music, see Ratner, Classic Music; Wye Jamison Allanbrook, Rhythmic Gesture in Mozart: Le nozze di Figaro and Don Giovanni (Chicago: University of Chicago Press, 1983); Agawu, Playing with Signs: A Semiotic Interpretation of Classic Music (Princeton, NJ: Princeton University Press, 1991); Elaine Sisman, Mozart: The Jupiter Symphony (Cambridge: Cambridge University Press, 1993); Hatten, Musical Meaning in Beethoven; Monelle, The Sense of Music; Raymond Monelle, The Musical Topic: Hunt, Military and Pastoral (Bloomington: Indiana University Press, 2006); and William E. Caplin, "On the Relation of Musical Topoi to Formal Function," Eighteenth-Century Music 2 (2005): 113–124.
CHAPTER 2
of individual composerly idiolects, speaks to a new context for topic. If topics are
commonplaces incorporated into musical discourses and recognizable by members of an interpretive community rather than secret codes to be manipulated
privately, then the transition into the Romantic period may be understood not as
a replacement but as the incorporation of classic protocol into a still more variegated set of Romantic discourses.
In order to analyze a work from the point of view of its topical content, one
needs access to a prior universe made up of commonplaces of style known to
composers and their audiences. Topics are recognized on the basis of prior
acquaintance. But recognition is an art, and there is simply no mechanical way of
discovering topics in a given work. Topics are therefore also constructions, not
naturally occurring objects. Without deep familiarity with contemporaneous as
well as historically sanctioned styles, it is simply not possible to know what the
categories are nor to be able to deploy them imaginatively in analysis.
Few students of classical music today grew up dancing minuets, bourrées,
and gavottes, marching to janissary music, or hearing fanfares played on hunting
horns. Only a few are skilled at paraphrasing existing compositions, improvising
keyboard preludes, or setting poetry to music in a consistent personal idiom, not
in preparation for professional life as a composer but to enhance a general musical
education. In other words, thorough grounding in the sonic residue of late eighteenth- and nineteenth-century styles, which constitutes a prerequisite for effective topical analysis, is not something that can be taken for granted. Lacking this
background, we need to construct a universe of topics from scratch.
For the more extensively researched classic repertoire, a universe of topic is
already implicit in the writings of Ratner, Allanbrook, Hatten, and Monelle. I list
61 of the more common topics here without elaboration simply to orient readers to the worlds of affect, style, and technique that they set in motion. Some are
everyday terms used by musicologists; others are less familiar terms drawn from
various eighteenth-century sources. All occur in various compositionssome
well known, others obscure.
The Universe of Topic for Classic Music

1. Alberti bass
2. alla breve
3. alla zoppa
4. allemande
5. amoroso style
6. aria style
7. arioso
8. bound style or stile legato
9. bourrée
10. brilliant style
11. buffa style
12. cadenza
13. chaconne bass
14. chorale
15. commedia dell'arte
16. concerto style
17. contredanse
18. ecclesiastical style
19. Empfindsamer style
20. Empfindsamkeit (sensibility)
21. fanfare
22. fantasia style
23. French overture style
24. fugal style
25. fugato
26. galant style
27. gavotte
28. gigue
29. high style
30. horn call
31. hunt style
32. hunting fanfare
33. Italian style
34. Ländler
35. learned style
36. Lebewohl (horn figure)
37. low style
38. march
39. middle style
40. military figures
41. minuet
42. murky bass
43. musette
44. ombra style
45. passepied
46. pastorale
47. pathetic style
48. polonaise
49. popular style
50. recitative (simple, accompanied, obligé)
51. romanza
52. sarabande
53. siciliano
54. singing allegro
How is this universe domesticated within a given work? Because there exist,
by now, several considered demonstrations of topical analysis, we can pass over
the eighteenth-century portions of this discussion rapidly by simply mentioning
the topical content of four canonical movements by Mozart. The first movement
of the Piano Sonata in F Major, K. 332, includes aria style, singing style, Alberti
bass, learned style, minuet, horn duet, horn fifths, Sturm und Drang, fanfare,
amoroso style, bound style, and brilliant style. The first movement of the Piano
Sonata in D Major, K. 284, incorporates (references to) concerto style, murky bass,
singing style, Trommelbass, brilliant style, march, recitative obligé style, fanfare,
and bound style. The first movement of the Jupiter Symphony, K. 551, includes
fanfare, march, singing style, Sturm und Drang, contredanse, and learned style.
And the introduction to the Prague Symphony, K. 504, includes (allusions to)
French overture, Empfindsamkeit, singing style, learned style, fanfare, and ombra.
Where and how these topics are used and their effect on overall expression and
structure are the subjects of detailed studies by Allanbrook, Ratner, Sisman,
Silbiger, and myself.4
Topical analysis begins with identification. Compositional manipulations of
rhythm, texture, and technique suggest certain topical or stylistic affiliations, and
the analyst reaches into his or her stock (or universe of topics) assembled from
prior acquaintance with a range of works in order to establish correlations. Identification is followed by interpretation. Topics may enable an account of the form
or inner dynamic of a work, its expressive stance, or even its structure. The use
of identical or similar topics within or between works may provide insight into
a works strategy or larger aspects of style. And the shapes of individual topics
4. For a fuller discussion of topics in K. 332, see Wye J. Allanbrook, "Two Threads through the Labyrinth," in Convention in Eighteenth- and Nineteenth-Century Music: Essays in Honor of Leonard G. Ratner, ed. Wye J. Allanbrook, Janet M. Levy, and William P. Mahrt (Stuyvesant, NY: Pendragon, 1992), 125–171; and Alexander Silbiger, "'Il chitarrino le suonerò': Commedia dell'arte in Mozart's Piano Sonata K. 332" (paper presented at the annual meeting of the Mozart Society of America, Kansas City, November 5, 1999). On K. 284, see Leonard G. Ratner, "Topical Content in Mozart's Keyboard Sonatas," Early Music 19(4) (1991): 615–619; on K. 551, see Sisman, Mozart: The Jupiter Symphony; and on K. 504, see Ratner, Classic Music, 27–28 and 105–107; Agawu, Playing with Signs, 17–25; and Sisman, "Genre, Gesture and Meaning in Mozart's Prague Symphony," in Mozart Studies, vol. 2, ed. Cliff Eisen (Oxford: Oxford University Press, 1997), 27–84.
may enhance appreciation of the sonic quality of a given work and the nature of a composer's rhetoric.
Constructing a comparable universe for Romantic music would fill more
pages than we have at our disposal. Fortunately, the material for constructing
such a universe may be gleaned from a number of books and articles. Ratner's book on Romantic music, although not deeply invested in notions of topic, draws
attention to those aspects of compositions that signify in the manner of topics
even while stressing the role of sheer sound, periodicity, texture, and harmony
in this repertoire.5 In two related books, Raymond Monelle has given due consideration to ideas of topic. The first, The Sense of Music, supplements a critical account of Ratner's theory with a set of semiotic analyses of music by Bach,
Mahler, Tchaikovsky, and others. The second, The Musical Topic: Hunt, Military
and Pastoral, explores the musical and cultural contexts of three musical topics
through a broad historical landscape. Monelle's interest in the latter book is not in reading individual compositions for possible topical traces (although he provides a fascinating description of hunts in instrumental music by Mendelssohn, Schumann, Paganini, Franck, and Bruckner),6 but in assembling a kaleidoscope of contexts for topics. While The Sense of Music constitutes a more or less traditional (but highly suggestive) music-analytical exercise – complete with the theoretical self-consciousness that became evident in musical studies during the 1990s – The Musical Topic moves in the direction of the larger humanistic enterprise known as cultural studies, emphasizing an array of intertextual resonances. There are,
however, writers who remain committed to close readings of musical works
informed by topic. One such writer is Robert Hatten, whose Interpreting Musical
Gestures, Topics, and Tropes: Mozart, Beethoven, Schubert extends the interpretive
and reflective exercise begun in his earlier book, Musical Meaning in Beethoven:
Markedness, Correlation and Interpretation, and supplements it with a new theory
of gesture.7
Most pertinent is a valuable article by Janice Dickensheets in which she cites
examples from a broad range of composers, among them Carl Maria von Weber,
Chopin, Schubert, Berlioz, Mendelssohn, Smetana, Grieg, Heinrich Herz, Saint-Saëns, Liszt, Verdi, Brahms, Mahler, and Tchaikovsky. She notes the persistence
into the nineteenth century of some of Ratners topics, including the musical
types minuet, gigue, siciliano, and march, and the musical styles military, hunt,
pastoral, and fantasia; the emergence of new styles and dialects; and the contextual inflection of old topics to give them new meanings. Her lexicon includes the
following, each of which is illustrated with reference to a specific compositional
manifestation:8
5. Ratner, Romantic Music: Sound and Syntax (New York: Schirmer, 1992).
6. Monelle, The Musical Topic, 85–94.
7. Robert Hatten, Interpreting Musical Gestures, Topics, and Tropes: Mozart, Beethoven, Schubert
(Bloomington: Indiana University Press, 2004).
8. Janice Dickensheets, "Nineteenth-Century Topical Analysis: A Lexicon of Romantic Topoi," Pendragon Review 2(1) (2003): 5–19.
1. archaizing styles
2. aria style
3. bardic style
4. bolero
5. Biedermeier style
6. Chinoiserie
7. chivalric style
8. declamatory style (recitative style)
9. demonic style
10. fairy music
11. folk style
12. gypsy music
13. heroic style
14. Indianist style
15. Italian style
16. lied style or song style (including lullaby, Kriegslied, and Winterlied)
17. pastoral style
18. singing style
19. Spanish style
20. style hongrois
21. stile appassionata
22. tempest style
23. virtuosic style
24. waltz (Ländler)
10. recitativo
11. lamenting, elegiac
12. citations
13. the grandioso, triumfando (going back to the heroic theme)
14. the lugubrious type, deriving at the same time from appassionato and lamentoso (lagrimoso)
15. the pathetic, which is the exalted form of bel canto
16. the pantheistic, an amplified variant of either the pastoral theme or of the religious type
In the music of Mahler, the essential utterance is heterogeneous at the core, and
although not all aspects of such heterogeneity can be given a topical designation,
many can. The following are some of the topics regularly employed by Mahler:11
1. nature theme
2. fanfare
3. horn call
4. bird call
5. chorale
6. pastorale
7. march (including funeral march)
8. arioso
9. aria
10. minuet
11. recitative
12. scherzo
13. bell motif
14. Totentanz
15. lament
16. Ländler
17. march
18. folk song
Press, 1992], 115), and in Liszt's use of symbols in his Transcendental Études (see Samson, Virtuosity and the Musical Work: The Transcendental Studies of Liszt [Cambridge: Cambridge University Press, 2007], 175–197). In an unpublished study of Paganini's violin concertos, Patrick Wood highlights a sharply profiled expressive genre which progresses from a march topic to an opposing lyrical topic (such as the singing style) as the frame of the exposition of the first movement. See Wood, "Paganini's Classical Violin Concerti" (unpublished seminar paper, Princeton University, 2008).
11. Among commentators on Mahler's music, something approaching the notion of topos appears most explicitly in the writings of Constantin Floros. See, for example, his Gustav Mahler: The Symphonies, trans. Vernon Wicker (Portland, OR: Amadeus, 1993).
compositions like the Rite of Spring, Pierrot Lunaire, Salome, or Webern's Five Pieces for String Quartet, op. 5, resistance to the past is, paradoxically, a way of registering belief in its potency, ultimately of displaying that past even while denying it. A topical approach supports such counternarratives.
The universe of topic has thus undergone a range of transformations from the
eighteenth through the nineteenth and twentieth centuries and into the twenty-first. To put these developments in a nutshell: in the eighteenth century, topics
were figured as stylized conventions and were generally invoked without pathos
by individual composers, the intention being always to speak a language whose
vocabulary was essentially public without sacrificing any sort of will to originality.
In the nineteenth century, these impulses were retained, but the burgeoning of
expressive possibilities brought other kinds of topic into view. Alongside the easily recognized conventional codes were others that approximated natural shapes
(such as the dynamic curve or high-point scheme that we will discuss shortly) and
some that were used consistently within a single composers oeuvre or idiolect
(Schumann's numerous ciphers and the Florestan, Eusebius, and Raro personifications are cases in point). Twentieth-century topical practice became, in part, a
repository of eighteenth- and nineteenth-century usages even as the universe was
expanded to include the products of various strategic denials. Thus, certain rhetorical gestures associated with Romantic music took on a historical or topical role
in twentieth-century music, while the dynamic curve, which we will discuss under
the rubric high point, was, despite its quintessentially Romantic association, also
found in a variety of twentieth-century repertoires, including electronic music.12
Musics associated with specific groups (Jewish, gypsy) retained their vitality for quotation and allusion, while newer musical developments – such as the African-American traditions of blues, gospel, funk, jazz, and rap – provided material for topical exploration and exploitation by composers.
In an unpublished construction of a topical universe for twentieth-century
music, Danuta Mirka divides topics into three groups. The first (group A) embraces
eighteenth-century dances, the second (group B) lists musics associated with various ethnicities, and the third (group C) is a diverse collection of styles:
Group A
1. menuet
2. gavotte
3. bourrée
4. sarabande
5. gigue
6. pavane
7. passepied
8. tarantella
9. tango
10. waltz

Group B
11. Jewish music
12. Czech music
13. Polish music
14. Hungarian music
15. Gypsy music
16. Russian music
17. Spanish music
18. Latin-American music (Brazilian, Argentinean, Mexican, ...)
19. Oriental music (Chinese, Japanese, Indian)
20. North American country music

Group C
21. Gregorian chant
22. chorale
23. circus music
24. barrel organ
25. lullaby
26. children's song
27. fanfare
28. military march
29. funeral march
30. pastoral style
31. elegy
32. machine music

12. See Patrick McCreless, "Anatomy of a Gesture: From Davidovsky to Chopin and Back," in Approaches to Musical Meaning, ed. Byron Almén and Edward Pearsall (Bloomington: Indiana University Press, 2006), 11–40, for a study of this phenomenon.
The music of Bartók and Stravinsky lends itself well to topical analysis. In a study of Bartók's orchestral works, Márta Grabócz has identified 10 recurring topics:13
1. the ideal or the quest for the ideal, expressed through the learned style
2. the grotesque, signaled by a combination of rhythmic practices (mechanical, the waltz), instrumental association (clarinet), and the octatonic collection
3. the image of the hopeless and gesticulating hero, expressed in dissonant,
bitonal contexts with repeated fourths and fifths
4. nature (calm, friendly, or radiant), signaled by the acoustic scale
5. nature (hostile, menacing), conveyed by minor harmonies and the chromatic
scale
6. nocturnal nature, expressed by string timbre, march-like melody, enchanting sonorities
7. elegy, expressed in a static or passive atmosphere
8. perpetuum mobile, manifest in ostinato or motoric movement
9. popular dance, song in a peasant style
10. metamorphosis, restricted to certain moments in the form and characterized by the transcendence or transubstantiation of the last appearance of a
musical idea that has been present in varied form since the beginning
In Stravinsky's music, an essentialized parasitical tendency often originates in a play with, or appropriation of, established topics. At the root of the aesthetic lies a desire to creatively violate commonplaces or figures burdened with historical or conventional meaning. In their important study Apollonian Clockwork: On Stravinsky, Louis Andriessen and Elmer Schönberger unveil many of the composer's subtexts, thus bringing into aural view the foils and intertexts that form an essential part of the composer's work. Apollonian Clockwork is as much a study of topic as of anything else. To choose just one example that readers can readily recall: L'histoire du soldat, composed in 1918, is a veritable catalog of topical references.
13. Márta Grabócz, "Topos et dramaturgie: analyse des signifiés et de la stratégie dans deux mouvements symphoniques de B. Bartok [sic]," Degrés 109–110 (2002): j1–j18. The article includes a summary of the secondary literature on Bartók that alludes to or deals directly with topics, even where authors do not use the term.
To facilitate the narrating of the soldier's tale, Stravinsky draws on four central topical classes. The first, march, is presented in different guises or expressive registers (soldier's march, royal march, and the devil's triumphal march) without ever losing its intrinsic directionality. The second is dance, of which tango, waltz, and ragtime serve as conventional exemplars alongside a devil's dance. The third is pastorale, complete with airs performed by a stream. The fourth is chorale, manifest in two sizes, a little chorale that lasts only 8 bars and the grand chorale, which goes on for some 29 bars, interspersed with the soldier's narration. Within these bigger topical umbrellas, little topics are invoked: fanfares, drones reminiscent of musette, and the Dies Irae chant. The play element that guides the disposition of these materials is equally important, of course, especially as a window onto a discourse of repetition. But recognizing the Soldier's March as a veritable parade of historical styles is already a step in the right direction.14
To hear Romantic and post-Romantic music topically, then, is to hear it as a repository of historically situated conventional styles that make possible a number of dialogues. The examples mentioned here – by Mozart, Beethoven, Schumann, Liszt, Mahler, and Stravinsky – are only an indication of what is a vast and complex universe. Identifying topics, however, is only the first stage of analysis; interpretation must follow. Interpretation can be confined to meanings set in motion within
universe. Identifying topics, however, is only the first stage of analysis; interpretation must follow. Interpretation can be confined to meanings set in motion within
a piece or include those that are made possible in intertextual space. The analyst
might assess the work done by individual topics in a composition and, if s/he so
desires, fashion a narrative that reflects their disposition. In some contexts, a plot
will emerge for the individual composition; in others, topics will be absorbed into
a larger expressive genre15 or indeed a commanding structural trajectory; in still
others, fragments will retain their identities on the deepest levels, refusing absorption into or colonization by an archetypal, unified plan. In practice, identifying
topics can produce relatively stable results; interpreting topics, by contrast, often
turns up diverse plots. Whereas identification entails a discovery of familiar or
relatively objective configurations, interpretation is the exercise of an imaginative will – a fantasy fueled by the analyst's capacity for speculation. There are no firm
archetypes upon which to hang an interpretation of the plots arising from topical
succession. The results of identification differ from composition to composition.
A topical analysis confirms the uniqueness of a given composition while also making possible a comparison of material content that might enable an assessment of
affinity among groups of compositions.16
14. Debussy's prelude for piano "Minstrels" makes a fascinating case study in topical expression. Within its basic scherzo-like manner, it manages to incorporate diverse allusions to other musical styles. One should perhaps distinguish between topical use and the kinds of deliberate quotation or allusion studied by Christopher Reynolds in Motives for Allusion: Context and Content in Nineteenth-Century Music (Cambridge, MA: Harvard University Press, 2003); and David Metzer in Quotation and Cultural Meaning in Twentieth-Century Music (Cambridge: Cambridge University Press, 2003). Among twentieth-century composers whose music strongly invites topical treatment, Kurt Weill ranks high.
15. Hatten, Musical Meaning in Beethoven, 67–90.
16. For a recent assessment of topic theory, see Nicholas Peter McKay, "On Topics Today," Zeitschrift der Gesellschaft für Musiktheorie 4 (2007). http:/. Accessed August 12, 2008.
17. Johann Mattheson, Der vollkommene Capellmeister, trans. Ernest Harriss (Ann Arbor, MI: UMI
Research Press, 1981; orig. 1739).
18. Heinrich Christoph Koch, Versuch einer Anleitung zur Composition, vols. 2 and 3 (Leipzig: Böhme,
1787 and 1793).
19. Schenker, Free Composition, 129.
20. Carl Dahlhaus, Between Romanticism and Modernism, trans. Mary Whittall (Berkeley: University
of California Press, 1980), 64.
21. Agawu, Playing with Signs, 5179.
22. William E. Caplin, Classical Form: A Theory of Formal Functions for the Instrumental Music of
Haydn, Mozart and Beethoven (Oxford: Oxford University Press, 2000), 35 and 24.
23. Craig Ayrey, Review of Playing with Signs, Times Higher Education Supplement 3 (May 1991), 7.
fact that, as a set of qualities, beginnings, middles, and endings are not located
in a single musical dimension but cut across various dimensions. In other words,
interpreting a moment as a beginning or an ending invariably involves a reading
of a combination of rhythmic, melodic, and harmonic factors as they operate in
specific contexts. In an institutional climate in which analysts tend to work within
dimensions as specialists, theories that demand an interdimensional approach
from the beginning seem to pose special challenges. These difficulties are, however, not insurmountable, and it will be part of my purpose here to suggest ways
in which attending to beginnings, middles, and endings can enrich our perception
of Romantic music.
For many listeners, the impression of form is mediated by beginning, middle, and ending functions. Tchaikovsky's First Piano Concerto opens with a powerful beginning gesture that, according to Edward T. Cone, dwarfs the rest of what follows – a disproportionately elaborate opening gesture that sets the introduction off as an overdeveloped frame that fails to integrate itself with the rest of the movement.24 Some openings, by contrast, proceed as if they were in the middle of a process previously begun; such openings presuppose a beginning even while replacing it with a middle. Charles Rosen cites the long dominant pedal that opens Schumann's Fantasy in C Major for Piano, op. 17, as an example of a beginning in medias res.25 And an ending like that of the finale of Beethoven's Fifth, with its plentiful reiteration of the tonic chord, breeds excess; strategically, it employs a technique that might be figured as rhetorically infantile to ensure that no listener misses the fact of ending. Ending here is, however, not merely a necessary part of the structure; it becomes a subject for discussion as well – a meta-ending, if you like.26
As soon as we begin to cite individual works, many readers will, I believe, find
that they have a rich and complex set of associations with beginnings, middles,
and endings. Indeed, some of the metaphors employed by critics underscore the
importance of these functions. Lewis Rowell has surveyed a variety of beginning
strategies in music and described them in terms of birth, emergence, origins, primal cries, and growth.27 Endings, similarly, have elicited metaphors associated with
rest and finality, with loss and completion, with consummation and transfiguration, with the cessation of motion and the end of life, and ultimately with death and dying. "No more," we might say at the end of Tristan and Isolde.
How might we redefine the beginning-middle-ending model for internal
analytic purposes? How might we formulate its technical processes to enable
exploration of Romantic music? Every bound temporal process displays a beginning-middle-ending structure. The model works at two distinct levels. First is the
pure material or acoustical level. Here, beginning is understood ontologically as that
which inaugurates the set of constituent events, ending as that which demarcates
24. Cone, Musical Form and Musical Performance (New York: Norton, 1968), 22.
25. Rosen, The Classical Style, 452–453.
26. Donald Francis Tovey comments on the appropriateness of this ending in A Musician Talks, vol. 2:
Musical Textures (Oxford: Oxford University Press, 1941), 64.
27. Lewis Rowell, "The Creation of Audible Time," in The Study of Time, vol. 4, ed. J. T. Fraser, N. Lawrence, and D. Park (New York: Springer, 1981), 198–210.
the completion of the structure, and middle as the necessary link between beginning and ending. At this level, the analyst is concerned primarily with sound and
succession, with the physical location of events.
There is a second, more qualitative level at which events (no longer mere
sounds) are understood as displaying tendencies associated with beginnings, middles, and endings. These functions are based in part on convention and in part on logic.
A beginning in this understanding is an event (or set of events) that enacts the normative function of beginning. It is not necessarily what one hears at the beginning
(although it frequently is that) but what defines a structure qualitatively as a beginning. A middle is an event (or set of events) that prolongs the space between the
end of the beginning and the beginning of the ending. It refuses the constructive
profiles of initiation and peroration and embraces delay and deferral as core rhetorical strategies. Finally, an ending is an event (or set of events) that performs the
functions associated with closing off the structure. Typically, a cadence or cadential
gesture serves this purpose. An ending is not necessarily the last thing we hear in
a composition; it may occur well before the last thing we hear and be followed by
rhetorical confirmation. The task of an ending is to provide a decisive completion
of structural processes associated with the beginning and middle.
The first level of understanding, then, embodies the actual, material unfolding
of the work and interprets the beginning-middle-ending model as a set of place
marks; this is a locational or ordinal function. The second speaks to structural
function within the unfolding. Distinguishing between location and function has
important implications for analysis. In particular, it directs the listener to some
of the creative ways in which composers play upon listeners' expectations. For
example, a locational opening, although chronologically prior, may display functions associated with a middle (as in off-tonic beginnings, or works that open with
auxiliary cadences) or an ending (as in works that begin with cadences or with a 3̂–2̂–1̂ or 5̂–4̂–3̂–2̂–1̂ melodic progression). Location and function would thus be nonaligned, creating a dissonance between the dimensions. Similarly, in a locational ending, the reiterative tendencies that index stability and closure may be
replaced by an openness that refuses the drive to cadence, thus creating a sense of
middle, perhaps an equivocal ending. Creative play of this kind is known in connection with classic music, whose trim procedures and firmly etched conventions
have the great advantage of sharpening our perception of any creative departures
that a composer might introduce. It is also frequently enacted by Romantic composers within their individual and peculiar idiolects.
Although all three locations are necessary in defining a structure, associated
functions may or may not align with the locations. It is also possiblefunctionally
speakingto lose one element of the model by, for example, deploying a locational ending without a sense of ending. It would seem, in fact, that beginnings and
endings, because they in principle extend in time and thus function as potential
colonizers of the space we call middle, are the more critical rhetorical elements of
the model. In certain contexts, it is possible to redefine Aristotle's model with no
reference to middles: a beginning ends where the ending begins. It is possible also
to show that, in their material expression, beginnings and endings frequently draw
on similar strategies. The stability or well-formedness needed to create a point of
reference at the beginning of a musical journey shares the material forms – but not necessarily the rhetorical presentation – of a comparable stability that is needed to
ground a dynamic and evolving structure at its end. It is also possible that endings,
because they close off the structure, subtend an indispensable function. From this
point of view, if we had to choose only one of the three functions, it would be ending. In any case, several of these functional permutations will have to be worked
out in individual analyses.
It is not hard to imagine the kinds of technical processes that might be associated with beginnings, middles, and endings. Techniques associated with each of a work's dimensions (harmony, melody, rhythm, texture) could be defined normatively and then adapted to individual contexts. With regard to harmony, for example, we might say that a beginning expresses a prolonged I–V–(I) motion. (I have placed the closing I in parenthesis to suggest that it may or may not occur, or that, when it does, its hierarchic weight may be significantly less than that of the initiating I.) But since the beginning is a component within a larger, continuous structure, the I–V–(I) progression is often nested in a larger I–V progression to confer
prospect and potential, to ensure its ongoing quality. A middle in harmonic terms
is the literal absence of the tonic. This often entails a prolongation of V. Since such
prolonged dominants often point forward to a moment of resolution, the middle is
better understood in terms of absence and promise: absence of the stable tonic and
presence of a dependent dominant that indexes a subsequent tonic. An ending in
harmonic terms is an expanded cadence, the complement of the beginning. If the
larger gesture of beginning is represented as I–V, then the reciprocal ending gesture is V–I. The ending fulfills the harmonic obligation exposed in the beginning, but
not under deterministic pressure. As with the beginning and ending of the beginning, or of the middle, the location of the beginning and ending functions of the
ending may or may not be straightforward. In some genres, endings are signaled by
a clearly marked thematic or tonal return or by a great deal of fanfare. In others, we
sense the ending only in retrospect; no grand activity marks the moment of death.
Similar attributions can be given for other musical dimensions. In doing so, we
should remember that, if composition is figured essentially as a mode of play, what
we call norms and conventions are functional both in enactment and in violation.
On the thematic front, for example, we might postulate the imperatives of clear statement or definition at the beginning, fragmentation in the middle, and a restoration
of statement at the ending, together with epigonic gestures or effects of reminiscence.
In terms of phrase, we might postulate a similar plot: clarity (in the establishment of
premises) followed by less clarity (in the creative manipulation of those premises)
yields, finally, to a simulated clarity at the end. In addition to such structural procedures, we will need to take into account individual composerly routines in the choreographing of beginnings and endings. Beethoven's marked trajectories, Schubert's way with extensive parentheses and deferred closure, Mendelssohn's delicately balanced proportions, and the lyrical inflection of moments announcing home-going in Brahms: these are attitudes that might be fruitfully explored under the aegis of a
beginning-middle-ending scheme. We have space here for only one composer.
As an example of the kinds of insights that might emerge from regarding a
Romantic work as a succession of beginnings, middles, and endings on different
CHAPTER 2
55
levels, I turn to Mendelssohn's Song without Words in D major, op. 85, no. 4 (reproduced in its entirety as example 2.1). The choice of Mendelssohn is not accidental, for one of the widely admired features of his music is its lucidity. In the collection
Example 2.1. Mendelssohn, Song without Words in D major, op. 85, no. 4.
of songs without words, each individual song typically has one central idea that
is delivered with a precise, superbly modulated, and well-etched profile. The compositional idea is often affectingly delivered. And one reason for the composer's
uncanny success in this area is an unparalleled understanding of the potentials of
beginning, middle, and ending in miniatures. I suggest that the reader play through
this song at the piano before reading the following analytical comments.
We might as well begin with the ending. Suppose we locate a sense of home-going beginning in the second half of bar 26. Why there? Because the rising minor seventh in the melody is the first intervallic event of such magnitude in the composition; it represents a marked, superlative moment. If we follow the course of the melody leading up to that moment, we hear a physical rise in contour (starting on F-sharp in 24) combined with an expansion of intervals as we approach the high G in bar 26. Specifically, starting from the last three eighth-notes in bar 25, we hear, in succession, a rising fourth (A–D), a rising sixth (G–E), and finally a rising seventh (A–G). Then, too, this moment is roughly two-thirds of the way through the song,
is underlined by an implicative 6/5 harmony that seeks resolution, and represents
the culmination of a crescendo that has been building in the preceding 2 bars. The
moment may be figured by analogy to an exclamation, an expected exclamation
perhaps. It also marks a turning point, the most decisive turning point in the form.
Its superlative quality is not known only in retrospect. From the beginning, Mendelssohn, here as in other songs without words, crafts a listener-friendly message
in the form of a series of complementary gestures. Melody leads (that is, functions
as a Hauptstimme); harmony supports, underlines, and enhances the progress of
the melody; and the phrase structure regulates the temporal process while remaining faithful in alignment. The accumulation of these dimensional behaviors prepares bar 26. Although full confirmation of the significance of this moment will
come only in retrospect, the balance between the prospective and retrospective,
here as elsewhere in Mendelssohn, is striking. Luminous, direct, natural, and perhaps unproblematic (as we might say today), op. 85, no. 4 exemplifies carefully
controlled temporal profiling.
Ultimately, the sense of ending that we are constructing cannot be understood
with respect to a single moment, for that moment is itself a product of a number
of preparatory processes. Consider bar 20 as the beginning of the ending. Why bar
20? Because the beautiful opening melody from bar 2 returns at this point after some extraneous, intervening material (bars 12–19). For a work of these modest
dimensions, such a large-scale return readily suggests a reciprocal sense of closure
within a tripartite formal gesture.
If we continue to move back in the piece, we can interpret the passage beginning in bar 12 as contrast to, as well as intensification of, the preceding 11 bars. Note
the quasi-sequential process that begins with the upbeat to bar 12. Phrase-wise, the
music proceeds at first in 2-bar units (11⁴–13³, 13⁴–15³; these and subsequent designations of phrase boundaries in this paragraph all include an eighth-note prefix), then continues in 1-bar units in the manner of a stretto (15⁴–16³ and 16⁴–17³), and finally concludes with 2 relatively neutral bars (neutral in the sense of declining a clear and repeated phrase articulation) of transition back to the opening theme (17⁴–19³).28 The moment of thematic return on the downbeat of bar 20 is supported
not by tonic harmony as in bar 2 but by the previously tonicized mediant, thus
conferring a more fluid quality on the moment and slightly disguising the sense of
return. The entire passage of bars 12–19 features rhetorically heightened activity that ceases with the thematic return in bar 20. If, in contrast to the earlier hearing, the passage from bar 20 to the end is heard as initiating a closing section at the largest level of the form, then bars 12–19 may be heard as a functional middle.
Finally, we can interpret the opening 11 bars as establishing the song's premises, including its material and procedures. A 1-bar introduction is followed by a 4-bar phrase (bars 2–5). Then, as if repeating (bar 6), the phrase is modified (bar 7) and led through B minor to a new tonal destination, F-sharp minor (bars 8–9³).
28. Bars 17⁴–18² begin in the manner of the previous 1-bar units but modify their end in order to lead
elsewhere.
but 3–2–5 (F-sharp–E–A, not F-sharp–E–D), the 1 sounding in an inner voice so that the less conclusive melodic 5 can initiate a second attempt at closure. The local harmony at 28³–29¹ is not V6/4–5/3–I (with the second and third chords in root position) but the more mobile V6/4–V4/2–I6.29 Part of Mendelssohn's strategy here is to embed
the more obvious gestures of closure within a larger descending-bass pattern that
will lend a sense of continuity to the closing moment. This line starts with bass A
on the third beat of 28, passes through G (also in 28), then falls through F-sharp, F-natural, and E before reaching a mobile D on the downbeat of 30, making room for an intervening A at 29⁴. A similarly directed bass line preceded this one and
served to prepare the high point of bar 26. We can trace it from the third beat of
bar 23: D–C–B (bar 23), A–A–G–F (bar 24), then, transferred up the octave, E–D–C–B (bar 25), and finally A (downbeat of 26), the whole spanning an octave
and a half.
Unlike the attempt at closure in bars 28–29, the one in bars 31–32 reaches its destination. A conventional 3–2–1 over a V–I offers what was previously denied.
Many listeners will hear the downbeat of bar 32 as a defining moment, a longed-for
moment, perhaps, and, in this context, the place where various narrative strands
meet. Schenker would call this the definitive close of the composition;30 it marks
the completion of the work's subsurface structural activity. Syntactic closure is
achieved. We might as well go home at this point.
But syntactic closure is only one aspect, albeit an important one, of the full closing act. There is also a complementary dimension that would secure the rhetorical sense of the close, for although we have attained 1 over I, we need to savor
D for a while, to repose in it, to dissolve the many tensions accumulated in the
course of the song. This other dimension of closure can be described in different
ways: as rhetorical, as gestural, or even as phenomenal. In this song without words,
Mendelssohn writes a codetta-like segment (bars 32–end) to meet this need. These last 6 bars are a tonic prolongation. We sense dying embers, a sense of tranquility,
the serenity of homecoming, even an afterglow. We may also hear in them a sense
of reminiscence, for the sense that death is approaching can be an invitation to
29. Here and elsewhere, I follow Schenkerian practice in understanding cadential 6/4s as dominant-functioning chords featuring a double suspension to the adjacent root-position dominant chord. Hence the symbol V6/4–5/3.
30. Schenker, Free Composition, 129.
relive the past in compressed form. It is as if key moments in the form are made
to flash before our very eyes, not markedly as quotations, but gently and subtly, as
if in a mist, as if from a distance. One of the prominent elements in this ending is a simple neighbor-note motive, A–B–A or 5–6–5, which was adumbrated in the very first bar of the song, where B served as the only non-chord tone within the tonic expression. Subsequently, the notes B and A were associated in various contexts.
Then, in bars 32–33, the A–B–A figure, now sounding almost like a wail, presses the melodic tone A into our memories. The V6/5 harmony in the second half of bars
32 and 33 may also remind us of the high point in bar 26. Then, too, we experience a touchingly direct 5–4–3–2–1 descent across bars 33–35. This collection of scale degrees was introverted in bars 2¹–3³, sung in V but without 4 in bars 10–11, introverted again in bars 20¹–21³, embedded in bars 28–29, heard with 5 playing only an ornamental role in bars 31–32, before appearing in its most direct and pristine form in bars 32⁴–35³. Even the dotted-note anacrusis at bar 32⁴ has some precedent in bars 11–12, where it energized the first major contrasting section in
the song. And the extension of the right hand into the highest register of the piece
in the penultimate bar recalls salient moments of intensification around bars 16
and 17 and of the high point in bar 26 and its echo in 29. These registral extensions
afford us a view of another world. Overall, then, the last 6 bars of Mendelssohns
song make possible a series of narratives about the compositional dynamic, among
which narratives of closure are perhaps most significant.
We began this analysis of Mendelssohn's op. 85, no. 4, by locating the beginning
of the ending in bar 26; we then worked our way backward from it. But what if we
begin at the beginning and follow the course of events to the end? Obviously, the
two accounts will not be wholly different, but the accumulation of expectations
will receive greater emphasis. As an indication of these revised priorities and so
as to fill in some of the detail excluded from the discussion so far, let us comment
(again) on the first half of the song (bars 1–19). Bar 1 functions as a gestural prelude to the beginning proper; it familiarizes us with the sound and figuration of
the tonic, while also coming to melodic rest on the pitch A as potential head tone.
The narrative proper begins in bar 2 with a 4-bar melody. We are led eventually to
the end of the beginning in bar 11, where the dominant is tonicized. Mendelssohn's procedure here (as also frequently happens in Brahms, for example, in the song "Wie Melodien zieht es mir," op. 105) is to begin with a head theme or motif and lead it to different tonal destinations. In the first 4-bar segment (bars 2–5), the harmonic outline is a straightforward I–V. A second 4-bar segment begins in bar 6, passes through the submediant in 7–8, and closes in the mediant in bar 9. But, as mentioned before, the emphatic upbeat to bar 10, complete with a vii6/5 of V (thinking in terms of A major), has the effect of correcting this wrong destination. If one is looking to locate the end of the beginning, one might assign it to the
emphatic cadence on the dominant in bar 11. Yet, the end of the beginning and the
beginning of the middle are often indistinguishable. The exploratory potential signaled by A-sharp in bar 7, the first nondiatonic pitch in the song, confers a gradual
sense of middle on bars 7–11. This sense is intensified in a more conventional way
beginning with the upbeat to bar 12. From here until bar 20, the music moves
in five waves of increasing intensity that confirm the instability associated with a
middle. Example 2.2 summarizes the five waves. As can be seen, the melodic profile
is a gradual ascent to A, reached in wave 4. Wave 3 is interrupted in almost stretto
fashion by wave 4. Wave 5 begins as a further intensification of waves 3 and 4 but
declines the invitation to exceed the high point on A reached in wave 4, preferring
G-sharp (a half step lower than the previous A) as it effects a return from what, in
retrospect, we understand as the point of greatest intensity. Wave 5 also adopts the
contour of waves 1 and 2, thus gaining a local reprise or symmetrical function. It
emerges that the tonicized mediant in bar 9 was premature; the mature mediant occurs in bars 19–20.
Example 2.2. Five waves of action across bars 12–20 in Mendelssohn, Song without Words in D major, op. 85, no. 4.
(wave 1: bar 12; wave 2: bar 14; wave 3: bar 16; wave 4: bar 17; wave 5: bar 18)
Stepping back from the detail of Mendelssohn's op. 85, no. 4, we see that the beginning-middle-ending model allows us to pass through a Romantic composition by weighing its events relationally and thus apprehending its discourse. The
model recognizes event sequences and tracks the tendency of the musical material.
In this sense, it has the potential to enrich our understanding of what musicians
normally refer to as form, a complex, summary quality that reflects the particular constellation of elements within a composition. There is no mechanical way to
apply a beginning-middle-ending model; every interpretation is based on a reading of musical detail. Interpretations may shift depending on where a beginning
is located, what one takes to be a sign of ending, and so on. And while the general
features of these functions have been summarized and in part exemplified in the
Mendelssohn analysis, the fact that they are born of convention means that some
aspects of the functions may have escaped our notice. Still, attention to musical
rhetoric as conveyed in harmony, melody, phrase structure, and rhythm can prove
enlightening.
The beginning-middle-ending model may seem banal, theoretically coarse,
or simply unsophisticated; it may lack the predictive power of analytical theories
that are more methodologically explicit. Yet, there is, it seems to me, some wisdom in resisting the overdetermined prescriptions of standard forms. This model
substitutes a set of direct functions that can enable an individual analyst to get
inside a composition and listen for closing tendencies. Musicology has for a long
time propagated standard forms (sonata, rondo, ternary, and a host of others) not
because they have been shown to mediate our listening in any fundamental way,
but because they can be diagrammed, given a two-dimensional visual appearance,
and thus easily be represented on screens and blackboards and in books, articles,
and term papers. A user of the beginning-middle-ending model, by contrast,
understands the a priori functions of a sonata exposition as mere designation;
a proper analysis would inspect the work afresh for the complex of functions, many of them of contradictory tendency, that define the activity within, say, the
exposition space. To say that a dialogue is invariably set up between the normative
functions in a sonata form and the procedures on the ground, so to speak, is an
improvement, but even this formulation may overvalue the conventional sense of
normative functions. Analysis must deal with the true nature of the material and
recognize the signifying potential of a work's building blocks; in short, respond
to the internal logic of the work, not the designated logic associated with external
convention. Reorienting thinking and hearing in this way may make us freshly
aware of the complex dynamism of musical material and enhance our appreciation of music as discourse.
High Points
A special place should be reserved for high points or climaxes as embodiments of
an aspect of syntax and rhetoric in Romantic musical discourse. A high point is a
superlative moment. It may be a moment of greatest intensity, a point of extreme
tension, or the site of a decisive release of tension. It usually marks a turning point
in the form (as we saw in bar 26 of example 2.1). Psychologically, a single high point
typically dominates a single composition, but given the fact that a larger whole
is often constituted by smaller parts, each of which might have its own intensity
curve, the global high point may be understood as a product of successive local
high points. Because of its marked character, the high point may last a moment,
but it may also be represented as an extended moment, a plateau or region.
No one performing any of the diverse Romantic repertoires can claim innocence of high points. They abound in opera arias; as high notes, they are sites of
display, channels for the foregrounding of the very act of performing. As such,
they are thrilling to audiences, whose consumption of these arias may owe not a
little to the anticipated pleasure of experiencing these moments in different voices,
so to speak. The lied singer encounters them frequently, too, often in a more intimate setting in which they are negotiated with nuance. In orchestral music, high
points often provide some of the most memorable experiences for listeners, serving as points of focus or demarcation, places to indulge sheer visceral pleasure.
Indeed, the phenomenon is so basic, and yet so little studied by music theorists,
that one is inclined to think either that it resists explanation or that it raises no
31. The most comprehensive early study of high points is George Muns, "Climax in Music" (Ph.D. diss., University of North Carolina, 1955). Leonard Meyer's "Exploiting Limits" introduces an important distinction between statistical climaxes and syntactical ones. See also Agawu, "Structural Highpoints in Schumann's Dichterliebe," Music Analysis 3(2) (1984): 159–180. Most important among more recent studies is Zohar Eitan's Highpoints: A Study of Melodic Peaks (Philadelphia: University of Pennsylvania Press, 1997), which may be read in conjunction with David Huron's review in Music Perception 16(2) (1999): 257–264.
are many examples to support this theory, but there are counterexamples as well.
It would seem that the nineteenth century evinces a plural set of practices. Some
high points are syntactical while others are statistical.32
The basic model of the dynamic curve may, of course, be subject to variation.
A high point may occur earlier rather than later in the form. It may appear with
relatively little preparation and perhaps be followed by prolonged decline. It may
be known more in retrospect than in prospect; that is, while some high points
are clearly the culmination of explicit preparatory processes, others pass into consciousness only after the fact. These creative manipulations bespeak a simultaneous
interest in natural shapes and the artifice of artistic transformation.
Let us follow the achievement of high points in some well-known moments.
In Schubert's glorious "An die Musik," the high point occurs toward the end of the first strophe on a high F-sharp ("Welt") in bar 16. The strophe itself is nearly
20 bars long, so the high point occurs closer to its end, not in the middle or at
the beginning. How does Schubert construct this moment as a turning point?
The structural means are simple, and the timing of their disposition is impeccable. From the beginning, Schubert maintains a relatively consistent distinction among three types of pitch configuration: arpeggio, stepwise diatonic, and
chromatic. If we think of these as modes of utterance, we see that they work in
tandem to create the high point in bar 16. First, the pianist offers the singer an
arpeggiated figuration (lh, bars 1–2). She accepts (bars 3–4) but only for a limited period; the imperatives of musical closure favor stepwise melodic motion (bars 5–6). The pianist repeats his triadic offer (lh, bars 6–7) and, while the singer responds, continues in stepwise mode (lh, bars 8–9). Meanwhile, the singer's response incorporates the single largest melodic leap in the entire song (descending minor-seventh in bars 7–8). But this change of direction is merely the product of an octave transference; if we rewrite Schubert's melody in these bars (7–9) within a single octave, we see that the diatonic stepwise mode is what regulates this second utterance. Now, the pianist presses forward in chromatic mode (lh, bars 10–11). This is not the first time that chromatic elements have been used by the pianist (G–A in lh bars 4–5 and A–A–B in lh bar 8), but
the utterance in bar 10 is more decisive and carries an aura of intensification.
Here, at the start of the singers third vocal phrase (bar 11), she does not respond
directly to what is offered by the pianist but is led by the momentum of her own
previous utterances. The mixture of arpeggiated and stepwise motion regulates
this phrase. Then, in the final sweep of the phrase (bar 14), the pianist gives full
rein to the chromatic mode, yielding only to an implicit triad at the conclusion
of the phrase (A–D in lh bars 18–19). The singer's articulation of the high point takes the form of an extended stepwise diatonic rise from A to F-sharp (bars 14⁴–16³). Observe, however, the gap that Schubert introduces in the approach to the climactic pitch: D–F-sharp, not E–F-sharp. The rhetorical effect of this gap of a third is
underlined by the local harmonic situation: a secondary dominant in 6/5 position
32. Meyer, "Exploiting Limits." See also his later volume Style and Music: Theory, History, and Ideology
(Philadelphia: University of Pennsylvania Press, 1989).
supports the high F-sharp in bar 16. Then comes release in a configuration that
mixes stepwise with triadic motion (bars 17–19). Note that the high point in bar 16 is not the only occurrence of that particular F-sharp in the song. We heard it three bars earlier (bar 13), but without accentual or harmonic markedness.
The rhetorical shape of the first of Chopin's preludes, op. 28, is perfection itself (example 2.4). An 8-bar antecedent closes in a half-cadence. (The fact that the harmony in bar 8 is a dominant-seventh rather than a dominant inflects but does not erase the sense of a half-cadence.) Then, Chopin begins a repetition of those 8 bars. After the fourth, he intensifies the procedure. Melody now incorporates chromaticism, enlists the cooperation of the bass (parallel movement between bass and treble), adopts a stretto mode so as to intensify the sense of urgency in the moment, and eventually culminates in a high point on D–C (bar 21), the highest melodic pitch in the prelude. From there, things are gradually brought to a close. The melody returns from on high and approaches 1, teasingly at first, eventually attaining rest in bar 29. The entire passage after the high point features a diminuendo, and
the last 10 bars sit on a tonic pedal, C. The expression is archetypically Romantic:
the means are clear but subtle, the rhetoric self-evident but never banal, the effect
touching, and there is no unmotivated lingering. To finish, Chopin reminds us that
there is more to come. The attainment of 1 in bar 29 did not do it; a terminal 3 (bar
34) leaves things suspended, adding a touch of poetry to the ending.
It seems likely that the high point in bar 21 shapes the experience of many players and listeners. Bar 21 is a turning point. The intensifying stretto effect is abandoned here; the consistently rising chromatic melody, doubled an octave below (bars 16–20), is overcome by a diatonic moment (bar 21); and the complementary melodic descent from the high point is entirely diatonic. The only vestige of chromaticism is in the region of the high point (bar 22). The rest is white notes.
Chopin takes time to close. This is no routine deployment of conventional syntax to close off a structure. The motifs 5–6 and 3–2, which provided the essential melodic opposition in bars 1–3 and 5–7, respectively, are briefly restored (bars 25–26 and 27–28) in a gesture laden with reminiscence. We reminisce as the end nears. When the longed-for 1 finally appears in bar 29, it is cuddled by a fourfold 6/4–5/3 double suspension (bars 29–32).
Chopin's op. 28 collection as a whole is a rich site for the study of high points. In no. 3 in G major, for example, the beginning of the end is marked not by a tensional high point but by a turn to the subdominant (bars 16–19), a reorientation of the harmonic trajectory. The deeply expressive, minor-mode no. 4 is made up of two large phrases, an antecedent (bars 1–12) and its consequent (bars 13–25). The high point occurs in the middle of the consequent (bars 16–17), complete with Chopin's stretto marking, forte dynamic, and momentary contact with a low-lying B, the lowest note in the piece so far, to be superseded only in the final bar by an E. In no. 6 in B minor, the point of furthest remove is the Neapolitan region in bars 12–14, positioned about halfway through the work. This relatively early turning point is followed by an especially prolonged period of closure (bars 15–26). The little A Major Prelude, no. 7, marks its high point by a secondary dominant to the supertonic (bar 12). Delivered in two symmetrical phrases (bars 1–8 and 9–16), the high point forms part of the precadential material leading to the final cadence. In no. 9
in E major, an enharmonic reinterpretation of the mediant harmony (A-flat in bar
8) conveys the sense of a high point. In no. 13 in F-sharp major, an E-natural functioning as a flattened-seventh of the tonic chord (bar 29) signifies home-going and
serves as a critical turning point in the form. And in the dramatic Prelude no. 22
in G minor, a modified repeat of the opening, bass-led period (bars 1–8, 9–16) is followed by a still more intense phrase promising closure (bars 17–24) and its immediate repetition (bars 25–32). Finally, the cadential gesture of bars 31–32 is repeated as 33–34 and overlapped with what appears to be another statement of the opening theme. The bass gets stuck in bars 36–38, however, and it needs the (divine) intervention of an inverted augmented sixth chord in bar 40 to usher in the final
cadence. Chopin's trajectory here is one of increasing intensity until a colossal or even catastrophic event (bar 39) arrests the motion and closes the structure.33
In the Prelude to Tristan and Isolde, successive waves of motion culminate in
bar 83 with a chord containing the notes A-flat, E-flat, C-flat, and F. This moment
marks the decisive turning point in the prelude. What follows is a return to the
opening, a recapitulation of sorts that completes the larger tripartite shape. Volumes of commentary attest to the fact that this famous work can support a variety of analytical agendas. Our concern here is with the simplest and most direct
apprehension of the overall shape of the prelude. With bar 83 as anchor, we can
understand the preparatory tendencies manifest in the preceding 82 bars as well as
the complementary function of the succeeding 28 bars. Just as Schubert's "An die Musik" and Chopin's C Major Prelude, op. 28, no. 1, rose to a melodic high point and resolved from there, so, on a grander scale, Wagner's prelude rises in waves to a high point and resolves from there.
The means are direct and ancient. Bar 83 is the loudest moment in the prelude.
The progress of the dynamics conspires to convey that fact. This bar is also one of
the densest. Earlier points, like bars 55 and 74, prepared this one, but the superlative effect here derives from its terminal position. After the explosion in bar 83,
nothing comparable happens in the prelude, whereas with the previous moments
of intensification, there was always the promise of more. Psychologically, bar 83
denies the possibility of a greater moment of intensity.
These features are on the surface of the surface and are immediately noticeable. But there are others. The chord in bar 83 is known to us from the very first
chord in the prelude, the Tristan chord itself (example 2.5). However one interprets it, its function as a (relative) dissonance within the opening 3-bar phrase is
uncontested. Of course, from a certain point of view, every one of the resulting
sonorities in bars 2–3 is a dissonance, but there is also a sense that the closing dominant-seventh (bar 3) provides a measure of resolution, albeit a local and
provisional one. In other words, the Tristan chord marks a point of high tension
which is resolved, at least in part, by the dominant-seventh chord. It is true, as Boretz and others have reminded us, that the Tristan chord and the dominant-seventh are equivalent within the systems of relation that assert inversional and transpositional equivalence.34 But even if we devised a narrative that has the
33. For more on closure in the Chopin preludes, see Agawu, "Concepts of Closure and Chopin's op. 28," Music Theory Spectrum 9 (1987): 1–17.
34. "It now emerges," writes Benjamin Boretz, "that the notoriously ambiguous Tristan chord, so elusive or anomalous in most tonal explications of the piece, and the familiar dominant seventh, so crucial to these same tonal explications, are here just exact, balanced, simple inverses of one another, with very little local evidence to support their consideration as anything but equivalents in this sense." See Boretz, "Metavariations, Part 4: Analytic Fallout," Perspectives of New Music 11 (1973): 162.
68
PART I
Theory
Tristan chord progressing to another version of itself in bars 2–3, the actual path of the progression would be understood in terms of an expressive trajectory that confers a sense of lesser tension on an element by virtue of its terminal position.
Example 2.5. Wagner, Prelude to Tristan and Isolde, bars 1–3 (Langsam und schmachtend, pp).
The high point in bar 83 therefore reproduces a sound that has been part
of the vocabulary of the work from the beginning. In its local context, however, the chord has a precise harmonic function: it is a half-diminished-seventh
chord on the supertonic in the local key of E-flat minor. The main key of the
prelude is A minor (with an intervening major inflection and excursions to
other keys). E-flat minor is at a significant distance from A. Heard in terms of
the prescribed distances that regulate a construct such as the circle of fifths,
E-flat, prepared mainly by its dominant, B-flat, is the point of greatest harmonic
remove from A. The prelude's high point is thus, among other processes noted
earlier, a product of subsurface activity that exploits the conventional property
of harmonic distance.35
Such coincidence between expressive and structural domains is the exception rather than the rule when it comes to the articulation of high points. Subsurface activity, confined by an explicit system of theoretical relations, has to
be domesticated in particular ways in order to do expressive work. Often the
rhythm of the system of relations has little or nothing to do with the work's
unfolding rhythm, even at a macro level. Systems are based on atemporal logical relations, while works unfold temporally in simulation of organic life. And
this circumstance may encourage some listeners to doubt the pertinence of the
coincidence we have just identified in the Tristan Prelude, whereby the Tristan
chord and the dominant-seventh chord are held to be equivalent. Another way
of putting this is to say that structural procedures are up for expressive grabs.
One who wishes to argue a difference between bar 2 and bar 83 will point
to differences of notation and destination; one who wishes to argue a sameness will invoke enharmonic synonymity. To admit this openness in interpretation is not to suggest any kind of hopelessness in the analytical endeavor.
35. For a complementary view of the Tristan Prelude, see Robert P. Morgan's demonstration that the formal process consists of repeating units and processes of variation (a semiological reading, in effect): "Circular Form in the Tristan Prelude," Journal of the American Musicological Society 53 (2000): 69–103.
Example 2.6. Ten-note chord in Mahler, Tenth Symphony, first movement, bar 204.
36. David Lewin broaches this topic in the course of a discussion of two competing Schenkerian readings of the familiar Christmas hymn "Joy to the World," set to a tune by Handel, in "Music Theory, Phenomenology, and Modes of Perception," in Lewin, Studies in Music with Text (Oxford: Oxford University Press, 2006), 85–88. One reading renders the tune as an 8̂-line ("Joy to the world"), the other as a 5̂-line ("Joy to the world"). Although Lewin recognizes that the Schenkerian reading does not claim that "the world" is more important than "joy," he nevertheless proceeds to explore the metaphorical prospects for either reading. But since the Kopfton as imagined and postulated by Schenker belongs to a sequence of idealized voices, not necessarily a flesh-and-blood occurrence, its salience at the foreground (by means of accentual or durational prominence, for example) is not a defining feature. Thus, to seek to interpret idealized voices hermeneutically is to seek to transfer values across a systemic border. In our terms, it is to confuse the rhythm of the system with the actual rhythm of the work.
hear first an A-flat minor chord (bar 194), then the 10-note chord on C-sharp (bar 204), whose resolution is delayed until the second half of bar 220. If we read the A-flat minor chord enharmonically as G-sharp minor, we might hear the entire passage as a ii–V–I cadence writ large. This conventional underpinning occurs elsewhere in Mahler (see, for example, the excerpt from Das Lied von der Erde analyzed later in this book in example 4.34) and reinforces the grounding of his musical language in the harmonic norms of the eighteenth century. But the composing out of this progression incorporates numerous modifications that ultimately take the sound out of an eighteenth-century environment and place it squarely within a late nineteenth- or early twentieth-century material realm.
From the point of view of harmonic syntax, the 10-note chord functions as a
dominant on account of its C-sharp grounding. Above it are two dominant-ninth
chords, one a minor ninth, the other a major ninth, on C-sharp and F-natural,
respectively. In other words, the 10-note chord combines the dominant-ninths of
the keys of F-sharp and B-flat. Since these are the two principal keys of the movement, the combination of their expanded dominants at this moment would be a
logical compositional move. Perceiving the separate dominants presents its own
challenges, of course, but the conceptual explanation probably presents no comparable difficulties. As in the Tristan Prelude, a combination of structural and expressive features marks this high point for consciousness.
The aftermath of the high point in the Mahler movement is worth noting because of the way closure, which typically follows the high point, is executed (bars 213–end). The charged dominant-functioning chord on C-sharp has set up an expectation for resolution, which could have come as early as bar 214, allowing for the 4-bar lead-in (209–212) to the thematic return in 213. Mahler maintains the C-sharp pedal for the first phase of this return (213–216). With the tempo and thematic change at 217, the pitch C-sharp persists, but is now incorporated in very local V–I progressions. It is not until the end of bar 220 that the proper resolution occurs in the form of an authentic cadence featuring 3̂ in the top voice. This understated close cannot, it seems, provide the rhetorical weight needed to counter the effect of the gigantic 10-note dominant, so a network of closing gestures, some of them harmonic (bars 229–230), others more melodic (bars 240–243), is dispersed throughout the closing bars of the movement. Cadential gestures, reminiscences in the form of fragments, and strategically incomplete melodic utterances transfigure Mahler's ending. (This movement, incidentally, was the most complete in draft form of all of the movements of the unfinished Tenth.)37
Talk of high points, then, dovetails into talk of closure, for it would seem
that the logical thing after the attainment of a high point is to engineer a close.
And this reinforces the point made earlier that, although the six criteria being
developed in this chapter and in chapter 3 have been chosen in order to focus
attention on specific features and mechanisms, they are not wholly autonomous.
37. For further discussion, see Agawu, "Tonal Strategy in the First Movement of Mahler's Tenth Symphony," 19th-Century Music 9(2) (1986): 222–233.
Example 2.7. [Caption missing in source; the musical example's dynamics range from pp through mp, mf, and f to ff and fff.]
Example 2.8. Closing bars of Bartók, Music for Strings, Percussion and Celesta, first movement.
In any case, we must not draw too categorical a distinction between hearing Romantic music as language and hearing post-Romantic music as system or even antisystem. Tempting as it is to interpret one as natural and the other as artificial, or one as intuitionist, the other as constructivist, we might consider the more likely reality to involve a blurring of boundaries, an interpenetration of the two modes. Any such comparisons have to be validated analytically. In the brief examples that we have seen, the hierarchic subsumption of scale steps in Wagner and Mahler contrasts with the contextual centricity of Bartók. But Bartók, too, employs some of the same secondary parameters used by earlier composers.
The high point, then, is a central feature of Romantic foregrounds and belongs
in any taxonomy of criteria for analysis. Context is everything in analysis, so one
should always look to it for clarification of ambiguous situations. As a quality, the
high point embodies a sense of the supreme, extreme, exaggerated, and superlative,
and these qualities are often distributed across several of a work's dimensions. My comments on passages from Schubert, Chopin, Mahler, Wagner, and Bartók will,
I hope, have confirmed the view that some attention to these ostensibly surface
features might enhance our appreciation of the particularities of Romantic music.
CHAPTER THREE
Criteria for Analysis II
Periodicity
A period is a regulating framework for organizing musical content. Every large-scale musical utterance needs to be broken down into smaller chunks in order to assure communication and comprehensibility. Like sentences, phrases, or paragraphs in verbal composition, periods serve as midlevel building blocks, markers of a composition's sense units. Does the subject speak in short or long sentences? How does the succession of periods define an overall form or structural rhythm for the work? Are periodic rhythms interchangeable or are they fixed?
The enduring tradition of Formenlehre has devised elaborate sets of terms, concepts, and modes of symbolic representation for the description of this vital aspect of music. Some offer general theories of formal organization, some illuminate specific historical styles, some prescribe actions for composition, while others offer models for analytic description. Taxonomies abound in theoretical and analytical writings by Burmeister, Koch, A. B. Marx, Riemann, Schoenberg, Tovey, Ratner, Rosen, Rothstein, Caplin, and Hepokoski and Darcy, to mention only a dozen names. The very large number of writings on this topic suggests that, for many scholars, form as a specifically temporal or periodic experience lies at the core of musical understanding and enjoyment.
A rapid overview of some of the terms and concepts employed by a few leading theorists to describe the temporal aspects of musical structure will provide an indication of the range of techniques and effects that originate in notions of periodicity. According to Ratner, periodicity represents "the tendency . . . to move toward goals, toward points of punctuation. . . . [A] passage is not sensed as being a period until some sort of conclusive cadence is reached. . . . The length of a period cannot be prescribed." Among the terms he employs are "motion," "points of arrival," "symmetry" (including "disturbances of symmetry"), "sentence structure," "period extensions," and "internal digressions."1 Like Ratner, William Rothstein draws on
1. Ratner, Classic Music, 33.
contemporaneous and more recent theories in his study of eighteenth- and nineteenth-century phrase rhythm. His terms include phrase (including fore-phrase
and after-phrase), phrase rhythm, phrase linkage, phrase expansion, prefix, suffix,
parenthetical insertion, hypermeasure, lead-in, elongated upbeat, and successive
downbeats.2 Caplin's theory of form draws on various kinds of cadence (abandoned, authentic, elided, evaded); cadential progression; concluding, initiating, and medial functions; period; interpolation; and sentence.3 And Christopher Hasty's vocabulary is chosen to capture the temporal aspects of musical experience and to distinguish diverse temporalities: motion, projective process, deferral, now, instant, timelessness, and denial.4
Every listener to Romantic music possesses an intuitive understanding of periodicity. When we hear a Chopin prelude, a Liszt song, a Brahms intermezzo, a Bruckner motet, or a Verdi aria, we are aware of its ebbs and flows, its high and low points, its moments of repose and dynamism. We routinely sense that a thought has been concluded here, that a process begun earlier has been abandoned, or that an event of a certain gravity is about to take place. Listeners who also move (silently) to music and dancers who respond physically are alert to regularities and irregularities in phrasing and groupings beyond the beat level. Schubert proceeds in measured groupings throughout the C Major Quintet, but disrupts the periodicity from time to time, deploying unison passages to move the discourse self-consciously from one state of lyric poetry to another. Schumann's Piano Quintet, op. 44, does not disappoint when it comes to 4-bar phrases, but we are aware, too, of the speech mode that intrudes here and there and projects an alternative periodicity; sometimes, we sense a cranking of gears, as if to break an ongoing periodicity in order to introduce a new one. And in Mendelssohn's Violin Concerto, the manner of delivery is controlled by 4-bar units. This partly facilitates exchange between (the more restricted) orchestral discourse, on the one hand, and (the freer) soloist's narrative, on the other. It also contributes to the more or less immediate comprehensibility of the message in a genre whose unabashed suasive intent generally leaves few aspects of outer form to the connoisseur's imagination.
The most important consideration for analysis is the sense of periodicity, by
which I mean the tendency of the sonic material to imply continuation and to
attain a degree of closure. To get at this quality, I will ask the same three questions that were introduced at the end of chapter 1: Where does the motion begin?
Where does it end? How does it get there? These questions are meant to help channel our intuitions about the shape of the compositional dynamic and to guide the
construction of periodicity.
One final, general point needs to be made before we begin the analyses.
Although the large literature dealing with form, rhythm, and periodicity in tonal
music has made available a wealth of insights, one aspect of the literature suggests that we might think a little differently. Too many studies of Romantic music
2. William Rothstein, Phrase Rhythm in Tonal Music (New York: Schirmer, 1989).
3. Caplin, Classical Form.
4. Christopher Hasty, Meter as Rhythm (New York: Oxford University Press, 1997).
The stepwise melody played by second violins starting in the second half of bar
23 leads so directly to the new beginning initiated in bar 25 that one is inclined to
hear an elision of phrases. Bar 24, analogous to bar 16 in one hearing, suggests a
Example 3.1. Mahler, Symphony no. 4, third movement, bars 1–61. Ruhevoll (poco adagio).
80
PART I
Theory
(bar 29) onward. Mahler pulls all the usual stops available to the Romantic composer: a high register that we know to be unsustainable, a rounding-up circle-of-fifths harmonic progression (E–A–D [and, eventually] G), chromatic inflection, and, perhaps most significant, arrival on a dominant-functioning 6/4 chord in bar 31, an archetypal sign of impending closure. The sense of dominant will extend through bars 31 to 36, conferring on the entire 25–36 phrase a concluding function. In colloquial terms, it is as if we began the movement singing a song that we did not finish (1–16), repeated it in an intensified version without attaining closure (17–24), and then sang it in an even more intensified version, reaching a longed-for cadence on this third attempt (25–37). These qualities of intensification and repetition reside in the domain of periodicity; we might even say that they embrace the whole of the music.
Bar 37 marks the onset of yet another beginning in the movement, the fourth in the larger scheme of things. The by-now-familiar pizzicato bass (that Schubert played on the piano to accompany his singer in "Wo ist Sylvia?") is heard, and it recalls the three previous beginnings at 1, 17, and 25, only now doubled an octave lower. The uppermost melody is now tinged with a sense of resignation, dwarfed in its ambitions by the melodic achievements of the previous period. By now, we are beginning to sense a circularity in the overall process. Perhaps the formal mode here is one of variation. This fourth period, however, is soon led to a decisive cadence in bars 44–45, and the attainment of the cadence is confirmed by a conventional I–IV–V–I progression, complete with a tonicization of IV (bars 46–47). The fact that the music beginning in bar 37 showed no ambitions initially and then moved to enact a broad and decisive cadence confers on this fourth period a sense of closing, a codetta-like function, perhaps. After the cadence in bars 36–37, we might have sensed a new, strong beginning. But the simulated strength was short-lived, and the cumulative pressure of closure held sway, hence the big cadence in bars 44–45.
The relative strength of the cadence in bars 44–45 sets the character of the joins between phrases into relief. Every time we locate a musical process as beginning in a certain measure, we are in danger of lying, or at least of undercomplicating a complex situation. Attending to periodicity and the tendency of the material is a useful way of reminding ourselves of how fluid are phrase boundaries and how limited are conventional means of analytic representation. Consider the succession in bars 16–17. The approach to bar 16 tells the listener that we are about to make a half-cadence. Indeed, Mahler writes an apostrophe into the score at this point, registering the separateness of the moment and the musical thought that it concludes. There is therefore, strictly speaking, no cadential (V–I) progression between bars 16 and 17. Moreover, bar 17 is marked by other signs of beginning (entrance of a new melody, return of the old melody slightly decorated), thus encouraging us to hear 16 as concluding a process, albeit an incomplete one. The join at bars 24–25, too, is illusory. Here, too, we should, strictly speaking, not imagine an authentic cadence because the approach to 24 effects the manner of a conclusion of an incomplete thought, a conclusion prepared by a passionate melodic outburst. What complicates this second join is the rather deliberate physical movement led by the dueting voices, second violin and cello. In a more nuanced representation, we might say that the 24–25 join conveys a greater dependency than the 16–17 join, but that neither join approximates a proper cadence.
The third main join in the movement is at 36–37. Here, the sense of cadence is harder to ignore. I mentioned the big dominant arrival in 31, which harmonic degree is extended through 36, finding resolution at the beginning of 37. Thus, while bars 1–16 and 17–24 each end with a half-cadence, bars 25–37 conclude with a full cadence. Notice, however, that the join in 36–37 is undermined by the return of our pizzicato bass, by now a recognizable signifier of beginnings. We might thus speak of a phrase elision, whereby 37 doubles as the end of the previous phrase and the beginning of another.
From bar 37 on, the business of closing is given especial prominence. If the cadence in bars 36–37 is authentic but perhaps weakly articulated because of the phrase elision, the next cadence in bars 44–45 is a stronger authentic cadence. The melodic tendency in bars 43–44 sets up a strong desire for 1̂, whereas that in 36–37, passing as it does from 5̂ through 4̂ to 3̂, forgoes the desire for 1̂. In addition, the precadential subdominant chord in bar 43 strengthens the sense of cadence in bars 44–45. Working against this, however, is the new melody sung by bassoons and violas beginning in bar 45, which Mahler took over from an earlier symphony. Again, the elision weakens the cadence and promotes continuity, but not to the same extent as happened in bars 36–37. The intertextual gesture also underlines the discontinuity between bars 44 and 45.
The next punctuation is the authentic cadence at bars 50–51. Some listeners will hear the pizzicato bass notes, reinforced, this time, by harp, not only as a sign of beginning, as we have come to expect, but, more important, as a sign of ending: the poetic effect that Brahms, among other composers, uses to signal ultimate home-going. Since the close in 50–51 comes only 7 bars after the one in 44–45, our sense that we are in the region of a global close is strengthened. Indeed, from bar 51 onward, the musical utterance gives priority to elements of closure. Launching this last closing attempt are bars 51–54. Then, bars 55–56 make a first attempt at closure, complete with an archetypal 3̂–2̂ melodic progression harmonized conventionally as I–V7 (V7 is literally expressed by V4/2, but the underlying sense is of V7). A second attempt is made in 57–58, also progressing from IV. The third attempt in 59–61 remains trapped on I. It is then stripped of its contents, reduced to a single pitch class, B, which in turn serves as 5̂ in the E-minor section
that follows in bar 62.
With the benefit of hindsight, we may summarize the segments or units that articulate a feeling of periodicity as follows:

bars 1–16: 16 bars
bars 17–24: 8 bars
bars 25–37: 13 bars
bars 37–45: 9 bars
bars 45–51: 7 bars
bars 51–54: 4 bars
bars 55–56: 2 bars
bars 57–58: 2 bars
bars 59–61: 3 bars
That some segments are relatively short (2, 3, or 4 bars) while others are long (8, 9,
13, or 16 bars) may lead some readers to suspect that there has been a confusion
of levels in this reckoning of periodicity. But the heterogeneity in segment length
is a critical attribute of the idea of periodicity being developed here. Periods must
be understood not as fixed or recurring durational units but as constellations that
promote a feeling of closing. If it takes 16 bars to articulate a sense of completion,
we will speak of a 16-bar period. If, on the other hand, it takes only 2 bars to convey the same sense, we will speak of a 2-bar period. Periodicity in this understanding is similar to the function of periods in prose composition; it is intimately tied
to the tendency of the musical material. It is not necessarily based on an external
regulating scheme. Of course, such schemes may coincide with the shapes produced by the inner form, but they need not do so and often do not. The listener
who attends to the specific labor of closing undertaken by individual segments of a
work attends to more of the overall sense of the music than the listener who defers
to the almost automatic impulse of a regulating phrase-structural scheme.
Finally, it is evident that talk of periodicity is implicitly talk of some of the
other features that I am developing in this and the previous chapter. The idea of
beginnings, middles, and endings is germane to the experience of periodicity.
Similarly, high points mark turning points within the form and are likely to convey
the periodic sense of a work.
Schubert, "Im Dorfe"
Periodicity in song is a complex, emergent quality. The amalgamation of words and music expands the constituent parameters of a work. The poem comes with its own periods, its own sense units, and when words are set to music, they develop different or additional periodic articulations based on their incorporation into a musical genre. And if it is to remain coherent within the constraints of its own language, the music must be subject to certain rules of well-formedness. It must, in other words, work at the dual levels of langue and parole, that is, conform simultaneously to the synchronic state of early nineteenth-century tonal language and to the peculiarities, mannerisms, and strategies of the composer.
The harmonic trajectory of Schubert's "Im Dorfe" from his Winterreise cycle exemplifies such well-formedness. An atmospheric night song, "Im Dorfe" exemplifies a mode of periodicity based on the statement and elaboration of a simple, closed harmonic progression. I will have more to say about harmonic models and processes of generation in coming chapters. Here, I simply want to show how simple transformations of an ordinary progression confer a certain periodicity on the song.5
Example 3.2 shows the nine periods of "Im Dorfe" in the form of chorale-like harmonic summaries. Ordered chronologically, the periods are as follows:

Period 1: bars 1–8
Period 2: bars 8–19
Period 3: bars 20–21
Period 4: bars 22–23
Period 5: bars 23–25
Period 6: bars 26–28
Period 7: bars 29–31
Period 8: bars 31–40
Period 9: bars 40–49
bars 31–40 (period 8). Expansion of this model begins in bar 36, where the 6/4 is minor rather than major, which then opens up the area around B-flat. B-flat later supports an augmented-sixth chord that prepares the elaborate hymn-like cadence of bars 38–40. (The similarity between the chorale texture of Schubert's
music in these bars and the chorale texture employed in the demonstration of our harmonic models may provide some justification, if such were needed, for the exercise represented in example 3.2.) This model is now repeated as period 9 in bars 40–49, with bars 46–49 functioning simply as an extension of tonic. In short, the A' section consists of two longer periods, 8 and 9.
If the sense units of Schubert's "Im Dorfe" as described here are persuasive, we can appreciate one aspect of Schubert's craft. In responding to the mimetic and declamatory opportunities presented by a verbal text, he retains a secure harmonic vision distributed into nine periods of varying length. Longer periods occur in the outer sections while shorter ones occur in the more fragmented middle section. The periodicity story is, of course, only one of several that might be told about the song, but the strength of Schubert's harmonic articulation may encourage us to privilege this domain in constructing a more comprehensive account of the song's overall dynamic.
Schumann, "Ich grolle nicht"
The first 4 bars comprise period 1, and this may also be taken as the model for harmonic motion in the song. Beginning on the tonic, period 1 outlines the subdominant area, incorporates mixture at the beginning of bar 3 (by means of the note A-flat), and then closes with a perfect cadence. The second period is twice as long (bars 4–12), but it covers the same ground, so to speak. That is, the bass line descends by step, filling in the gaps left in the model (period 1). And the bass approaches the final tonic also by step, including a chromatic step. Bars 4–12 are therefore a recomposition of 1–4.
Example 3.3. Periodic structure of Schumann's "Ich grolle nicht," from Dichterliebe.
[Chorale-style harmonic summary with figured bass, showing six periods: period 1 (bars 1–4), period 2 (bars 4–12), period 3 (bars 12–19), period 4 (bars 19–22), period 5 (bars 22–30), and period 6 (bars 30–36).]
The next period, period 3, occupies bars 12–19, overlapping with periods on either side. Here, the I–IV6 motion from the beginning of period 1 is expanded by the incorporation of the dominant of vi, so that instead of I–IV6, we have I–V/vi–vi. And the close of the period, bars 16–19, uses the same bass progression as that at the end of the previous period (9–12; the notes are G–A–A–B–C). Given these affinities, we can say that this third period is a recomposition of the second, itself an expansion and reconfiguring of the first. Note that periods 2 and 3 are of roughly the same length in bars, while period 1, the model, is about half their length. In other words, periods 2 and 3 are (temporally) closer than 2 and 1 or, for that matter, 3 and 1. Again, the feeling of periodicity is conveyed by punctuation (including its absence), not by phrase length in bars.
Period 4 is identical to period 1. Period 5 begins and continues in the manner of period 2 but closes with a decisive cadence in bars 28–30. This is exactly the same cadence that concluded periods 1 and 4. Thus, the idea that period 5 recomposes period 4 is enhanced. Thematically speaking, period 5 is a recomposition of period 2, but it incorporates the cadence pattern of periods 1 and 4. Since
Stravinsky, The Rake's Progress, act 1, scene 3, Anne's aria, "Quietly, night"
When the intrinsic tendencies of dominant- and diminished-seventh sonorities
no longer form the basis of a composer's idiolect, periodicity has to be sought in
other realms. Sometimes, it is conferred in retrospect rather than in prospect, the
[Musical example: the vocal line of Anne's aria, with its text underlay, beginning "Quietly, night, although it be unkind, although I weep" and continuing "Guide me, O moon, chastely . . . It cannot, cannot be thou art."]
How are the two strophes enacted? We begin with a 1-bar orchestral vamp, then Anne sings her phrases in 2-bar segments. The phrase "Although I weep" bears the climactic moment. Stravinsky allows Anne a rest before intoning the F-sharp in bar 12 to begin this intensified expression. Twice she sings the phrase, ending, first, with a question mark (bar 15) and, on the second try, with a period (bar 18). The second try continues the verbal phrase to the end, "Although I weep, it knows
of loneliness." The close on B minor in bar 18 recalls the opening and, in the more recent context, answers directly to the dominant of bar 15. Anne also manages to incorporate a tiny but significant 3̂–2̂–1̂ melodic motion on the second and third syllables of "loneliness." Woodwinds immediately echo the second of Anne's climactic phrases in part as remembrance and in part as a practical interlude (or time out for the singer) between two strophes.
The listener's ability to form a synoptic impression of this first period (bars 1–20) is made challenging by the additive phrase construction. There is no underlying periodic framework except that which is conveyed by the pulsation. But pulsation is only the potential for periodicity, not periodicity itself. It is Anne's two climactic phrases that gather the strands together and compel a feeling of closure. Here, it is difficult to predict the size of individual periods. We simply have to wait for Stravinsky to say what he wants to say and how.
"Guide me, O moon" begins the second strophe to the melody of "Quietly, night." For 4 bars, strophe 2 proceeds as an exact repetition of strophe 1, but at the words "And warmly be the same," Stravinsky takes up the material of the climactic phrase "Although I weep." The effect is of compression: the preparatory processes in bars 5–12 are cut out, bringing the climactic phrase in early. The reason for this premature entrance in bar 24 is that the climactic region is about to be expanded. Anne now intensifies her expression by affecting a coloratura manner on her way to the high point of the aria, the note B-natural lasting a full bar and a half (bars 30–31) and extended further by means of a fermata. This duration is entirely without precedent in the aria. Indeed, the string of superlatives that mark this moment, including the withdrawal of the orchestra at the end of the note so that the timbre of Anne's voice does not compete with any other timbre, is such that little can or need be said in the aftermath of the high point. Anne simply speaks her last line ("A colder moon upon a colder heart") in a jagged, recitative-like melody that spans two octaves from the high B of the climax to the B adjacent to middle C. The orchestra comments perfunctorily, without commitment.
Two aspects of the periodicity of Anne's aria may be highlighted here. First, the apparent two-stanza division that enabled us to speak of two large periods is, as previously noted, somewhat limited. The second period (starting in bar 21) is not merely a repeat of the first, although it begins like it and goes over some of its ground; it is rather a continuation and intensification of it. We might speak legitimately here, too, of a Romantic narrative curve beginning at a modest level, rising to a climax, and then rapidly drawing to a close. A second aspect concerns the modes of utterance employed by Anne. These modes reflect greater or lesser fidelity to the sound of sung language. If we locate three moments in the vocal trajectory, we see that Anne begins in speech mode or perhaps arioso mode (bars 2–3), reaches song mode at "Although I weep," and then exceeds song mode at the words "It cannot, cannot be thou art." The latter is an extravagant vocal gesture that, while singable, begins to approximate instrumental melody. This moment of transcendence is followed by withdrawal into speech mode: a strictly syllabic, unmelodic rendering of "A colder moon upon a colder heart," finishing with a literal, speech-like muttering of "a colder heart" on four B-naturals with durations that approximate intoned speech. In sum, Anne begins in speech mode, reaches
90
Theory
PART I
song mode, then a heightened form of song mode, before collapsing into speech
mode. Again, according to this account of modes of utterance, the periodic sense
cuts across the aria as a whole, forming one large, indivisible gesture.
Bartók uses a Hungarian folk song that he had collected in 1907 as the basis for this written-down improvisation. The compositional conception, therefore, is from song to instrumental music. The idea in the First Improvisation (as indeed in several of the others) is to preserve the folk source, not to transform it. The melody is 4 bars long, and Bartók presents it three times in direct succession and then appends a 4-bar codetta. The first presentation borrows the opening major-second dyad from the melody (F–E) and uses it at two different pitch levels to
accompany the song. The pitch material of the accompaniment, although sparse,
is wholly derived from the melody itself. In terms of periodicity, these first 4 bars
retain the intrinsic periodicity of the melody itself. One aspect of that periodicity is evident in the rhythmic pattern: bars 1–3 have the same pattern while bar
4 relinquishes the dotted note and the following eighth-note. In the tonal realm,
bar 1 presents an idea, bar 2 questions that idea by reversing the direction of the
utterance, bar 3 fuses elements of bars 1 and 2 in the manner of a compromise as
well as a turning point, and bar 4 confers closure by incorporating the subtonic
(B-flat). These 4 bars promote such a strong sense of coherence and completeness
that they may be said to leave little room for further expectations; they carry no
implications and produce no desires. Whatever follows simply follows; it is not
necessitated by what preceded it.
The second statement of the melody (bars 5–8) replaces the dyads of the first
statement with triads. Triadic succession, though, is organum-like or, perhaps,
Debussy-like, insofar as the triads move in parallel with no obvious functional
purpose. In other words, the melody merely reproduces itself in triadic vein. Thus,
the trajectory of the original melody remains the determinant of the periodic
sense of these second 4 bars. By merely duplicating the melody, this second 4-bar
period is figured as simpler and more consonant than the first; it incorporates
no contrapuntal motion. As before, no internal expectations are generated by the
triadic rendition of the Hungarian folk song. Formal expectations will, however,
begin to emerge from the juxtaposition of two statements of the folk song. We may
well suspect that we will be hearing it in different environments.
The third statement (bars 9–12) turns out to be the most elaborate. Each of the first 3 bars begins with a D-minor triad, while its second half uses what may well
come across as nontonal sonorities.7 The melody, now doubled, is placed where it
should be, namely, in the upper register (this contrasts with the first two appearances of the folk song). This third occurrence marks the expressive high point
of the improvisation, not only because Bartók marks it espressivo but because of
the thicker texture, the more intense harmonies, the more salient projection of the
melody, and the full realization of a melody-accompaniment relationship.
This climactic region (bars 9–12) is also, and more obviously, known in retrospect. The last 4 bars of the improvisation do not begin by stating the folk melody as before. Rather, the first of them (bar 13) echoes the last bar of the folk melody in a middle (that is to say, unmarked) register; then the next three present an intervallically augmented version of the same last bar, modifying the
pitches but retaining the contour. The harmony supporting this melodic manipulation in the last 4 bars is perhaps the most telling in conveying a sense of closure.
A succession of descending triads on E-flat minor, D minor, and D-flat minor
seems destined for a concluding C major, but this last is strategically withheld and
represented by a lone C, middle C. Bars 13–16 may be heard as a recomposition of
7. Paul Wilson finds instances of pitch-class sets 4-18 and 5-16 in this 4-bar period, thus acknowledging the nontriadic nature of the sonorities. See Wilson, "Concepts of Prolongation and Bartók's Opus 20," Music Theory Spectrum 6 (1984): 81.
bar 8 (one difference being that the missing steps are filled in) and, more broadly, as a reference to the entire second statement of the folk melody in bars 5–8.
The feel of periodicity in this improvisation is somewhat more complex. The first three 4-bar phrases are self-contained, so they may be understood from a harmonic, melodic, and phrase-structural point of view as autonomous, as small worlds in succession. The last 4-bar phrase carries a closing burden: it embodies a conventional gesture of closure (echo what you have heard, slow things down, and let every listener know that the end is nigh). Indeed, to call it a 4-bar phrase is to mislead slightly because there are no internal or syntactical necessities to articulate the 4-bar-ness. The number 4 depicts a default grouping, not a genuine syntactical unit. (The recomposition in example 3.6 compresses Bartók's 4 bars to
2, but their periodic effect is not really different from the original.) By eschewing the dependencies of common-practice harmony, without, however, dispensing with gestures of intensification and closure, the compositional palette in this work becomes diversified. This is not to say that common-practice repertoires lack this potential autonomization of parameters. It is rather to draw attention to their more obvious constructive role in Bartók's language.
Example 3.6. Recomposed ending of Bartók, Improvisations for Piano, op. 20, no. 1. [Score excerpt juxtaposing Bartók's original ending with the recomposed version.]
The foregoing discussion of periodicity should by now have made clear that
periodicity is a complex, broadly distributed quality that does not lie in one parameter. It is an emergent, summarizing feel that enables us to say that something
begun earlier is now over or about to be concluded. Talk of periodicity therefore
necessarily involves us in talk about some of the other criteria for analysis that I
have been developing in chapter 2 and the present chapter. Like form, the notion
of periodicity embraces the whole of music. Focusing on punctuation and closure and their attendant techniques helps to draw attention to this larger, emergent quality.
8. But see Hatten's Interpreting Musical Gestures, Topics, and Tropes, 267–286.
Carolyn Abbate describes the onset of the so-called Gesang theme as "an interruption . . . a radically different musical gesture." For her, this moment marks a deep sonic break; indeed, "cracks fissure the music" at the entry of the Gesang.9 These characterizations ring true at an immediate level. This otherworldly moment is clearly marked and maximally contrasted with what comes before. Difference embodies discontinuity. Note, however, that this characterization works in part because it refuses technical designation. If, instead of responding to the aura of the moment, we seek to understand, say, the motivic logic or the nature of succession in the realms of harmony, voice leading, or even texture, the moment will seem less radically discontinuous and more equivocal. For one thing, in the bars preceding the onset of the Gesang theme, a triplet figure introduced in the bass continues past the ostensible break and confers an element of motivic continuity. Attending to the voice leading in the bass, too, leads one to a conjunct descent, C–C♭–B♭, the ostensible crack occurring on C-flat. On the other hand, texture and timbre are different,
as are dynamics and the overall affect. Thus, while the action in the primary parameters presents a case for continuity, the action in the secondary parameters presents
a case for discontinuity. Recognizing such conflicting tendencies by crediting the
potential for individual parameters to embody continuity or discontinuity may help
to establish a more secure set of rules for analysis. My task here, however, is a more
modest one: to cite and describe a few instances of discontinuity as an invitation to
students to reflect on its explanatory potential.
Looking back at the classical style as point of reference, we can readily recall
moments in which discontinuity works on certain levels of structure. A good
example is the first movement of Mozart's D Major Sonata, K. 284, which I mentioned in the previous chapter on account of its active topical surface. A change
of figure occurs every 2 bars or so, and listeners drawn to this aspect of Mozart
are more likely to infer difference, contrast, and discontinuity than smooth continuity. Indeed, as many topical analyses reveal and as was implied in discussions
of character in the eighteenth century, the dramatic surface of classic music
sometimes features a rapid succession of frames. There is temporal succession,
but not progression. Things follow each other, but they are not continuous with
each other.
Think also of the legendary contrasts, fissures, and discontinuities often heard
in the late music of Beethoven. In the Heiliger Dankgesang of op. 132, for example, a slow hymn in the Lydian mode alternates with a Baroque-style dance in 3/8,
setting up discontinuity as the premise and procedure for the movement. The very
first page of the first movement of the same quartet is even more marked by items
of textural discontinuity. An alla breve texture in learned style enveloped in an
aura of fantasy is followed (interrupted, some would say) by an outburst in the
form of a cadenza, then a sighing march tune in the cello, then a bit of the sensitive
9. Abbate, Unsung Voices, 150–151. See Agawu, "Does Music Theory Need Musicology?" Current Musicology 53 (1993): 89–98, for the context of the remarks that follow. A discussion of discontinuity in Beethoven can also be found in Lawrence Kramer, Music as Cultural Practice (Berkeley: University of California Press, 1990), 190–203, and in Barbara Barry, "In Beethoven's Clockshop: Discontinuity in the Opus 18 Quartets," Musical Quarterly 88 (2005): 320–337.
they are syntactically dispensable. Both the opening and closing of a parenthesis
enact a discontinuity with the events that precede and follow, respectively. In the
harmonic realm, for example, a parenthesis may introduce a delay in the approach
to a goal or enable a prolongation or even a sustaining extension for the sake of
play; it may facilitate the achievement of temporal balance or be used in response
to a dramatic need contributed by text. In the formal realm, a parenthesis may
introduce an aside, an insert, a by-the-way remark. Parentheses in verbal composition have a different significance from parentheses in musical composition. In
a verbal text, where grammar and syntax are more or less firmly established, the
status of a parenthetical insertion as a dispensable entity within a well-formed
grammatical situation is easy to grasp. In a musical situation, however, although we
may speak of musical grammar and forms of punctuation, an imagined excision of
the material contained in a so-called parenthesis often seems to deprive the passage in question of something essential, something basic. What is left seems hardly
worthwhile; the remaining music is devoid of interest; it seems banal. This suggests
that musical parentheses are essential rather than inessential. A grammar of music
that does not recognize the essential nature of that which seems inessential is likely
to be impoverished.
Consider a simple chordal example. In the white-note progression shown in
example 3.8, we can distinguish between structural chords and prolonging chords.
The sense of the underlying syntax (the progression's harmonic meaning) can be conveyed using the structural chords as framework. In that sense, the intervening
chords may be said to be parenthetical insofar as the structure still makes sense
without them. And yet the prolongational means are so organically attached to
the structural pillars that the deparenthesized progression, while able to convey
something of the big picture by displaying the structural origins of the original
passage, seems to sacrifice rather a lot. Indeed, what is sacrificed in this musical
minor for an actual recapitulation (bar 84). The parenthetical passage is the only sustained major-mode passage in the movement. Its expressive manner is intense. I hear a foreshadowing of a passage from one of Richard Strauss's Four Last Songs, "September," in bars 72–75 of the Beethoven. Locally, the parenthetical material continues the process of textural intensification begun earlier in the movement. If the achievement of tonal goals is accorded priority, then interpreting bars 70–80 as a parenthesis is defensible. But the material inside the parenthesis is dispensable only in this limited sense.
Periodicity, then, embraces the whole of music. As a quality, it is distributed
across several dimensions. I have talked about cadences and cadential action, closure, high points, discontinuity, and parenthesis. The overarching quality is closure, including its enabling recessional processes. A theory of musical meaning is
essentially a theory of closure.
12. On corporeality in music, with an emphasis on Chopin, see Eero Tarasti, Signs of Music: A Guide to Musical Semiotics (Berlin: de Gruyter, 2002), 117–154.
although it remains the unmarked mode for all Romantic composers. And the
speech mode, although hierarchically differentiated from the song mode, also possesses near-native status for composers; given the deep but ultimately problematic
affinities between natural language and music (discussed in chapter 1) and given
the qualitative intensification of word dependency in nineteenth-century instrumental practice, it is not surprising to find composers exploring and exploiting the
speech mode of enunciation to set the others into relief.
How are these three modes manifested in actual composition? In speech mode,
the instrument speaks, as if in recitative. The manner of articulation is syllabic,
and resulting periodicities are often asymmetrical. Song and dance modes inhabit
the same general corner of our conceptual continuum. Song mode is less syllabic
and more melismatic. Periodicity is based on a cyclical regularity that may be broken from time to time for expressive effect. And, unlike speech mode, which is not
obligated to produce well-formed melody, the song mode puts melody on display
and calls attention to the singing voice, be it an oboe, English horn, violin, flute, or
piano. Song mode departs from the telling characteristic of speech. The impulse
to inform or deliver a conceptually recoverable message is overtaken by an impulse
to affect, to elicit a smile brought on by a beautiful turn of phrase. Accordingly,
where speech mode may be said to exhibit a normative past tense, song mode is
resolutely wedded to the present. While the dance mode often includes song, its
most marked feature is a sharply profiled rhythmic and metric sense. The invitation to dance (to dance imaginatively) is issued immediately by instrumental music in dance mode. This mode is thus deeply invested in the conventional and
the communal. Since dance is normally a form of communal expression, the stimulus to dance must be recognizable without excessive mediation. This also means
that a new dance has to be stabilized over a period of time and given a seal of social
approval. A new song, by contrast, has an easier path to social acceptance.
As always with simplified models like this, the domains of the three modes are
not categorically separate. I have already mentioned the close affinity between song
mode and dance mode. A work whose principal affect is located within the
song mode may incorporate elements of speech. Indeed, the mixture of modes is
an important strategy for composers of concertos, where the rhetorical ambitions
of a leading voice may cause it to shift from one mode to another in the manner
of a narration.
Examples of the three modes abound, but given our modest purposes in this
and the previous chapter (namely, to set forth with minimum embroidery certain basic criteria for analysis), we will mention only a few salient examples and
contexts. Late Beethoven is especially rich in its exploitation of speech, song, and
dance modes. Scherzos, for example, are normative sites for playing in dance
mode. A good example is the scherzo movement of the Quartet in B-flat Major,
op. 130. Dance and song go hand in hand from the beginning. They have different trajectories, however. Once the dance rhythm has been established, it maintains an essential posture; we can join in whenever we like. Song mode, on the
other hand, refuses a flat trajectory. The degree of songfulness may be intensified
(as in bars 9ff.) or rendered normatively. Of particular interest in this movement,
however, is Beethoven's speculative treatment of the dance. While the enabling
Example 3.9. The speech mode in Beethoven, String Quartet in B-flat Major, op. 130, third movement, bars 48–63.
listener's persona merges with the composer's for a brief moment. Finally, in bar 64, Beethoven reactivates the dance and song modes by shutting the window that allowed us a peek into his workshop and reengages the listener as dancer for the remainder of the movement.
Similarly striking is the invocation of speech mode in the transition from
the fourth to the fifth movements of the Quartet in A Minor, op. 132. The fourth
movement, marked alla Marcia, begins as a 24-bar march in two-reprise form.
Marching affects the communality of dance mode. Immediately following is an
invocation of speech mode in the form of a declamatory song, complete with
tremolo effects in the lower strings supporting the first violin. This emergence of a
protagonist with a clear message contrasts with the less stratified stance of the preceding march. In this mode of recitative, meter and periodicity are neutralized, as
if to neutralize the effect of the march, which, although written in four, succumbs
to groupings in three that therefore complicate the metrical situation. (The coming
finale will be in an unequivocal three.) The rhetorical effect of this instantiation of
speech mode is to ask the listener to wait, to wait for a future telling. But the gesture
is fake, a simulation; there is nothing to be told, no secrets to be unveiled, only the
joy of playing and dancing that will take place in the finale. These games never fail
to delight.
Robert Schumann is one composer in whose music the speech and song
modes of enunciation play a central role. Numerous passages in the favorite Piano
Quintet tell of a telling, stepping outside the automatic periodicity built on 4-bar
phrases to draw attention to the music itself, thus activating the speech mode. The
D Minor Symphony, too, features moments of speech whose effect is made more
poignant by the orchestral medium. His songs and song cycles are rich sources of
this interplay; indeed, they are very usefully approached with a grid based on the
interaction between speech and song modes. In Dichterliebe, for example, song 4
unfolds in the interstice between speech mode and song mode, a kind of declamatory or arioso style; song 9 is in dance mode; song 11 in song mode; and song
13 in speech mode. The postlude to the cycle as a whole begins in song mode by
recalling the postlude to song 12 (composers typically recall song, not speech). As
it prepares to close, the speech mode intrudes (bars 59–60). Then, some effort has
to go into reclaiming the song mode, and it is in this mode that the cycle reaches its
final destination. (We will return to this remarkable postlude in connection with
the discussion of narrative below.)
When the poet speaks at the close of the Kinderszenen collection ("Der Dichter spricht"), he enlists the participation of a community, perhaps a Protestant one. A chorale, beginning as if in medias res and inflected by tonal ambivalence, starts things off (bars 1–8). This is song, sung by the congregation. Then, the poet steps forward with an introspective meditation on the head of the chorale (bars 9–12). He hesitates, stops, and starts. This improvisatory manner moves us out of the earlier song mode toward a speech mode. The height of the poet's expression (bar 12, second half) is reached by means of recitative (speech mode in its most authentic state). Here, we are transported to another world. Our community is now far
not song, makes this possible. Then, as if waking from a dream, the poet joins the
congregation in beginning the chorale again (bar 18). In song mode, we are led
gradually but securely to a place of rest. The cadence in G major at the end has been
long awaited and long desired. Its attainment spells release for the body of singers.
None of these modes is written into Schumann's score. They are speculative projections based on affinities between the composition's specific textures and conventional ones. The sequence I have derived (song mode, speech mode, heightened speech mode or recitative, and finally song mode) seeks to capture the composer's way of proceeding. Of course, the modes are not discrete or discontinuous; rather, they shade into each other in accordance with the poet's modulated utterances.
Chopin, too, often interrupts a normative song mode with passages in speech
mode. The Nocturne in B Major, op. 32, no. 1, proceeds in song mode from the
beginning, until a cadenza-like flourish in bar 60 prepares a grand cadence in bar
61. The resolution is deceptive, however (bar 62), and this opens the door to a
recitative-like passage in which speech is answered in the manner of choral affirmation. In these dying moments of the nocturne, speech mode makes it possible
to enact a dramatic effect.
The three modes of enunciation introduced here are in reality three moments
in a larger continuum. As material modes, they provide a framework for registering
shifts in temporality in a musical work. In song, where words bear meaning and
also serve as practical vehicles for acts of singing, it is sometimes possible to justify
a reading of one mode as speech, another as song, and a third as dance by appealing
to textual meaning and by invoking a putative intentionality on the part of the composer. In nonprogrammatic instrumental music, by contrast, no such corroborative
framework exists; therefore, hearing the modes remains a speculative exercise, but this says nothing about their credibility or the kinds of insight they can deliver.
Narrative
The idea that music has the capacity to narrate or to embody a narrative, or that we can impose a narrative account on the collective events of a musical composition, speaks not only to an intrinsic aspect of temporal structuring but to a basic human need to understand succession coherently. Verbal and musical compositions invite interpretation of any demarcated temporal succession as automatically endowed with narrative potential. Beyond this basic level, music's capacity to craft a narrative is constantly being undermined by an equally active desire (a natural one, indeed) to refuse narration. Accordingly, the most fruitful discussions of musical narrative are ones that accept the imperatives of an aporia, of a foundational impossibility that allows us to seek to understand narrative in terms of nonnarration. When Adorno says of a work of Mahler's that it "narrates without being narrative,"13 he conveys, on the one hand, the irresistible urge to make sense
13. Theodor Adorno, Mahler: A Musical Physiognomy, trans. Edmund Jephcott (Chicago: University of
Chicago Press, 1992), 76.
of a temporal sequence by recounting it, and, on the other hand, the difficulty of locating an empirical narrating voice, guiding line, or thread. Similarly, when Carl Dahlhaus explains that music narrates only intermittently (akin, in our terms, to the intrusion of speech mode in a discourse in song or dance mode), he reminds us of the difficulty of postulating a consistent and unitary narrative voice across the span of a composition.14 Using different metalanguages, Carolyn Abbate, Jean-Jacques Nattiez, Anthony Newcomb, Vera Micznik, Eero Tarasti, Márta Grabócz, Fred Maus, and Lawrence Kramer have likewise demonstrated that it is at once impossible to totally resist the temptation to attribute narrative qualities to a musical composition and, at the same time, challenging to demonstrate narrative's musical manifestation in a form that overcomes the imprecision of metaphorical language.15
Ideas of narrative are always already implicit in traditional music analysis. When an analyst asks, "What is going on in this passage?" or "What happens next?" or "Is there a precedent for this event?" the assumption is often that musical events are organized hierarchically and that the processes identified as predominant exhibit
some kind of narrative coherence either on an immediate level or in a deferred
sense. The actual musical dimensions in which such narratives are manifest vary
from work to work. A favorite dimension is the thematic or motivic process, where
an initial motive, figured as a sound term, is repeated again and again, guiding the
listener from moment to moment, thus embodying the works narrative. Another
ready analogy lies in tonal process, specifically in the idea of departure and return.
If I begin my speech in C major and then move to G major, I have moved away
from home and created a tension that demands resolution. The process of moving from one tonal area to another depends on the logic of narrative. If I continue
my tonal narrative by postponing the moment of return, I enhance the feeling of
narrative by setting up an expectation for return and resolution. The listener must
wait to be led to an appropriate destination, as if following the plot of a novel. And
when I return to C major, the sense of arrival, the sense that a temporal trajectory
has been completed, the sense that a promise has been fulfilledthis is akin to the
experience of narration.
On the deficit side, however, is the fact that, because of the high degree of redundancy in tonal music, because of the abundant repetition which we as listeners enjoy and bathe in, a representation of narration in, say, the first movement of Beethoven's Fifth Symphony comes off as impoverished and uninspiring insofar as it is compelled, within certain dimensions, to assert the same thing throughout the movement. The famous four-note motif (understood as rhythmic pattern as
Example 3.10. Song mode and speech mode at the close of Schumann's Dichterliebe. [Score excerpt, bars 50–67, with annotations marking song mode (bars 53 and 56) and speech mode (bar 59).]
At the start of example 3.10, the job of unveiling the ultimate destination of
the last song (and, for that matter, the cycle) is entrusted to the pianist. The singer
dropped off on a predominant sonority, leaving the pianist to carry the thought
Conclusion
Chapter 2 and the present chapter have introduced six criteria for the analysis
of Romantic music: topics; beginnings, middles, and endings; high points; periodicity (including discontinuity and parentheses); modes of enunciation (speech
mode, song mode, dance mode); and narrative. The specific aspects of this repertoire that the criteria seek to model are, I believe, familiar to most musicians. So,
16. For an imaginative, thorough, and incisive account of this postlude, see Beate Julia Perrey, Schumann's Dichterliebe and Early Romantic Poetics: Fragmentation of Desire (Cambridge: Cambridge University Press, 2003), 208–255.
although they are given different degrees of emphasis within individual analytical systems, and although there are features that have not yet been discussed (as indeed will become obvious in the following two chapters), I believe that these six,
pursued with commitment, have the potential to illuminate aspects of meaning in
the Romantic repertoire. Readers are invited to plug in their favorite pieces and
see what comes out.
Is it possible to put the concerns of our criteria together into a single, comprehensive model? And if so, what would the ideological underpinning of such
a gesture be? The criteria offered here seek an account of music as meaningful
discourse, as language with its own peculiarities. They are not mutually exclusive,
as we have seen, but necessarily overlapping; some may even lead to the same
end, as is easily imagined when one student focuses on periodicity, another on
the high-point scheme, and a third on narrative. A certain amount of redundancy
would therefore result from treating the criteria as primitives in an axiomatic or
generative system. Nor are they meant to replace the processes by which musicians
develop intuitions about a piece of music through performance and composition;
on the contrary, they may function at a metalinguistic level to channel description
and analysis formed from a more intuitive engagement.
Does this imply that the more perspectives one has at one's disposal, the better? If "putting it all together" were a statistical claim, then the more perspectives that one could bring to bear on a work, the better the analysis. But this kind of control is not my concern here. Making a good analysis is not about piling on perspectives in an effort to outdo one's competitors; it has rather to do with producing good and interesting insights that other musicians can incorporate into their own subsequent encounters with a work. The institutional burdens (marked during the heyday of structuralist methods in the 1980s) of having to deal with wholes rather than parts and of publicizing only those insights that could be given a theoretical underpinning may have delayed the development of certain kinds of analytical insight. If, therefore, and respectfully, I decline the invitation to try to put it all together here, I hope nonetheless that the partial and provisional nature of these outcomes will not detract from the reader/listener enjoying such moments of illumination as there may have been.
Chapter Four
Bridges to Free Composition
Tonal Tendency
Let us begin with the assumption that a closed harmonic progression constitutes
the norm of coherent and meaningful tonal order. Example 4.1 exemplifies such a
progression. Hearing it initially not as an abstraction but as a real and immediate
progression allows us to identify a number of tendencies fundamental to tonal
behavior and crucial to the development of a poetics of tonal procedure.
110
PART I
Theory
CHAPTER 4
111
1. Schenker, Der Tonwille: Pamphlets in Witness of the Immutable Laws of Music, vol. 1, ed. William
Drabkin, trans. Ian Bent et al. (Oxford: Oxford University Press, 2004), 66, 13.
2. Heinrich Schenker, Counterpoint: A Translation of Kontrapunkt, trans. John Rothgeb and Jürgen Thym (New York: Schirmer, 1987), 175.
3. Schenker, Der Tonwille, 21.
not a method of composition but a way of training the ear, so in one sense it is
beside the point what (superficial) stylistic forms a particular structural procedure
takes. An 8–5–8–5–8–5 intervallic pattern between treble and bass, an extensive
prolongation of the dominant via the flattened-sixth, or a delay in the arrival of
4. There are exceptions, of course. Already in 1981, Jonathan Dunsby and John Stopford announced a program for a Schenkerian semiotics that would take up questions of musical meaning directly. See Dunsby and Stopford, "The Case for a Schenkerian Semiotic," Music Theory Spectrum 3 (1981): 49–53. More recently, Naomi Cumming has drawn on Schenker in developing a theory of musical subjectivity. See her The Sonic Self: Musical Subjectivity and Signification (Bloomington: Indiana University Press, 2000). See also David Lidov's discussion of segmental hierarchies in Is Language a Music? 104–121. And there are other names (e.g., Alan Keiler, William Dougherty) that could be added to this list. Nevertheless, it would be hard to support the contention (judging from the writings of leading semioticians like Nattiez, Monelle, Tarasti, and Hatten) that a Schenkerian approach is central to the current configuration of the field of musical semiotics.
to separate the study of (Fuxian) species counterpoint from the study of the music
itself. But although convenient in concept, the dichotomy proved to be hard to
sustain at a practical level. Schenker, no doubt in possession of a huge supplement
of information pertaining to free composition, drew regularly on this supplement
in making his analyses, but he did not always make explicit the source of this other
knowledge. Instead, he held fast to the more immediate theoretical challenge.
Strict counterpoint is a closed world, a world built on rules and prescriptions
designed to train students in the art of voice leading and therefore to prepare them
for better understanding of the music of the masters. In strict counterpoint, there
are, in principle, no Stufen; there is no harmonic motivation, no framework for
making big plans or thinking in large trajectories. All we have are consonances
and dissonances, voice leading, and specific linear and vertical dispositions of
intervals. There is no repetition, no motivic life, no dance; only idealized voices
following very local impulses and caring little for phraseology, outer form, or the
referential potency of tonal material. Free composition, by contrast, deals with the
actual work of art; it is open and promiscuous and admits all sorts of forces and
licenses. Unlike strict counterpoint, it relies on scale steps, possesses genuine harmonic content, incorporates repetition at many levels, and perpetuates diversity at
the foreground. The essential nature of strict counterpoint is its strictness, that of
free composition its freedom.5 This is why counterpoint "must somehow be thoroughly separated from composition if the ideal and practical verities of both are to be fully developed."6 Indeed, according to Schenker, the original and fundamental error made by previous theorists of counterpoint (including Bellermann and Richter and even Fux, Albrechtsberger, and others) is "the absolute and invariable identification of counterpoint and theory of composition. We must never forget that a huge chasm gapes between the exercises of counterpoint and the demands of true composition."7
So much for the fanfare announcing the separation between strict counterpoint and free composition. If we now ask what the connection between the two
might be, distinctions become less categorical, their formulations more qualified
and poetic. Free composition is "essentially a continuation of strict counterpoint," writes Schenker in Tonwille.8 The phrase "essentially a continuation" implies that these are separate but related or relatable domains. In Kontrapunkt, Schenker says that, despite its "so extensively altered appearances," free composition is "mysteriously bound . . . as though by an umbilical cord, to strict counterpoint."9 Thus, at a practical or analytical level, the dividing line between them is porous, perhaps nonexistent. Their separation is a matter of principle.
Schenker used the suggestive metaphor "bridges to free composition" to describe a relationship between subsurface and surface, between background and foreground, and ultimately between strict counterpoint and free composition. The discussion of
10. See also Eytan Agmon, "The Bridges That Never Were: Schenker on the Contrapuntal Origin of the Triad and the Seventh Chord," Music Theory Online 3 (1997).
While all of this sounds logical, there has been a gap in the analysis. How did we
get from level b to level c? Obviously, we did this by knowing the design of level c in
advance and inflecting the derivational process in level b to lead to it. But how do we
know what stylistic resources to use so that c comes out sounding like Handel and
not Couperin or Rameau? How did we invent the design of the musical surface?
Schenker does not dwell on these sorts of questions; indeed, he seems to discourage us from dwelling on these aspects of the foreground. We are enjoined,
instead, to grasp the coherence that is made possible by the background. But if the
possibilities for activating the progression shown at level a are not spelled out, if
they are consigned to the category of unspecified supplement, and if the analyst of Handel's suite is not already in possession of knowledge of a whole bunch of allemandes from the early eighteenth century, how is it possible to generate a specific surface from a familiar and common background? Without, I hope, overstating the point, I suggest that the journey from strict counterpoint (level a) to free composition (level c) makes an illicit (or, better, a mysterious) leap as it approaches its destination. I draw attention to this enticing mystery not to suggest a shortcoming but to illustrate one consequence of this particular setting of theoretical limits.
11. Schenker, Free Composition, 3.
12. Schenker, Counterpoint, book 1, 59.
In the 2 bars from Bach's C-sharp Minor Prelude from book 1 of the Well-Tempered Clavier in example 4.4, we hear a familiar progression in which the bass moves by descending fifths while the upper voices enrich the progression with 7–6 suspensions. This model of counterpoint ostensibly enables Bach's free composition (level b). But here, too, there are no rules that would enable us to generate the specific dance-like material that is Bach's actual music.
Example 4.4. J. S. Bach, Well-Tempered Clavier, book 1, Prelude in C-sharp Minor, bars 26–28 (cited in Schenker, Counterpoint, 337).
C# minor: (V–) I VI II (I)
Level a of example 4.5 looks at first like a second species exercise, although its
incorporation of mixture (G-flat) takes it out of the realm of strict Fuxian species.
Whereas the demonstrations in the two previous examples "approximate . . . form[s] of strict counterpoint," thus placing the emphasis on a generative impulse, this
Example 4.5. Brahms, Handel Variations, op. 24, var. 23 (cited in Schenker,
Counterpoint, 192).
passage from Brahms is reduced to a clear two-voice counterpoint; the emphasis rests on a reductive impulse. Schenker points out that "the real connection between strict counterpoint and free composition can in general be discovered only in reductions similar to [example 4.5]."13 In other words, in the case of certain complex textures, the contrapuntal underpinning may lie further in the
background, requiring the analyst to postulate additional levels of explanatory
reduction. (Constructing the additional levels could be a valuable student exercise
in the case of example 4.5.) Whether the degree of reducibility is reflected in a
larger historical-chronological narrative or whether it conveys qualitative differences among compositions are issues left open by Schenker.
The mode of thinking enshrined in examples 4.3, 4.4, and 4.5, of hearing complex textures through simpler structures, makes possible several valuable projects, some of them historical, others systematic. We might envisage, for example, a history of musical composition based strictly on musical technique. This project was already implicit in Felix Salzer's 1952 book, Structural Hearing, but it has yet to engender many follow-up studies.14 Imagine a version of the history of composition based on the function of a particular diminution, such as the passing note, including leaping passing notes; or imagine a history of musical technique based on neighbor-note configurations, or arpeggiations. Granted, these are not exactly the sexiest research topics to which students are drawn nowadays, but they have the potential to illuminate the internal dynamics of individual compositions and to set into relief different composers' manners. Without attempting to be comprehensive, I can nevertheless hint at the kinds of insights that might accrue from such an approach by looking at a few examples of a single diminution: the neighbor-note.
Example 4.6. Schubert, String Quintet in C Major, first movement, bars 1–6.
Example 4.8 cites two similar 2-bar passages from the opening movement of Mahler's Tenth. In the first bar of each, the content of the opening tonic chord (F-sharp major) is expanded through neighbor-note motion. Note that while the bass is stationary in the first excerpt (as in the Schubert quintet cited in example 4.6), it acquires its own lower neighbor in the second.
Example 4.8. Mahler, Symphony no. 10, first movement, (a) bars 16–17, (b) bars 49–50.
out Strauss as a composer who could compose "neighboring notes conceived even in four voices in a most masterful way,"18 a rare compliment from a theorist who was usually railing against Strauss and his contemporaries Mahler, Reger, and Debussy.
Example 4.9. Richard Strauss, Till Eulenspiegels lustige Streiche (cited in Schenker,
Counterpoint, 192).
Example 4.10, also from Till, is a little more complex, so I have included a
speculative derivation to make explicit its origins from strict counterpoint. Start
at level a with a simple, diatonic neighbor-note progression; alter it chromatically while adding inner voices (level b); thenand here comes a more radical
stepdisplace the first element up an octave so that the entire neighbor-note
Example 4.10. Richard Strauss, Till Eulenspiegels lustige Streiche (level d cited in
Schenker, Counterpoint, 192).
configuration unfolds across two different registers (level c). Level d shows the
outcome. Note, again, that while one can connect Strauss's theme to a diatonic model as its "umbilical cord," nothing in the generative process allows us to infer the
specific play of motive, rhythm, and articulation in the foreground.
Staying with Strauss for a moment longer, example 4.11 summarizes the chordal
motion in the first 6 bars of his beautiful "Wiegenlied." Over a stationary bass (recall the Schubert excerpt in example 4.6 and the first of the Mahler excerpts in example 4.8), the upper voices move to and from a common-tone (neighboring) diminished-seventh chord. Two other details add to the interest here: the expansion of the neighboring chord in bar 4 by means of an accented passing note (see the C# in the right hand), the effect of which is to enhance the dissonance value of the moment; and the similarity of melodic configuration involving the notes A–D–C–B–A in both the accompaniment and the vocal melody, which produces a sort of motivic parallelism.
Example 4.11. Richard Strauss, "Wiegenlied," bars 1–6.
Finally, in example 4.12, I quote two brief passages from the Credo of Stravinsky's Mass, where, in spite of the extension of ideas of consonance and dissonance,
the morphology of neighbor-note formation is still evident.19
A history of musical technique based on the neighbor-note would not be confined to the very local levels just shown; it would encompass larger expanses of
music as well. Think, for example, of the slow movement of Schubert's String Quintet, whose F minor middle section neighbors the E major outer sections; or of his "Der Lindenbaum" from Winterreise, where the agitated middle section prolongs
C-natural as an upper chromatic neighbor to the dominant, B, intensifying our
19. See Harald Krebs, "The Unifying Function of Neighboring Motion in Stravinsky's Sacre du Printemps," Indiana Theory Review 8 (Spring 1987): 3–13; and Agawu, "Stravinsky's Mass and Stravinsky Analysis," Music Theory Spectrum 11 (1989): 139–163, for more on Stravinsky's use of neighbor-notes.
Example 4.12. Stravinsky, Mass, Credo, (a) bars 1–3, (b) bars 0000.
desire for the tonic. Or think of Debussy's prelude "Bruyères," whose contrapuntal structure, based on the succession of centers A–B–A, features an expanded
neighbor-note progression. Readers will have their own examples to add to the few
mentioned here. A comprehensive study would be illuminating.
Generative Analysis
Returning now to the white-note progression whose tonal tendencies we mentioned earlier, let us recast its elements as a set of ideal voices, an abstract model
that is subject to a variety of enactments (example 4.13). A compositional orientation allows us to imagine how this progression might be embellished or expanded
in order to enhance musical content. The act of embellishing, which is affined with
prolongation, is better conceptualized not as an additive process but as a divisive
one. An event is prolonged by means of certain techniques. How to delay reaching
that final dominant? How to enhance the phenomenal feel of an initial tonic? How
to intensify desire for that cadence? This bottom-up mode of thinking casts the
analyst into a composerly role and urges a discovery of the world of tonal phenomena from the inside, so to speak. Analysis, in this understanding, is not a spectator sport.
Let us join in with the most elementary of moves. The progression in example
4.14 responds to a desire to embellish sonority 2 from example 4.13. This strengthens the sonority by means of a retroactive prolongation using a lower neighbor-note, the bass note F. While the essence of the prolongation is that given in example
4.13, the form given in example 4.14 acquires greater content. Similarly, example
4.15 responds to the challenge of extending the functional domain of that same
middle element by arpeggiating down from the tonic to the predominant chord.
This represents an embellishment of a previous embellishment. Alternatively, we
may hear example 4.15 in terms of two neighbor-notes to the G, an upper (A) and
a lower (F), both prefixes. But if we think in terms of the ambitions of the initial
C, then it could be argued that the bass arpeggiation (C–A–F) embellishes C on
its way to the dominant G. These explanations begin to seem confused, but that is
precisely the point. That is, the domains of both a prospective prolongation of C
(by means of a suffix) and a retroactive prolongation of G (by means of a prefix)
overlap. Such is the nature of the continuity and internal dependence of the elements of tonal-functional material.
Example 4.14. Same with predominant sonority.
Alternative bass:
arpeggiation. This allows us to link up the first and second sonorities. This kind of
logic can be operationalized as pedagogy and taught to students. Indeed, it is what
happens in some sectors of jazz pedagogy. While it is not unheard of in the pedagogy associated with classical music, its centrality is not yet established. Yet there
is clearly an advantage to forging cooperation between student and composer,
encouraging students to take responsibility as co-composers. Although such cooperation could blossom into bigger and more original acts of creativity, the modest
purpose here is simply to find ways of inhabiting a composition by speculatively
recreating some aspects of it.
Example 4.16 is yet another recomposition of our basic model. The steps
involved may be stated as follows:
a. State the basic progression.
b. Expand the middle element by means of a predominant sonority.
c. Extend the first element by means of voice exchange between treble and
bass in order to obtain a smooth, stepwise bass line.
d. Further expand the first sonority of the voice exchange by means of a
neighbor-note prolongation.
Example 4.16. Prolonging the archetypal progression.
occasionally coincide with the compositional process, its validity does not rest
on such corroboration. Indeed, unlike the compositional process, which traces
an essentially biographical/diachronic/historical process, the logical procedure
advances a systematic fictional explanation. Music analysis is centrally concerned with such fictional procedures. As analysts, we trade in fictions. The better the fictional narrative, the more successful is the analysis. The more musical
the rules, the better is the analysis; the more musically plausible the generative
bases, the better is the analysis.
20. Simon Sechter, "Analysis of the Finale of Mozart's Symphony no. [41] in C [K. 551 ('Jupiter')]," in Music Analysis in the Nineteenth Century, vol. 1: Fugue, Form and Style, ed. Ian D. Bent (Cambridge: Cambridge University Press, 1994), 85.
Example 4.17. Sechter's generative analysis of bars 56–62 of the finale of Mozart's Symphony in C ("Jupiter").
Example 4.18. Beethoven's Piano Sonata in C Major, op. 53 (Waldstein), bars 1–13, with Czerny's harmonic groundwork.
21. Carl Czerny, "Harmonic Groundwork of Beethoven's Sonata [no. 21 in C] op. 53 (Waldstein)," quoted in Music Analysis in the Nineteenth Century, vol. 2, ed. Ian D. Bent (Cambridge: Cambridge University Press, 1994), 188–196.
groundwork with the original and thus learn something about harmonic construction and, in the context of the groundwork for the entire movement, the way in which ideas are ordered.
The progression quoted in example 4.19 is Erwin Ratz's summary of the harmonic basis of an entire composition, and a contrapuntal one at that: the first of J. S. Bach's two-part inventions.22 As a synopsis, this progression lies sufficiently close to the surface of the composition to be appreciated by even a first-time listener. It can also serve as a horizon for appreciating the tonal tendency conveyed by Bach's sixteenth-note figures. Beginning in the tonic, the invention tonicizes V, then vi, before returning home via a circle of fifths.
Example 4.19. Ratzs harmonic summary of J. S. Bach, Two-Part Invention in C
Major.
[Example 4.19 annotations: I–V; ii–vi; circle of fifths; cadence]
In generative terms, the models of tonal motion used by Bach include two kinds of cadence (a perfect cadence and an imperfect cadence) and a circle-of-fifths progression. If we number the sonorities in Ratz's synopsis as 1–12, we can define their functions as follows. The 11–12 sequence is a perfect cadence in the home key, 4–6 is also a perfect cadence but on vi (4 being the predominant sonority, 5 the dominant, and 6 the concluding tonic), 1–2 forms a half cadence (an open gesture), and 7–10 constitutes a circle-of-fifths progression, A–D–G–C.
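The functional grouping just described can be restated schematically. The following Python sketch is purely illustrative: the span boundaries and cadence labels come from the prose above, while the data layout, the function name `classify`, and the "transitional" label for the unclassified sonority are my own assumptions, not anything found in Ratz or Bach.

```python
# Illustrative restatement (not Ratz's own notation) of the functional
# grouping of the twelve sonorities in his synopsis of Bach's C Major
# Invention, as described in the prose above.

SONORITY_SPANS = {
    (1, 2): "half cadence (open gesture)",
    (4, 6): "perfect cadence on vi (predominant, dominant, tonic)",
    (7, 10): "circle-of-fifths progression, A-D-G-C",
    (11, 12): "perfect cadence in the home key",
}

def classify(sonority: int) -> str:
    """Return the functional label for a numbered sonority (1-12)."""
    for (start, end), label in SONORITY_SPANS.items():
        if start <= sonority <= end:
            return label
    # Sonority 3 falls between the labeled spans; the label is my own.
    return "transitional"

if __name__ == "__main__":
    for n in range(1, 13):
        print(n, classify(n))
```

Laid out this way, the synopsis makes plain that the whole is assembled additively from discrete cadential and prolongational units, a point taken up again at the end of this chapter.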
The journey from the white-note level to a black-note one may be illustrated
with respect to bars 15–18 of Bach's invention (example 4.20). At level a is the bare linear intervallic pattern of descending bass fifths in a two-voice representation. At level b, passing notes at the half-note level provide melodic continuity. Rhythmic interest is introduced at level c with the 4–3 suspensions in the second and fourth bars. And from here to Bach's music (level d) seems inevitable, even without our being able to predict the exact nature of the figuration.
At an even more remote level of structure lies Tovey's harmonic summary of the first movement of Schubert's C Major Quintet (example 4.21).23 He is not
concerned with the generative steps leading from this postulated background to
Schuberts multifaceted surface, but with a synopsis that incorporates a hierarchy
reflected in the durations of individual triads. The longer notes are the focal points, the shorter ones are "connexion links." Although Tovey elsewhere understands and
employs notions akin to Schenkerian prolongation, he is not, it appears, concerned
with establishing the prolongational basis of the movement as such, nor with
22. Erwin Ratz, Einführung in die musikalische Formenlehre, 3rd ed. (Vienna: Universal, 1973), 55.
23. Donald Francis Tovey, Essays and Lectures on Music (Oxford: Oxford University Press, 1949), 150.
exploring the prolongational potential of this progression. Still, the idea is suggestive and shares with notions of generation the same procreative potential.24
[Example 4.21 annotations: 4–3; Recapitulation; bIII; bVI]
Prolonged Counterpoint
The larger enterprise in which Sechter, Czerny, Ratz, Tovey, and numerous other
theorists were engaged is the speculative construction of tonal meaning, drawing on the fundamental idea that understanding always entails understanding "in terms of," and that those terms are themselves musical. Widespread and diffuse,
these collective practices have been given different names, pursued in the context
of different genres of theory making, and illustrated by different musical styles and
composers. Already in this chapter, we have spoken of diminutions, prolongation,
24. In "Schenker and the Theoretical Tradition," College Music Symposium 18 (1978): 72–96, Robert Morgan traces aspects of Schenkerian thinking in musical treatises from the Renaissance on. Finding such traces or pointing to affinities with other theoretical traditions is not meant to mute the force of Schenker's originality.
the relationship between strict counterpoint and free composition, harmonic summaries and the expansion of musical content, and generating a complex texture
from simpler premises. Can all of these analytical adventures be brought together
under a single umbrella? Although it is obviously desirable to stabilize terminology
in order to increase efficiency in communication, the fact, first, that the core idea
of this chapter is shared by many musicians and music theorists, and second, that
it can be and has been pursued from a variety of angles, is a sign of the potency of
the idea. We should welcome and even celebrate this kind of plurality, not retreat
from it as a sign of confusion.
For our purposes, then, the choice of a single rubric is in part an arbitrary and convenient gesture. I make it in order to organize the remaining analyses in this chapter, which, like previous analyses, will involve the construction of bridges from background to foreground within the limitations noted earlier. The term prolonged counterpoint is borrowed directly from Felix Salzer and Carl Schachter's Counterpoint in Composition.25 Distinguishing between elementary counterpoint and prolonged counterpoint, they understand the latter in terms of the development and expansion of fundamental principles, specifically, as "the significant and fascinating artistic elaboration of basic ideas of musical continuity and coherence." Salzer and Schachter devote several chapters to the direct application of species counterpoint in composition and then conclude by studying counterpoint in composition in a historical framework, going from Binchois to Scriabin.
More formally, we say that an element is prolonged when it extends its functional domain across a specified temporal unit. The prolonged entity controls,
stands for, is the origin of, or represents other entities. (The historically rooted
idea of diminution is relevant here.) Prolongation makes motion possible, and the
prolonged entity emergesnecessarilyas the hierarchically superior member of
a specific structural segment. And by counterpoint, we mean a musical procedure
by which two (or more) interdependent lines are coordinated (within the conventions of consonance-dissonance and rhythm) in a way that produces a third,
meaningful musical idea.
One important departure from Salzer and Schachter concerns the place of harmony in the generative process. Although the proto-structures we have isolated
so far are well-formed from a contrapuntal point of view, they are regarded as
harmonic structures as well. The harmonic impetus is regarded here as fundamental, and especially so in relation to the main repertoires studied here. To say
this is to admit that we have conflated ideas of counterpoint and harmony in these
analyses. Prolonged harmony/counterpoint would be a more accurate rubric, but
because counterpoint always entails harmony (at least in the repertoires studied in
this book), and harmony always entails counterpoint, this more literal designation
would carry some redundancy.
The basic generative method involves the construction of an underlying structure as a supplement or potential replacement (Derrida's sense of supplement)
25. Felix Salzer and Carl Schachter, Counterpoint in Composition (New York: Columbia University
Press, 1989), xiv.
for the more fully elaborated target surface. We might speak of logically prior
structures, of surface and subsurface elements, of immediate as opposed to remote
structures, and of body as distinct from dress. Analysis involves the reconstruction of prototypes, underlying structures, or contrapuntal origins of a given composition or passage. In some of the examples, I have been concerned to reveal
the generative steps explicitly. Of course, these subsurface structures are fictional
structures invented for a specific heuristic purpose, namely, to set the ruling surface into relief by giving us a speculative glimpse into its putative origins; these
rational reconstructions act as a foil against which we can begin to understand the
individual features of the composition. The aim of a generative analysis is not to
reproduce a known method of composition (based on historical or biographical
reconstruction) but to draw on fabricated structures in order to enable the analyst
to make imaginative projections about a works conditions of possibility.
Finally, let me rehearse just three of the advantages of the approach. First, by
engaging in acts of summary, synopsis, or paraphrase, we are brought into close
encounter with a composition; we are compelled to, as it were, speak music as a
language. Second, the reduced structures will encourage a fresh conceptualization
of form. (This will become clearer in the larger analyses in part 2 of this book.)
How do the little (white-note) progressions, contrapuntal structures, or building
blocks succeed one another? I believe that answers to this question will lead us to a
more complex view of form than the jelly-mold theories canonized in any number
of textbooks. Third, by comparing treatments of the same or similar contrapuntal
procedures or figures across pieces, we become even more aware of the specific
elements of design that distinguish one composer from another, one work from
another, or even one passage from another.
The purpose of an analysis is to establish the conditions of possibility for a
given composition. Although other dimensions can support such an exercise, harmony and voice leading seem to be the most important. The analytical method
consists of inspecting all of the harmonic events and explaining them as instances
of cadential or prolongational motion (prolongation here includes linear intervallic patterns). Cadences and prolongations are subject to varying degrees of disguise, so an essential part of the analysis is to show how a simple, diatonic model
is creatively embellished by the composer. The possible models are relatively few
and are postulated by the analyst for a given passage. Then follows the speculative
task of demonstrating progressive variations of those models. Varying or enriching these simple models encourages the analyst to play with simple harmonic
and contrapuntal procedures, procedures that are grammatically consistent with
the style in question. While the analysis aims at explaining the whole composition from this perspective, it is not committed to the unique ordering of models
within a particular whole. (This last point becomes an issue at a further stage of the
analysis.) The whole here is succession, not progression; the whole is a composite
assembled additively. Analytical labor is devoted to identifying those elements
cadences and prolongational spansthat constitute the composite. By focusing
on the things that went into the compositions, the arsenal of devices, we prepare
for a fuller exploration of the work as a musical work, not as a musical work. We
establish what made it possible, leaving individual listeners to tell whatever story
they wish to tell about the finished work. Following this method is akin to navigating the work in search of its enabling prolongations and cadences. Once these are
discovered, the analytical task is fulfilled. Obviously, there will always be more to
say about the work. Works endure, and as long as there are listeners and institutions, there will be additional insights about the most familiar works. But shifting
attention from the work as such to its grammatical constituents may reward analysts, who are not passive consumers of scores but active musicians who wish to
engage with musical works from the inside. This kind of analysis rewrites the work
as a series of speculative projections; it establishes the conditions of possibility for
a given work.
(3)
Between these two poles of beginning (bars 1–4) and ending (bars 13–16) is a middle comprising a dominant prolongation, our third technique.26 The first phase of this prolongation is heard in bars 5–8, where the dominant is prolonged retroactively by upper and lower neighbors in the bass (E and C, supporting vi and ii6 chords), as shown at level a. Level b expands this progression by employing a secondary dominant to enhance the move to vi and incorporating an appoggiatura to ii6. Finally, level c incorporates this progression into Strauss's 4-bar passage.
Bars 9–12 are essentially a continuation of the dominant prolongation of bars 5–8. They are harmonically sequential, as suggested in the two-voice representation at level a. Level b enriches this progression with the usual passing notes and appoggiaturas, and level c restores the metrical context.
26. Although the literal span of the V prolongation is bars 5–15, my interest here is in the rhetoric of prolongational expression, so I will maintain reference to the 4-bar segments previously isolated.
of the text in a kind of declamatory or speech mode, the first violins' memorable
song-mode melody, the orchestration, and the willful playing with time.) By establishing some connection with tonal norms, we can, I believe, appreciate better the
nature of Strausss creativity.
Schubert, "Dass sie hier gewesen," op. 59, no. 2, bars 1–18
Example 4.23 develops a similar generative account of bars 1–18 of Schubert's
song "Dass sie hier gewesen." We start at level a with a simple auxiliary cadence,
ii6–V–I. An idiomatic progression, this guiding model comprises a predominant,
dominant, tonic succession. At level b, the predominant is prefaced by its own
dominant-functioning chord, a diminished-seventh in 6/5 position, introduced to
intensify the move to the predominant. At level c, this same progression is enriched
by neighbor-notes and passing notes and then expanded: it is disposed first incompletely (bars 1–8), then completely (bars 9–16). Schubert (or the compositional
persona) in turn extends the temporal reach of the opening dissonance across 4
Example 4.23. Bridges to free composition in Schubert's "Dass sie hier gewesen,"
op. 59, no. 2, bars 1–18.
bars, incorporates appoggiaturas to the dissonance, and places the initial statements in a high register on the piano.
The chronology favored by Wintle privileges a compositional rather than an analytic or synthetic approach. He concentrates on the workbench methods of the
composer by isolating models and showing how they are composed out. These
fictional texts are meaningful units that exist in a dialectical relationship with the
segments of actual music that they model. Each is syntactically coherent but has
minimum rhetorical content. In Corelli (unlike in the Strauss and Schubert works
discussed previously), both the model and its variants may occur in the work.
The opening Grave movement of the Church Sonata, op. 3, no. 1 (shown in its
entirety in example 4.24), will serve as our demonstration piece. Example 4.25 displays the generating model at level a as a straightforward, closed progression. Following the model are a number of variants that are referable to the model. At level
b, a I6 chord substitutes for the initial root-position chord and effectively increases
27. Christopher Wintle,), 31. Methodologically similar is William Rothstein's article "Transformations of Cadential Formulae in Music by Corelli and His Successors," in Studies from the Third
International Schenker Symposium, ed. Allen Cadwallader (Hildesheim, Germany: Olms, 2006).
the mobility in the approach to the cadence. (Cadential groups originating from
I have divided the movement into 13 units or building blocks, some of them
overlapping:
1. Bars 122
2. Bars 2344
3. Bars 564
4. Bars 6473
5. Bars 7483
6. Bars 8494
7. Bars 94104
8. Bars 104114
9. Bars 114123
10. Bars 13141
11. Bars 142152
12. Bars 152171
13. Bars 172194
truncated versions; units 5 and 8 and 9, also truncated, are heard in the dominant
(5) and the relative minor (8 and 9). Unit 11 substitutes a beginning on V for the
model's I, while unit 4 closes deceptively, substituting a local vi for I (in C). These
units express the basic model in the following order of conceptual complexity:
2, 12, 13, 6, 7, 5, 9, 8, 11, 4
Fully 10 of the work's 13 units are variants of this basic model. What about the
rest? The remaining three are based on a tonally open progression shown at level
e of example 4.25. That model is almost identical to the basic model, except that
instead of closing, it reaches only the penultimate dominant; it is interrupted, in
other words. In structural-melodic terms, it unfolds a 5–2 span, not the 5–1 of the
basic model. This second model is heard as unit 1, that is, right at the outset of the
work. It is also heard as unit 10 and finally as unit 3 (in conceptual order).
With this demonstration of the affinities and affiliations between the two models and all of the segments of Corelli's Trio Sonata, op. 3, no. 1/i, the purpose of this
restricted analysis (to establish the conditions of possibility for the movement)
has been served. This does not mean that there is nothing more to say about the
work. Indeed, as we will see when we turn to the paradigmatic approach in the next
chapter, several interesting questions and issues are raised by this kind of analysis.
For example, what kind of narrative is enshrined in the piece-specific succession of
units? I have largely ignored this issue, concentrating instead on conceptual order
rather than chronology. But if we think back to the conceptual succession of units
2, 12, and 13, the units that lie closest to the basic model, then it is clear that the
strongest expressions of that model lie near the beginning of the work and at the
end. And the fact that the ending features a twofold reiteration of the model (units
12 and 13) may reinforce some of our intuitions about the function of closure.
We may also wish to pursue the matter of dispositio (form), stimulated in part
by Laurence Dreyfus's analysis of the first of Bach's two-part inventions (whose
28. Laurence Dreyfus, Bach and the Patterns of Invention (Cambridge, MA: Harvard University Press,
1996), 132.
then a two-voice version, then a version enriched with passing notes but missing
two steps of the model (E–A), and finally Bach's music (17–22).
The third pattern used by Bach is a bass pattern proceeding from I to V in
stepwise 8–7–6–5 motion. The first two lines of example 4.30 display the pattern in
thirds and tenths, respectively. Then, the straight tenths of line 2 are enlivened by
a pair of 7–6 suspensions in the middle 2 bars (line 3). From here, it is but a short
step to Bach's first 4 bars, which include a 4–3 suspension in the fourth bar (line 4).
The 4-bar passage is repeated immediately (line 5).
These three patterns are, of course, ordinary patterns in eighteenth-century
music. The 8–7–6–5 bass pattern, familiar to many from Bach's Goldberg Variations, reaches back into the seventeenth century, during which it functioned
as an emblem of lament.29 It was also subject to various forms of enrichment,
Example 4.29. Circle of fifths as model.
29. Ellen Rosand, "The Descending Tetrachord: An Emblem of Lament," Musical Quarterly 65 (1979):
346–359.
including the incorporation of chromatic steps. It functions here as a beginning, an opening out. Bach claims this space, this trajectory, not by inventing a
new bass pattern, but by expressing the familiar within a refreshed framework.
For example, he treats the medium of solo cello as both melodic and harmonic,
and thus incorporates within a compound melodic texture both conjunct and
disjunct lines. Apparent disjunction on the musical surface is rationalized by
the deeper-lying harmonic/voice-leading patterns revealed here. Bach's ingenious designs owe not a little to the security of his harmonic thinking. In bars
1–4, for example, the first and third notes of the G–F–E–D bass motion appear
as eighth-notes in metrically weak positions, but this in no way alters the harmonic meaning of the phrase. Similarly, the B-natural at the beginning of bar 17
resolves to a C an octave higher in the middle of the next bar, but the registral
Example 4.30. The 8–7–6–5 bass pattern as model.
a speculative play with their elements. Again, the matter of final chronology
(how these three models are ordered in this particular piece) is, at this stage
of the analysis, less important than simply identifying the model. If we think
of Bach as improviser, then part of our task is to understand the language of
improvisation, and this in turn consists in identifying tricks, licks, clichés, and
conventional moves. How these are ordered on a specific occasion may not ultimately matter to those who view compositions as frozen improvisations, and
to those who often allow themselves to imagine alternatives to what Bach does
here or there. On the other hand, those who are fixated on scores, who cling
to the absolute identity of a composition, who interpret its ontology literally
and strictly, and who refuse the idea of open texts will find the compositional
approach unsatisfactory or intimidating. Indeed, emphasis on the formulaic
places Bach in some great company: that of African dirge and epic singers who
similarly depend on clichés and, inevitably, of jazz musicians like Art Tatum
and Charlie Parker.
A few preliminary comments about style need to be entered here. Obviously,
Corelli and Bach utilize similarly simple models in the two compositions at which
we have looked. For example, both invest in the cadential approach from I6. Yet
there is a distinct difference between the Corelli sound and the Bach sound. How
might we specify this? One source of difference lies in the distance between model
and composition. Simply put, in Corelli, the models lie relatively close to the surface; sometimes, they constitute that surface, while at other times they can be modified by the merest of touches (an embellishment here and there) to produce that
surface. In Bach, the relationship between model (as a historical object) and composition is more varied. Some models are highly disguised while a few are hardly
hidden. The more disguised ones evince a contrapuntal depth that is not normally
found in Corelli. Bach's music is therefore heterogeneous in the way that it negotiates the surface-depth dialectic, whereas Corelli's is relatively homogeneous. It is
possible that this distinction lies behind conventional perceptions of Bach as the
greater of the two composers.
30. Joel Lester, "J. S. Bach Teaches Us How to Compose: Four Pattern Preludes of the Well-Tempered
Clavier," College Music Symposium 38 (1998): 33–46.
31. Allen Forte and Steven Gilbert, Introduction to Schenkerian Analysis (New York: Norton, 1982),
191–192.
It should be immediately obvious that the six models, while idiomatically different, are structurally related. Indeed, models 1, 3, 4, 5, and 6 express the same kind of
harmonic sense. Only model 2 differs in its harmonic gesture. From this perspective,
the prelude goes over the same harmonic ground five out of six times. And yet, the
dynamic trajectory of the prelude does not suggest circularity; indeed, the melodic
line is shaped linearly in the form of a dynamic curve. Both features are in the prelude, suggesting that its form enshrines contradictory tendencies. We will see other
examples of the interplay between the circular and the linear in the next chapter.
Mahler, Das Lied von der Erde, "Der Abschied," bars 81–95
The guiding model for this brief excerpt from Das Lied (example 4.33) is the same
basic ii–V–I progression we encountered in Schubert's "Dass sie hier gewesen."
Shown at level a in example 4.34, the progression is end-oriented in the sense that
only at the passage's conclusion does clarity emerge. While Schubert's interpretation
Example 4.32. Bridges to free composition in Chopin, Prelude in C Major, op. 28,
no. 1.
of the model allows the listener to predict an outcome (both immediately and in
the long term) at every moment, Mahler's inscription erases some of the larger
predictive tendency enshrined in the progression while intensifying other, more
local aspects. Of the three functional chords, the predictive capability of ii is the
most limited; only in retrospect are we able to interpret ii as a predominant chord.
The V chord, on the other hand, is highly charged at certain moments (especially
in bars 86 and 92), so it is possible to hear its potential destination as I/i or vi or
VI. And the closing I/i moment attains temporary stability as the resolution of
the previous V, but this stability is quickly dissipated as the movement moves on to
other tonal and thematic goals.
A possible bridge from the guiding model to Mahler's music is indicated at
level b of example 4.34. The white notes in the bass (D, G, G, and C) are of course
the roots of our ii–V–I/i progression. The enrichment in the upper voices, however,
Example 4.33. Mahler, Das Lied von der Erde, "Der Abschied," bars 81–97.
disguises the progression in more complex ways than we have seen so far, calling for a more deliberate explication. Example 4.35 offers a detailed voice-leading
graph that will enable us to examine in a bit more detail the contrapuntal means
with which Mahler sustains the guiding progression.32
Example 4.34. Bridges to free composition in Mahler, Das Lied von der Erde, "Der
Abschied," bars 81–95.
32. For further discussion of techniques of prolonged counterpoint in Mahler, see John Williamson,
"Dissonance and Middleground Prolongation in Mahler's Later Music," in Mahler Studies, ed.
Stephen Hefling (Cambridge: Cambridge University Press, 1997), 248–270; and Agawu, "Prolonged
Counterpoint in Mahler," also in Mahler Studies, 217–247.
Conclusion
We have been exploring the idea of bridges to free composition through several
stylistic contexts, including Corelli, J. S. Bach, Schubert, Chopin, Mahler, and
Richard Strauss. These bridges connect the structures of an actual composition
33. Donald Mitchell, Gustav Mahler, vol. 3: Songs and Symphonies of Life and Death (London: Faber
and Faber, 1985), 344.
with a set of models or proto-structures. In Schenker's formulation, these proto-structures stem from the world of strict counterpoint. In pursuing the Schenkerian
idea, however, I have incorporated harmony into the models in the belief that
harmonic motivation is originary in all of the examples I have discussed. Although
we retained some skepticism about whether the bridges are ultimately secure (and
this because they privilege harmony and its contrapuntal expression), we agreed
that the idea of bridges, an idea that links a complex compositional present to
its reconstructed and simplified past, is a potent and indispensable tool for tonal
understanding.
As always with analysis, value accrues from practice, and so I have encouraged
readers to play through the examples in this chapter and observe the transformational processes. While it is possible to summarize the results of the analyses (by,
for example, describing the Ursatz-like guiding ideas, their elaboration by use of
functional predominants, the role of diminutions in bringing proto-structures to
life, and the differences as well as similarities among composerly manners), it is
not necessary. A more desirable outcome would be that, having played through a
number of the constructions presented here, the student is stimulated to reciprocal
acts of reconstruction.
Still, a few issues pertaining to the general approach and its connection to the
subject of this book need to be aired. First, the generative approach, by presenting compositions as prolonged counterpoint, brings us close to speaking music
as a language. That the ability to do so is most desirable for the music analyst is,
to my mind, beyond dispute. In regard to practice, because the generative posture encourages active transformation, it is more productive than the reductive
posture, which maintains the object status of the musical work and succumbs
to instruction, to rules and regulations, and to narrow criteria of correctness.
Second and related, because the generative method is not locked into certain
paths (alternative bridges are conceivable), the analyst is called upon to exercise
improvisatory license in speculating about compositional origins. This flexibility
may not be to everyone's taste; it certainly will not be to the taste of students
who take a literal view of the process. But it seems to me that whatever we can
do nowadays to foster a measure of rigorous speculation in and through music
is desirable. What such speculation guarantees may be no more than the kinds
of edification that we associate with performing, but such benefits are a crucial
supplement to word-based analytical systems, which do not always lead us to the
music itself. Third, approaching tonal composition in general and the Romantic
repertoire in particular in this way has the potential to illuminate the style of
individual composers. We glimpsed some of this in the different ways in which
diminutions are manipulated by Corelli and Bach and by Strauss and Mahler.
Fourth and finally, working with notes in this way, embellishing progressions to
yield others and playfully reconstructing a composer's work, is an activity that
begins to acquire a hermetic feel. Although it can be aligned morphologically
with other analytical systems, it refuses graceful co-optation into other systems.
The fact that this is a peculiar kind of doing (and not a search for facts or for
traces of the social) places the rationale in the realm of advantages that accrue
from hands-on playing. There is, then, a kind of autonomy in this kind of doing
that may not sit well with authorities who demand summarizable, positivistic
results of analysis.
If music is a language, then speaking it (fluently) is essential to understanding its production processes. It has been my task in this chapter to provide some
indication of how this might work. I will continue in this vein in the next chapter,
where I take up more directly matters of repetition.
CHAPTER Five
Paradigmatic Analysis
Introduction
We began our exploration of music as discourse by noting similarities and differences between music and language (chapter 1). Six criteria were then put forward
for the analysis of Romantic music based on salient, readily understandable features; these features influence individual conceptualizations of Romantic music as
meaningful, as language (chapters 2 and 3). In the fourth chapter, I took a more
interested route to musical understanding by probing this music from the inside, so
to speak. I hypothesized compositional origins as simple models that lie behind or
in the background of more complex surface configurations. This exercise brought
into view the challenge of speaking music as a language. Although no attempt
was made to establish a set of overarching, stable, context-invariant meanings, the
meaningfulness of the generative activity was noted. Meaning is doing, and doing
in that case entailed the elaboration of models, proto-structures, or prototypes.
In this chapter, the last of our theoretical chapters (the remaining four being
case studies), we will explore an approach that has already been adumbrated in
previous analyses, namely, the paradigmatic approach. Whereas the search for
bridges from strict counterpoint to free composition (chapter 4) depends on the
analyst's familiarity with basic idioms of tonal composition, the paradigmatic
approach, in principle, minimizes (but by no means eliminates) dependency on
such knowledge in order to engender a less mediated view of the composition.
Thinking in terms of paradigms and syntagms essentially means thinking in terms
of repetition and succession. Under this regime, a composition is understood as a
succession of events (or units or segments) that are repeated, sometimes exactly,
other times inexactly. The associations between events and the nature of their succession guide our mode of meaning construction.
The terms paradigm and syntagm may feel unwieldy to some readers, but since
they possess some currency in certain corners of music-semiotic research (and
linguistic research), and since their connotations are readily specified, I will retain
them in this context too. A paradigm denotes a class of equivalent (and therefore interchangeable) objects. A syntagm denotes a chain, a succession of objects
forming a linear sequence. For example, the harmonic progression I–ii6–V–I constitutes a musical syntagm. Each member of the progression represents a class of
chords, and members of a class may substitute for one another. Instead of ii6, for
example, I may prefer IV or ii6/5, the assumption being that all three chords are
equivalent (in harmonic-functional terms and, presumably, also morphologically);
therefore, from a syntactic point of view, substituting one chord for another member of its class does not alter the meaning of the progression. (One should, however, not underestimate the impact in effect, affect, and semantic meaning that
such substitution engenders.)
Stated so simply, one can readily conclude that theorists and analysts have long
worked with implicit notions of paradigm and syntagm, even while applying different terminology. We have been semioticians all along! For example, the aspect
of Hugo Riemann's harmonic theory that understands harmonic function in terms
of three foundational chordal classes (a tonic function, a dominant function, and
a subdominant function) fosters a paradigmatic approach to analysis. Or think of
Roland Jackson's essay on the Prelude to Tristan and Isolde, which includes a summary of the prelude's leitmotivic content (see example 5.1). The six categories listed
across the top of the chart (grief and desire, glance, love philter, death, magic casket,
and deliverance-by-death) name thematic classes, while the inclusive bar numbers
listed in each column identify the spread of individual leitmotivs. Thus, glance
occurs in five passages spanning the prelude, while grief and desire occurs at the
beginning, at the climax, and at the end. The occurrences of each leitmotivic class
are directly and materially related, not based on an abstraction. While Jackson does
not describe this as a paradigmatic analysis, it is one for all intents and purposes.1
Example 5.1. Jackson's analysis of leitmotivic content in the Tristan Prelude.
1. Roland Jackson, "Leitmotive and Form in the Tristan Prelude," Music Review 36 (1975): 42–53.
Another example may be cited from Edward T. Cone's analysis of the first
movement of Stravinsky's Symphony of Psalms (example 5.2). Cone isolates recurring blocks of material and sets them out in the form of strata. This process of
stratification, well supported by what is known of Stravinsky's working methods, enables the analyst to capture difference, affinity, continuity, and discontinuity among the work's blocks of material. Unlike Jackson's leitmotifs, Cone's
paradigms are presented linearly rather than vertically. Thus, stratum A in Cone's
diagram, which consists of the recurring E minor "Psalms chord," is displayed
horizontally as one of four paradigmatic classes (labeled A, X, B, and C, respectively). Cone explains and justifies the basic criteria for his interpretation. Without entering into the details, we can see at a glance the workings of a paradigmatic
impulse.
The connotations of paradigm and syntagm are numerous, and not all writers
take special care to differentiate them. Paradigm is affiliated with model, exemplar,
archetype, template, typical instance, precedent, and so on, while syntagm is affiliated with ordering, disposing, placing things together, arranging things in sequence,
combining units, and linearity.2 My own intent here is not to narrow the semantic
field down to a specific technical usage, but to retain a broad sense of both terms
in order to convey the considerable extent to which traditional music analysis, by
investing in notions of repetition and the association between repeated units, has
always drawn implicitly on the descriptive domains of paradigm and syntagm.
In the practice of twentieth-century music analysis, the paradigmatic method
is associated with musical semiologists and has been set out didactically in various
writings by Ruwet, Lidov, Nattiez, Dunsby, Monelle, and Ayrey, among others.3 Its
most noted applications (as a method) have been to repertoires the premises of
2. See Oxford English Dictionary, 3rd ed. (Oxford: Oxford University Press, 2007).
3. For a helpful orientation to the field of musical semiotics up to the early 1990s, see Monelle, Linguistics and Semiotics in Music. Among more recent writings in English, the following two volumes
provide some indication of the range of activity in the field: Musical Semiotics in Growth, ed. Eero
Tarasti (Bloomington: Indiana University Press, 1996); and Musical Semiotics Revisited, ed. Eero
Tarasti (Helsinki: International Semiotics Institute, 2003).
whose languages have not been fully stabilized. Ruwet, for example, based one of
his demonstrations on medieval monodies, while Nattiez chose solo flute pieces
by Debussy and Varèse for extensive exegesis. Craig Ayrey and Marcel Guerstin
have pursued the Debussy repertoire further, a sign, perhaps, that this particular language or idiolect presents peculiar analytical challenges, being neither
straightforwardly tonal (like the chromatically enriched languages of Brahms,
Mahler, or Wolf) nor decidedly atonal (like Webern). A handful of attempts
aside,4 the paradigmatic method has not been applied extensively to eighteenth- and nineteenth-century music. The reason, presumably, is that these collective
repertoires come freighted with so much conventional meaning that analysis that
ignores such freight (even as a foil) would seem impoverished from the start.
By privileging repetition and its associations, the paradigmatic method fosters a less knowing stance in analysis; it encourages us to adopt a strategic naïveté
and to downplay (without pretending to be able to eliminate entirely) some of
the a priori concerns that one normally brings to the task. The questions immediately arise: which concerns should we pretend to forget, which understandings should we (temporarily) unlearn, and of which features should we feign
ignorance? Answers to these questions are to some extent a matter of choice
and context and may involve the basic parameters of tonal music: rhythm, timbre, melodic shape, form, and harmonic succession. In seeking to promote an
understanding of music as discourse, I am especially interested in dispensing,
temporarily, with aspects of conventional form. I want to place at a distance
notions such as sonata form, rondo form, binary and ternary forms, and reckon
instead (or at least initially) with the traces left by the work of repetition. It is
true that different analysts use different signifiers to establish a work's fidelity
to a particular formal template, and so the pertinence of the category form will
vary from one analytical context to the next. Nevertheless, by denying an a priori
privilege to such conventional shapes, the analyst may well hear (familiar) works
freshly. For, despite repeated claims that musical forms are not fixed but flexible,
that they are not molds into which material is poured but shapes resulting from
the tendency of the material (Tovey and Rosen often make these points), many
students still treat sonata and rondo forms as prescribed, as possessing certain
distinctive features that must be unearthed in analysis. When such features are
found, the work is regarded as normal; if the expected features are not there,
or if they are somewhat disguised, a deformation is said to have occurred.5 A
4. See, for example, Patrick McCreless, "Syntagmatics and Paradigmatics: Some Implications for the
Analysis of Chromaticism in Tonal Music," Music Theory Spectrum 13 (1991): 147–178; and Craig
Ayrey, "Universe of Particulars: Subotnik, Deconstruction, and Chopin," Music Analysis 17 (1998):
339–381.
5. I use the word deformation advisedly. See the comprehensive new sonata theory by James
Hepokoski and Warren Darcy, Elements of Sonata Theory: Norms, Types, and Deformations in the
Late-Eighteenth-Century Sonata (Oxford: Oxford University Press, 2006). For a good critique, see
Julian Horton, "Bruckner's Symphonies and Sonata Deformation Theory," Journal of the Society for
Musicology in Ireland 1 (2005–2006): 5–17.
6. For an early recognition, see Richard Cohn and Douglas Dempster, "Hierarchical Unity, Plural
Unities: Toward a Reconciliation," in Disciplining Music: Musicology and Its Canons, ed. Catherine
Bergeron and Philip Bohlman (Chicago: University of Chicago Press, 1992), 156–181; see also
Eugene Narmour, The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model (Chicago: University of Chicago Press, 1990).
Turning now to "God Save the King," let us begin by using literal pitch identity as a criterion in associating its elements. Example 5.4 sets out the result
graphically. The 16 attack points are then summarized in a paradigmatic chart to
the right of the graphic presentation. What kinds of insight does this proceeding
make possible?
As unpromising as patterns of literal pitch retention might seem, this exercise
is nevertheless valuable because it displays the work's strategy in terms of entities
that are repeated, including when, how often, and the rate at which new events
appear. (Critics who refuse to get their feet wet because they find an analytical
premise intuitively unsatisfying often miss out on certain valuable insights that
7. See Agawu, "The Challenge of Musical Semiotics," 138–160. My own analysis was preceded by that
of Jonathan Dunsby and Arnold Whittall in Music Analysis in Theory and Practice (London: Faber,
1988), 223–225. See also the analysis of pattern and grammar by David Lidov in his Elements of
Semiotics (New York: St. Martin's, 1999), 163–170.
Example 5.4. Paradigmatic structure of "God Save the King" based on pitch identity.
emerge later; until you have worked through the analytical proceeding, you don't
really know what it can accomplish.) The horizontal succession 2-3-4, for example, describes the longest chain of new events in the work. This occurs close to
the beginning of the work. The vertical succession 1-2-5 (first column) shows the
predominance of the opening pitch in these opening measures. Strategically, it is
as if we return to the opening pitch to launch a set of departures. First, 1 is presented, then 2-3-4 follow, and finally 5-6-7 complete this phase of the strategy,
where the initial members of each group (1, 2, and 5) are equivalent. (This interpretation is based strictly on the recurrences of the pitch F and not on its function
or the structure of the phrase.) A similar strategy links the group of units 1-7 to
12-16. In the latter, units 12-13 lead off, followed by 14-15, and concluded by 16,
where, again, the initiating 12, 14, and 16 are identical.
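Grouping by literal pitch identity is mechanical enough to be sketched in code. The following minimal sketch (not from the source) collects the 16 attack points into paradigmatic columns; the pitch list is my own reconstruction of the 6-bar tune in F major from descriptions elsewhere in this chapter (the opening F's, the G-E gap, the E-F-G and G-F-E fills, the A-G-F downbeats), not a transcription of Example 5.4:

```python
# Paradigmatic grouping of "God Save the King" by literal pitch identity.
# PITCHES is an assumed encoding of the 16 attack points of the 6-bar
# tune in F major, reconstructed from the prose, not from Example 5.4.
PITCHES = ["F", "F", "G", "E", "F", "G", "A", "A", "Bb",
           "A", "G", "F", "G", "F", "E", "F"]

def paradigms(pitches):
    """Map each distinct pitch, in order of first appearance,
    to the 1-based attack numbers at which it occurs."""
    chart = {}
    for attack, pitch in enumerate(pitches, start=1):
        chart.setdefault(pitch, []).append(attack)
    return chart

if __name__ == "__main__":
    for pitch, attacks in paradigms(PITCHES).items():
        print(pitch, attacks)
```

Run as a script, this prints one column per distinct pitch. On this encoding, the column of the opening pitch F comes out as attacks 1, 2, 5, 12, 14, and 16, consistent with the observations above about the succession 1-2-5 and the equivalence of 12, 14, and 16.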
170
PART I
Theory
single notes called events, but into groups of notes coinciding with notated bars.
Example 5.5 thus incorporates contour (itself an index of minimal action) as a
criterion. Units 1 and 3 rise by step, while units 4 and 5 descend through a third.
Since these 4 bars account for two-thirds of the work, it is not indefensible to let
these features guide a segmentation. Again, we can develop our narratives of both
paradigm and syntagm on this basis. As example 5.5 suggests, we begin with a pattern, follow it with a contrasting one, return to the first (only now at a higher pitch
level), continue with a new pattern, repeat this down a step, and conclude with a
final stationary unit. The contiguity of units 4-5 opposes the interruption of units
1-3 by 2 and leads us to imagine that, in the drive toward closure, saying things
again and again helps to convey the coming end.
Example 5.5. Paradigmatic structure of God Save the King based on melodic
contour.
Example 5.6. Paradigmatic structure of God Save the King based on rhythmic
patterns.
Listeners for whom rhythmic coherence considered apart from pitch behavior
is more dominant in this work will prefer a slightly different segmentation, the
one given in example 5.6. Here, the three paradigms are determined by a rhythmic
equivalence whereby units 1, 3, and 5 belong together; 2 and 4 also belong together;
while 6 stands alone. In this perspective, we are immediately struck by the alternation of values, a procedural duality that we might say defines the work's syntagmatic strategy. Whatever one's commitments are to the proper way of hearing
this composition, it is obvious from comparing and aligning examples 5.5 and 5.6
that, while they share certain features (such as the uniqueness of the concluding
F, which, incidentally, example 5.4 denies), they also emphasize different features.
Placing one's intuitions in a larger economy of paradigmatic analyses may perform
several critical functions for listeners. It may serve to reassure them of their way of
hearing, challenge that way by pointing to other possibilities, or show how remote
is their way from those revealed in paradigmatic analyses. This exercise may also
sharpen our awareness of the role of agency in listening by contextualizing the
paths we choose to follow when listening to a particular work. We may or may not
hear, or choose to hear, in the same general ways, but we can understand better
what we choose or choose not to choose while engaged in the complex activity we
call listening.
So far, we have relied on the implicit tonal-harmonic orientation of God Save
the King to determine our construction of patterns. But what if we try to analyze the work in full harmonic dress, as shown in example 5.7? When, some years
ago, David Lidov presented an analysis of a Bach chorale (O Jesulein süss) to
illustrate the paradigmatic method, he labeled the chords using figured bass and
roman numerals and then, discounting modulation, constructed a paradigmatic
scheme according to which the harmonic progressions fell into three paradigms.8
The question of whether that particular chorale represented something of an
exception in terms of its harmonic organization was not definitively answered by
Lidov. Although a certain amount of invariance usually occurs between phrases of
a chorale, the issue of what is norm and what is exception requires a larger comparative sample. For our purposes, a meaningful harmonic analysis must proceed
with idioms of harmonic usage, not with individual chords. Proceeding in this
belief means taking note of harmonic gesture. The harmonic gesture at the largest
level of God Save the King (example 5.7) begins with a 2-bar open phrase (I-V in
bars 1-2) followed by a 4-bar closed phrase (I-V-I in bars 3-end), the latter closing
deceptively at first (in bar 4) before proceeding to a satisfying authentic close (bars
5-6). Obviously, the initial I-V progression is contained within the following
I-V-I progression, and while it is theoretically possible to detach that I-V from the
subsequent I-V-I succession in order to see an internal parallelism between the
two harmonic phrases, the more palpable motivation is of an incomplete process
brought to completion, the latter unsuccessful at first attempt, and then successful
at the second. The identities of tonic and dominant chords are, in a sense, less pertinent than the hierarchy of closings that they engender, for it is this hierarchy that
conveys the meaning of the work as a harmonic stream. Reading the harmonized
God Save the King in terms of these dynamic forces downplays the role of repetition and so produces less compelling results from a paradigmatic point of view.
8. David Lidov, "Nattiez's Semiotics of Music," Canadian Journal of Research in Semiotics 5 (1977):
40. Dunsby cites this same analysis in "A Hitch Hiker's Guide to Semiotic Music Analysis," Music
Analysis 1 (1982): 237-238.
[Example 5.7 (music example): the harmonized God Save the King, with figured bass annotations.]
None of this is to suggest that the realm of the harmonic is resistant to paradigmatic structuring, for as we saw in the generative exercises in chapter 4, the existence of differentially composed harmonic idioms makes possible analyses that are
alert to repetition. The point that seems to be emerging here, however, is that the
forces that constrain harmonic expression are often so linearly charged that they
seem to undermine the vertical thinking (the intertexts) normally conveyed
in paradigmatic analysis. But let us see how the harmonic perspective of example
5.7 can be intensified by use of chromaticism and various auxiliary notes without
erasing the basic harmonic substructure. Example 5.8, based loosely on Brahms's
manner, is a reharmonization of God Save the King. For all intents and purposes,
example 5.8 is a version of example 5.7: the tune is decorated here and there in 5.8,
and the structural harmonies are more or less the same. The two versions therefore
belong to the same paradigmatic class in the same way that variations on a theme
belong in principle to a single paradigmatic class. A middleground comparison
of the two harmonized versions reinforces that identity, but what a contrast at
the foreground level. Between various chord substitutions and auxiliary interpolations, this new version (example 5.8) seems to depart in significant ways from the
previous one. It is in connection with situations like this that the paradigmatic
approach to tonal music begins to meet some difficulty. The fact that a literal surface must be understood in terms of underlying motivations means that analysis
based on internal relations without recourse to prior (or outside) texts will likely
miss an important structural and expressive dimension. One cannot, in other
words, approach the kind of musical expression contained in example 5.8 with the
naïveté of the empiricist because what is heard is always already embedded in what
is functional but unsounded. We will confront this challenge in the coming analyses of Mozart and Beethoven, where some reliance on conventional idioms will
be invoked in order to render the surface accessible. Analysis must find a way, even
at this fundamental level, to redefine the object of analysis in a way that makes this
background accessible.
Returning to the unharmonized version of God Save the King (example 5.3),
we might explore still other ways of interpreting it as a bundle of repetitions. For
example, a listener struck by the interval of a descending minor third between bars
1 and 2 might take this as an indication of potential structural importance. The gap
is between the notes G and E, the two adjacencies to the tonic, F; in other words,
the gap encompasses the gateways to the tonic. It is also unique in the piece insofar
as all other movement is stepwise. As a gap, it demands in principle to be filled, and
this is precisely what happens in the penultimate bar with the G-F-E succession.
But we have encountered these notes before, specifically in bar 2, immediately after
the gap was announced, where the fill proceeds in the opposite direction, E-F-G.
In sum, a descending gap (between bars 1 and 2) is followed by an ascending fill
(bar 2) and complemented by a descending fill (bar 5).
In this light, other relations become apparent. We hear bars 4 and 5 in close
relation because, although rhythmically different, bar 5 is a sequence of bar 4 down
a step. We can thus relate bar 4, a descending major third, to the minor third processes mentioned previously. And, if we look beyond the immediate level, we see
that, taking the first (metrically accented) beats only of bars 4, 5, and 6, we have a
descending A-G-F progression, the same one that we heard on a more local level
within bar 4. The larger progression in bars 4-6 therefore has nested within it a
smaller version of its structural self. We might hear a similar expanded progression
across bars 1-3, this time in ascending order: F at the beginning of bar 1, G at the
end of bar 2, and A at the beginning of bar 3. This mode of thinking, unlike the
ones explored in example 5.4, depends on patterns and diminutions. The literalism that marked early demonstrations of the paradigmatic method by Ruwet and
Nattiez was unavoidable in part because the objects of analytical attention seemed
to be made from less familiar musical languages. But for a heavily freighted tonal
language like that of God Save the King, we need an expanded basis for relating
phenomena.
These remarks about God Save the King will, I hope, have suggested possibilities for the construction of musical meaning based on criteria such as literal pitch
retention, association of pitch-based and rhythm-based patterns, diatonic and
chromatic harmonization, and an intervallic discourse determined by gaps and
fills, or some combination of these. But the monodic context of God Save the
King is relatively restricted, and some may wonder how we might proceed from
it to a more complex musical work. Answers to this question may be found in the
more methodologically oriented studies by Ruwet, Nattiez, Vaccaro, and Ayrey.9
Because my own interests here are more directly analytical, I will forgo discussion
9. Nicholas Ruwet, "Methods of Analysis in Musicology," trans. Mark Everist, Music Analysis 6 (1987):
11-36; Jean-Jacques Nattiez, "Varèse's Density 21.5: A Study in Semiological Analysis," trans. Anna
Barry, Music Analysis 1 (1982): 243-340; Jean-Michel Vaccaro, "Proposition d'une analyse pour une
polyphonie vocale du XVIe siècle," Revue de musicologie 61 (1975): 35-58; and Craig Ayrey, "Debussy's
Significant Connections: Metaphor and Metonymy in Analytical Method," in Theory, Analysis and
Meaning in Music, ed. Anthony Pople (Cambridge: Cambridge University Press, 1994), 127-151.
10. A. B. Marx, Die Lehre von der musikalischen Komposition, praktisch-theoretisch (Leipzig: Breitkopf
& Härtel, 1837-1847), quoted by Schenker in Der Tonwille, 66.
11. See Schenker, Der Tonwille, 58, figure 2. There are slight differences between my representation and
Schenker's. I have kept note values in treble and bass throughout, retained Mozart's literal registers,
added a continuous figured bass between the two staves, and dispensed with roman numerals.
of basic motion. The first is an open (I-V) progression, such as we have in the first 4
bars of the movement. (See unit 1 and its return as unit 19; see also units 5, 18, and
21. Unit 16 is also open since it expresses a V prolongation.) Column B represents
a closed (I-V-I) progression, such as we have in bars 5-8 of the movement (unit
2), although it, too, can be modified to begin on I6, ii, or IV. Column B is by far the
most populated paradigm of the three. Column C, the least populated, expresses a
linear intervallic pattern (10-10-10-10) enlivened by 9-8 suspensions. It occurs
only once, near the end of the so-called development section, the point of furthest
remove (in Ratner's terminology).12 If, as stated earlier, the purpose of the analysis
is to establish the conditions of possibility for this particular movement, then once
we have figured out how to improvise these three kinds of tonal expression (an
open one, a closed one, and a sequential, transitional progression) using a variety of stylistic resources, the essential analytical task has been fulfilled. The rest is
(harmless) interpretation according to the analyst's chosen plot.
It is possible to tell many stories about this movement on the basis of the demonstration in examples 5.9 and 5.10. For example, virtually all of the units in the first
reprise except unit 1 are column B units. Because each unit is notionally complete
(that is, it attains syntactic closure), the narrative of this first reprise may be characterized as a succession of equivalent states or small worlds. We might say therefore
that there is something circular about this first reprise and that this circular tendency,
operative on a relatively local level, counters the larger, linear dynamic conferred by
the modulatory obligations of what is, after all, a sonata-form movement.
Other stories may be fashioned around parallelism of procedure. For example, the
second reprise begins with two statements of unit 2 (units 13 and 14), now in the dominant. But this same unit 2 was heard at the end of the first reprise (units 11 and 12).
The music after the bar line thus expands the temporal scope of the music before the
bar line. Since, however, units 11 and 12 functioned as an ending (of the first reprise),
while 1314 function as a beginning (of the second reprise), their sameness at this level
reminds us of the reciprocal relationship between endings and beginnings.
We might also note the uniqueness of unit 17, which marks a turning point
in the movement. Heard by itself, it is a classic transitional unit; it denies both
the choreographed incompletion of column A units and the closed nature of the
widespread column B units. It could be argued that strategic turning points as
represented by unit 17 are most effective if they are not duplicated anywhere else
in the movement. This interpretation would thus encourage us to hear the movement as a whole in terms of a single trajectory that reaches a turning point in unit
17. Finally, we might note the prospect of an adumbrated recapitulation in the
succession of units 18 and 19. While the microrhythmic activity on the surface of
18 forms part of the sense of culmination reached in the development, and while
the onset of unit 19 is an unequivocal thematic reprise, the fact that the two units
belong to the same paradigm suggests that they are versions of each other. No
doubt, the thematic power of unit 19 establishes its signifying function within the
form, but we might also hear unit 18 as stealing some of unit 19's thunder.
12. Leonard G. Ratner, The Beethoven String Quartets: Compositional Strategies and Rhetoric (Stanford,
CA: Stanford Bookstore, 1995), 332.
Example 5.9. Outer voice reduction (after Schenker) of Mozart's Piano Sonata in
A Minor, K. 310, second movement.
Readers will have their own stories to tell about this movement, but I hope
that the paths opened here will prove helpful for other analytical adventures. I
have suggested that, if we agree that the slow movement of Mozart's K. 310 can
be recast as an assembly of little progressions (mostly closed, independently
meaningful expressions of harmonic-contrapuntal order), then the inner form
of the movement might be characterized as a succession of autonomous or
semiautonomous small worlds. Units follow one another, but they are not necessarily continuous with one another. If anything rules in this Mozart movement, it is discontinuous succession. Paradigmatic analysis helps to convey that
quality.
CHAPTER 5
Paradigmatic Analysis
181
The harmonic means are radically simple, and the reader is invited to realize them at the piano using example 5.11 as a guide. Start by playing an archetypal
closed progression (1), expand it by incorporating a predominant sonority that
intensifies the motion to the dominant (2), truncate the progression by withholding the closing tonic while tonicizing the dominant (thus creating a desire for resolution; 3, 4), repeat the truncated progression in transposed form (5), and restore
closure using the conventional progression previously used in 2 (6).
With these resources, we are ready to improvise Chopins prelude as a harmonic stream (example 5.12). The prelude as a whole breathes in eight (often overlapping) units or segments. These are summarized below (successive bar numbers
are listed to convey the relative sizes of units):
Segment   Bars
1         1 2 3 4 5
2         5 6 7 8
3         9 10 11 12
4         12 13 14 15 16 17 18 19 20
5         20 21 22
6         22 23 24
7         24 25 26 27 28
8         29 30 31 32 33 34 35 36 37 38
We may call this the chronological order in which events unfold. But what if
we order the segments according to a logical order? Logical order is qualitative; it
is not given, but has to be postulated on the basis of certain criteria. Let us privilege
the harmonic domain and allow the paradigmatic impulse to dictate such logic. We
begin with closed progressions, placing the most expansive ones in front of the less
expansive ones. Units 8 and 4 are the most expansive, while units 1 and 3 are less
expansive. Next is a unit that attains closure but is not bounded by the tonic chord,
as the previous four units are. This is unit 7. Finally, we add units that, like 7, reach
closure but on other degrees of the scale. Two close on the dominant (2 and 5),
and one closes on the subdominant (6). The logical order thus begins with closed,
bounded units in the tonic, continues with closed unbounded units in the tonic, and
finishes with closed unbounded units on other scale degrees (first dominant, then
subdominant). In short, where the chronological form is 1 2 3 4 5 6 7 8, the logical
form is 8 4 1 3 7 2 5 6, which, following precedent, may be written as follows:
Segment   Bars
8         29 30 31 32 33 34 35 36 37 38
4         12 13 14 15 16 17 18 19 20
1         1 2 3 4 5
3         9 10 11 12
7         24 25 26 27 28
2         5 6 7 8
5         20 21 22
6         22 23 24
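The logical ordering just described amounts to a sort by explicit criteria. The following minimal sketch (my own encoding, not notation from the source) ranks each segment 0 if closed and bounded by the tonic, 1 if closed but unbounded in the tonic, 2 if it closes on the dominant, and 3 if it closes on the subdominant; within a rank, more expansive segments come first:

```python
# Recovering the logical order of the eight segments of Chopin's Prelude
# op. 28, no. 13 from the chronological order. The "rank" values encode
# the criteria stated in the text (an illustrative assumption, not the
# author's notation): 0 = closed, bounded by the tonic; 1 = closed,
# unbounded, in the tonic; 2 = closes on V; 3 = closes on IV.
segments = {
    1: {"bars": [1, 2, 3, 4, 5], "rank": 0},
    2: {"bars": [5, 6, 7, 8], "rank": 2},
    3: {"bars": [9, 10, 11, 12], "rank": 0},
    4: {"bars": list(range(12, 21)), "rank": 0},
    5: {"bars": [20, 21, 22], "rank": 2},
    6: {"bars": [22, 23, 24], "rank": 3},
    7: {"bars": list(range(24, 29)), "rank": 1},
    8: {"bars": list(range(29, 39)), "rank": 0},
}

# Sort by rank first; within a rank, larger (more expansive) units first.
logical = sorted(segments,
                 key=lambda s: (segments[s]["rank"], -len(segments[s]["bars"])))
# logical == [8, 4, 1, 3, 7, 2, 5, 6]
```

The sort reproduces the logical form 8 4 1 3 7 2 5 6 given above, which makes explicit that the ordering is fully determined once the rank criteria and unit sizes are fixed.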
Example 5.12. Chopin, Prelude in F-sharp Major, op. 28, no. 13.
Many stories can be told about this prelude. First, if we ignore the global tonicizations of dominant and subdominant, then all of the units in the prelude are
closed. In that sense, a paradigmatic analysis shows a single paradigm. This stacking of the vertical dimension speaks to a certain economy of means employed
by Chopin. It also suggests the same kind of aggregative tendency we isolated in
Mozart; in colloquial terms, the same thing is said again and again eight times.
Second, the most expansive closed progressions are placed (strategically, we'll
have to say) at the end of the prelude and in the middle (units 8 and 4, respectively). Unit 8 provides the final culmination while unit 4 serves as a preliminary
point of culmination. Third, the opening unit is in fact only a less elaborate form of
units 8 and 4; this means that the prelude begins with a closed progression of modest dimensions, which it then repeats (3) and expands not once but twice (4 and 8).
This speaks to a cumulative or organic strategy.
Again, it should be acknowledged that we can order the logic differently. For
example, we might start not with the most expansive but simply with a well-formed
cadence, then go to the more expansive ones. Or, we might treat the tonicizations
of V and IV in primary reference to the cadences in the home key. Whatever order
analysts settle on, they are obliged to explain its basis. We might also note the
affinities between generative analysis (as practiced in the previous chapter) and
[Music example (two-voice reduction, with figured bass).]
Although some of them may be self-evident, the criteria for dividing up the
movement into meaningful units (sense units, utterances, ideas, phrases, periods)
require comment. The challenge of segmentation remains at the heart of many
analytical endeavors, and while it seems more acute in relation to twentieth-century works (as Christopher Hasty and others have shown),13 it is by no means
straightforward in Beethoven. Following the precedent set in the Mozart and
13. Christopher Hasty, "Segmentation and Process in Post-Tonal Music," Music Theory Spectrum 3
(1981): 54-73.
Chopin analyses, I have opted for the most basic of conventional models of tonal
motion based on the I-V-I nexus as the principal criterion. Some support for this
proceeding may be garnered from Adorno:
To understand Beethoven means to understand tonality. It is fundamental to his
music not only as its material but as its principle, its essence: his music utters the
secret of tonality; the limitations set by tonality are his own, and at the same time
the driving force of his productivity.14
Whatever else this might mean, it engenders an idea that seems tautological at first
sight, but actually fuses a historical insight with a systematic one, namely, foundations as essences. The implication for Beethoven analysis is that the most fundamental modes and strategies of tonal definition constitute (part of) the essence of
his music. The analytical task, then, is to discover or uncover ways in which these
essences are thematized.
Based on a normative principle of departure and return, the fundamental
mode of tonal definition may be glossed as a I-V-I harmonic progression and subject to a variety of modes of expression and transformation.15 I could, for example,
tonicize V, prolong it by introducing a predominant element, or prolong it even
further by tonicizing the predominant. More radically, I might truncate the model
by deleting part of its frame. Thus, instead of the complete I-V-I chain, I may opt
for an auxiliary V-I progression, in which form it still attains closure but relinquishes the security of a beginning tonic. Or, I may prefer a I-V progression, in
understood in reference to the larger, complete progression.
Stated so abstractly, these procedures sound obvious, mechanical, and uninspiring. Indeed, they belong at the most elementary level of tonal expression. And
yet, without some sense of what is possible at this level, one cannot properly appreciate the ecology (or economy, or horizon) that sustains a given composition. If
music is language, a language based in conventional codes, then it is trivial to have
to point out that, by 1800, Beethoven the pianist, improviser, composer, student
of counterpoint, and copier of other people's scores, thoroughly spoke a language
enabled by what might be called the first utterances of the tonal system: I-V-I
progressions and what they make possible.
What, then, are the specific tokens of these first utterances of the tonal system,
and how are they composed out in the course of the first movement of op. 18, no.
3? Example 5.14 provides a broad orientation to the movement by summarizing
a segmentation of the movement into 40 sense units (left column) and indicating
some of the topical references that might be inferred from the sounding surfaces
14. Theodor Adorno, Beethoven: The Philosophy of Music, trans. Edmund Jephcott, ed. Rolf Tiedemann
(Stanford, CA: Stanford University Press, 1998), 49.
15. On tonal models as determinants of style, see Wintle, "Corelli's Tonal Models," 29-69; and Agawu.
Example 5.14. Structural units and topical references in Beethoven, op. 18, no. 3/i.
Unit   Bars      Topical references
1      1-10      alla breve, galant
2      11-27     alla breve, cadenza, messanza figures
3      27-31     alla breve, stretto, galant
4      31-35     bourrée
5      36-39     bourrée
6      40-43     bourrée
7      43-45     bourrée
8      45-57     Sturm und Drang, brilliant style, march
9      57-67     brilliant style, fantasia, march
10     68-71     march, alla zoppa
11     72-75     march, alla zoppa
12     76-82     fanfare
13     82-90     fanfare
14     90-94     musette
15     94-103    musette
16     104-110   fantasia
17     108-116   alla breve
18     116-122   alla breve, fantasia
19     123-126   bourrée
20     127-128   bourrée
21     129-132   bourrée
22     132-134   bourrée
23     134-138   alla breve, furioso, Sturm und Drang
24     138-142   alla breve, furioso, Sturm und Drang
25     143-147   alla breve, furioso, Sturm und Drang
26     147-156   Sturm und Drang, concitato style
27     158-188   alla breve, ricercar style, march
28     188-198   brilliant style, march
29     199-202   march, alla zoppa
30     203-206   march, alla zoppa
31     207-213   fanfare
32     213-221   fanfare
33     221-225   musette
34     225-234   musette
35     235-241   fantasia
36     239-247   alla breve
37     247-250   march, alla zoppa
38     251-255   march, alla zoppa
39     255-259   alla breve, stretto, messanza figures
40     259-269   alla breve
Example 5.15. Paradigmatic display of all 40 units in Beethoven, op. 18, no. 3/i.
[Example 5.15 chart: the 40 units distributed among models 1a, 1b, 1c, 1d, 2, and 3, and grouped into exposition, development (units 19-27), recapitulation (units 28-35), and coda (units 36-40).]
(right column). Example 5.15 lists the same 40 units only now in reference to a
series of tonal models and a conventional sonata-form outline. As will be immediately clear from the examples that follow, each of the 40 units belongs to one of
three models, which I have labeled 1 (with subdivisions 1a, 1b, 1c, 1d), 2, and 3.
Model 1 expresses a closed I-V-I progression, while models 2 and 3 express open
and closed I-V and V-I progressions, respectively. Let us explore the tonal models
in detail.16
Model 1a, shown in abstract in example 5.16 and concretely in example 5.17,
features a straightforward closed progression, I-ii6/5-V-I, perhaps the most basic
of tonal progressions. Example 5.17 aligns all seven units affiliated with the model.
Its first occurrence is in the tonic, D major, as unit 5 in a portion of the movement
that some have labeled a bridge passage (bars 36ff.). The melodic profile of this
unit (ending on scale-degree 3 rather than 1) confers on it a degree of openness appropriate for a transitional function, this despite the closed harmonic form.
Conflicts like this, between harmonic tendency (closed) and melodic tendency
(open), are important sources of expressive effect in this style.
Example 5.16. Origins of model 1a units.
Other instantiations of the model are units 6 (in B minor), 19 (in B-flat major),
and 21 (in G minor), all of which display the same thematic profile. Units 7 in A
minor and 22 in G minor abbreviate the progression slightly, beginning on first-inversion rather than root-position chords. And unit 20 in B-flat major withholds
the normative continuation of the unit after the tonic extension of the model's first
2 bars.
All seven units (5, 6, 7, 19, 20, 21, 22) belong to a single paradigmatic class.
They are harmonically equivalent, tonally disparate (a factor bracketed at this
level), thematically affiliated, and melodically similar. Of course, they occur at
different moments in this sonata-form movement. Unit 5, for example, begins
the transition to the second key, but it also is imbued with a sense of closing by
virtue of its harmony. Unit 6 signals tonal departure, while unit 7 intensifies the
sense of departure with a shift from the B minor of the previous unit to A minor.
These units thus perform different syntagmatic functions while sharing a network
of (harmonic) features. Ultimately, it is the tension between their equivalence
16. Alternative segmentations of the movement are, of course, possible, even desirable. My interest
here, however, is in the plausibility of the units isolated, not (primarily) in the relative merits of
competing segmentations. Whenever my chosen criteria seem especially fragile, however, I offer
some explanation.
17. Eduard Hanslick, Vom Musikalisch-Schönen [On the Musically Beautiful], trans. Martin Cooper, in
Music in European Thought 1851-1912, ed. Bojan Bujic (Cambridge: Cambridge University Press,
1988), 19; Roman Jakobson, "Language in Relation to Other Communication Systems," in Jakobson, Selected Writings, vol. 2 (The Hague: Mouton, 1971), 704-705, adapted in Agawu, Playing with
Signs, 51-79; and Lawrence M. Zbikowski, "Musical Communication in the Eighteenth Century,"
in Communication in Eighteenth-Century Music, ed. Danuta Mirka and Kofi Agawu (Cambridge:
Cambridge University Press, 2008).
authentic cadence. Example 5.19 shows six passages totaling 42 bars that express this
model. First is unit 3, followed immediately by a registral variant, unit 4. Notice how
unit 33, which recapitulates unit 14, incorporates the flattened seventh within a walking bass pattern. The thematic profile of model 1b is more varied than that of model
1a, all of whose units belong topically to the sphere of the bourrée. Here, units 3 and 4
are alla breve material (that is to say, they occupy an ecclesiastical register), whereas
the others (15, 34, 14, and 33) invoke the galant style with an initial musette inflection
(this signals a secular register). Note also that units 15 and 34 exceed the normative
5-bar length of the other model 1b units. This is because I have incorporated the reiterated cadential figures that emphasize the dominant key, A major, in the exposition
(unit 15) and its rhyme, D major, in the recapitulation (unit 34). Again, remember
that all model 1b units are harmonically equivalent but thematically differentiated.
With model 1c (whose origins are displayed in example 5.20), we come,
again, as it turns out, to the central thematic material of the movement, the opening 10-bar alla breve and its subsequent appearances. Like models 1a and 1b, its
units (gathered in example 5.21) might be heard as tonally closed (all except 17
and 36, which end in deceptive cadences that, however, may be understood as
substitutions for authentic cadences), but its journey is a little more protracted.
Moreover, the model begins with a prefix (shown in example 5.20) that does both
thematic and harmonic work. Thematic work is evident in the recurrence of the
rising minor seventh (sometimes in arpeggiated form) throughout the movement;
harmonic work is evident in the tonicization of individual scale degrees that the
seventh facilitates as part of a dominant-seventh sonority.
Example 5.20. Origins of model 1c units.
erased, by the overlap between each unit and its immediate successor; this also
subtracts from the impression of closure within each unit. Indeed, the mobility of
model 1d units contributes not a little to the movement's organicism.
Example 5.22. Origins of model 1d units.
[Examples 5.22 and 5.23 (model 1d units: music notation with figured bass, unit and bar numbers) not reproduced in this extraction.]
The most disguised of the units in example 5.23 is unit 8, a Sturm und Drang
passage within the second key area. In example 5.24, I read it as a I–V–I progression, the opening I being minor, its closing counterpart being in major, and the
long dominant being prolonged by a German sixth chord. Harmonically, example 5.24 is plausible, but it is the rhetorically marked threefold repetition of the
augmented sixth-to-dominant progression that contributes to the disguise. Note,
also, that in the second half of unit 8, the treble arpeggiates up a minor seventh
before beginning its descent to the tonic. The association with the opening of unit
1 (model 1c) is thus reinforced.
PART I
Theory
[Example 5.24 (reduction of unit 8, with a German sixth prolonging the dominant) not reproduced in this extraction.]
In an important sense, all of the units gathered under models 1a, 1b, 1c, and
1d do more or less the same harmonic (but not tonal) work. A certain degree
of redundancy results, and this in turn contributes a feeling of circularity to the
formal process. It is as if these 26 units, which comprise 196 out of a total 269
bars (72.86% of the movement), offer a consistent succession of small worlds. The
process is additive and circular. The self-containment and potential autonomy of
the building blocks counters the normative goal-oriented dynamic of the sonata
form.
Models 2 and 3 express I–V and V–I progressions, respectively (see the
abstracts in examples 5.25 and 5.28). These progressions are not syntactically
defective; rather, they are notionally incomplete. Playing through the model 2
units displayed in example 5.26 immediately makes evident a progression from a
local tonic to its dominant (units 10, 11, 29, 30, 37, 26), or from a local tonic to a
nontonic other (units 9 and 28). Unit 10, which heads model 2 units, is a 4-bar
antecedent phrase, a question that might have been answered by a 4-bar consequent. The answer, however, is a transposition of this same unit from C major
into its relative minor, A minor. Thus, unit 11 simultaneously embodies contradictory qualities of question and answer at different levels of structure. Units 29 and
30 replay this drama exactly, now in F major and D minor, respectively, while unit
37, initiator of the coda, is answered by a unit that belongs to model 3 (unit 38).
Example 5.25. Origins of model 2 units.
[Example 5.25 (music notation) not reproduced in this extraction.]
Other model 2 units are only tenuously related to the model. Unit 26, for example, whose abstract is given as example 5.27, is the striking passage at the end of the
development that closes on a C-sharp major chord (where C-sharp functions as
V of F-sharp minor); it marks the point of furthest remove. Although it is shown
as a I–V progression, its beginning (bars 147–148) is less strongly articulated as a
phrase-structural beginning and more a part of an already unfolding process. Nevertheless, it is possible to extract a I–V progression from it, the V prolonged by an
augmented-sixth chord. Units 9 and 28 are I–V progressions only in a metaphorical sense; essentially, they both resist synopsis. Unit 9 begins in a stable A major
with a bass arpeggiation that dissipates its dominant-seventh over 4 bars and then,
[Examples 5.26 (model 2 units) and 5.27 (abstract of unit 26) not reproduced in this extraction.]
its own principles. The seams of Beethoven's craft show in these moments; we are
reminded of a speaking subject. (Note, incidentally, that evidence for narrativity in
Beethoven is typically lodged in transitional sections, where the utterance is often
prose-like, as in units 9 and 28, rather than stanzaic or poetic.)
The third and last of our models reverses the process in model 2, progressing from V (sometimes preceded by a predominant sonority) to I. The conceptual
origins displayed in example 5.28 show a passing seventh over a dominant and a
chromaticization of that motion. Stated so abstractly, the process sounds straightforward enough, but listening to the transpositionally equivalent units 16 and 35,
for example (included in the display of model 3 units in example 5.29), suggests
that the effect may sometimes be complex. An underlying V–I progression is hearable in retrospect, but because the individual chords sound one per bar, followed
in each case by loud rests, the listeners ability to subsume the entire succession
under a single prolongational span becomes difficult. Ratner refers to a series of
peremptory, disembodied chords, a play of cut against flow. These chords represent
total disorientation.18 This may be slightly overstated, but he is surely right to draw
attention to their effect. Unit 23 features a normative predominant-dominant-tonic
progression in A minor, which becomes the model for a sequence encompassing
units 24 and 25. The latter two feature 2 and 3 bars of predominant activity, before
closing on a B minor and an F-sharp minor chord, respectively.
To summarize: the first movement of op. 18, no. 3, is made from a number of
simple two-voice progressions common in the eighteenth century and embodying
the basic utterances of the tonal system. By our segmentation, there are essentially
three of these models: I–V–I, I–V, and V–I. For the complete I–V–I progression, we
identified four variants: model 1a with seven units, model 1b with six units, model
1c with seven units, and model 1d with six units. For the open or I–V model 2, we identified eight units, while the closing but not closed V–I model 3 has six units. We
have thus heard the entire first movement of op. 18, no. 3; there are no remainders.
We have heard it, however, not in Beethoven's temporal or real-time order but in
a new conceptual order.
If we refer back to example 5.15, we see the succession of models at a glance
and some of the patterns formed by the 40 units. I have also indicated affinities with the sonata form in order to facilitate an assessment of the relationship
between the paradigmatic process conveyed by repetition and the linear or syntagmatic process implicit in the sonata form's normative narrative. These data can be
interpreted in different ways.
Example 5.28. Origins of model 3 units.
[Examples 5.28 and 5.29 (model 3 units: music notation with figured bass) not reproduced in this extraction.]
For example, model 1c, the main alla breve material,
begins and ends the movement and also marks the onset of both the development
and the recapitulation. It acts as a fulcrum; its sevenfold occurrence may even
be read as gesturing toward rondo form. Model 1a, the little bourrée tune, occurs
only in clusters (5, 6, 7 and 19, 20, 21, 22). In this, it resembles model 2, the opening-out I–V progression. Model 1b, the alla breve in stretto, is totally absent from the
development section. Model 3, an expanded cadence, functions like 1c in that it
occurs roughly equidistantly in the exposition, development, and recapitulation.
On another level, one could argue that the essential procedure of the movement is
that of variation since, in a deep sense, models 1a, 1b, 1c, and 1d are variants of the
same basic harmonic-contrapuntal progression. This is not a theme and variations
in the sense in which a profiled theme provides the basis for thematic, harmonic,
modal, and figurative exploration. Rather, variation technique is used within the
more evident sonata-form process. Obviously, then, the paradigmatic analysis can
support several different narratives.
Strophe endings are less straightforward and altogether more interesting. Strophe 1 ends in E-flat minor, the modal opposite of the E-flat major that began the
song and a direct correlative of the change in sentiment from basking in nature
to admitting sadness. Brahms distributes the elements of closure so as to create a
wonderfully fluid ending which doubles in function as interlude (a pause for the
singer to take a breath and for the listener to reflect just a little on the message of
this unfolding tone poem) and as prelude to the next strophe. The singer's 3–2–1
(bars 12–13) sounds over a dominant pedal, so the downbeat of bar 13 attains
melodic closure but limited harmonic closure; the actual harmonic cadence comes
a bar later (in the second half of bar 14). This displacement is possible in part
because bars 13–14 replay 12, only now in a modal transformation and with a
more decisive ending. In this way, the work of closing is achieved even as Brahms
lets the piano announce a new beginning by using the familiar material of the
songs inaugural motif.
Closure in strophe 2 is achieved in the wake of a melodramatic high point on
Träne (tears; bars 29–30). The singer stops on scale-degree 5 (E-flat in bar 31), leaving
the pianist to complete the utterance by domesticating the 6/4 into a conventional
cadential group. The 6/4 chord in bar 31 is duly followed by a dominant-seventh in
32, complete with a 4–3 suspension, but the expected tonic resolution is withheld.
Unlike the close at the end of strophe 1 (bars 12–14), this one has a heightened
indexical quality, pointing irresistibly to the beginning of another strophe.
200
PART I
Theory
The last of our closings also functions globally for the song as a whole; not
surprisingly, it is also the most elaborate of the three. The phrase containing the
previous high point on Träne (27–31) is repeated (as 39–43) and is immediately
superseded by a rhetorically more potent high point on the repeated word heisser
in bar 45, an F-flat chord that functions as a Neapolitan in the key of E-flat major.
And, as in strophe 1, but this time more deliberately, the singer's closing line
descends to the tonic (bars 47–48), achieving both harmonic and melodic closure
on the downbeat of bar 48. We are assured that this is indeed the global close. Now,
the motif that opened the work rises from the piano part (bar 48) through two
octaves, ending on a high G in the last bar. The effect of this twofold statement of
the motif is itself twofold. First, the motif here sounds like an echo, a remembered
song. Second, in traversing such a vast registral expanse in these closing 4 bars, the
motif may be heard echoing the active (vocal) registers of the song. Since, however,
it reaches a pitch that the voice never managed to attain (the singer got as far as
F-flat in bars 21–22 and again in 45), this postlude may also be heard carrying on
and concluding the singer's business by placing its destination in the beyond, a
realm in which meaning is pure, direct, and secure because it lacks the worldliness
of words. As always with Brahms, then, there is nothing perfunctory about the
pianists last word.
The fact that the B-flat–E-flat–F–G motif was heard at the beginning of the song,
again in the middle, and finally at both the beginning and the end of the ending, fosters an association among beginnings, middles, and endings and suggests
a circularity in the formal process. It would not be entirely accurate to describe
the form of Die Mainacht as circular, however. More accurate would be to hear
competing tendencies in Brahms's material. Strophic song normatively acquires its
dynamic sense from the force of repetition, which is negotiated differently by different listeners and performers. Strophic song is ultimately a paradoxical experience, for the repetition of strophes confers a static quality insofar as it denies the
possibility of change. Some listeners deal with this denial by adopting a teleological mindset and latching on to whatever narratives the song text makes possible.
A modified strophic form, on the other hand, speaks to the complexity of strophic
form by undercomplicating it, building into it a more obvious dynamic process. In
Die Mainacht, the high-point scheme allows the imposition of such a narrative
curve: D-flat in strophe 1, first F-flat and then, more properly, E-flat in strophe
2, and a frontal F-flat in strophe 3.
I have strayed, of course, into the poetics of song by speculating on the meanings made possible through repetition and strophic organization. Without forcing
the point, I hope nevertheless that some of this digression will have reinforced a
point made earlier, namely, that there is a natural affinity between ordinary music
analysis and so-called paradigmatic analysis insofar as both are concerned on
some primal level with the repetition, variation, or affinity among units. Still, there
is much more to discover about the song from a detailed study of its melodic content. Let us see how a more explicitly paradigmatic approach can illuminate the
melodic discourse of Die Mainacht.
Example 5.31 sets out the melodic content of the entire song in 20 segments (see
the circled units), each belonging to one of two paradigms, A and B. Example 5.32
then rearranges the contents of 5.31 to make explicit some of the derivations and,
in the process, to convey a sense of the relative distance between variants. Example
5.33 summarizes the two main generative ideas in two columns, A and B, making
it possible to see at a glance the distribution of the songs units. One clarification is
necessary: the mode of relating material in example 5.32 is internally referential.
That is, the generating motifs are in the piece, not abstractions based on underlying contrapuntal models originating outside the work. (The contrast with the
Beethoven analysis is noteworthy in this regard.) We are concerned entirely with
relations between segments. To say this is not, of course, to deny that some notion
of abstraction enters into the relating of segments. As always with these charts, the
story told is implicit, so these supplementary remarks will be necessarily brief.
Example 5.31. Melodic content of Die Mainacht arranged paradigmatically.
[Example 5.31 (music notation, units 1–20 by strophe) not reproduced in this extraction.]
From the outset, Brahms presents a melodic line that has two separate and
fundamental segments. It is these that have determined our paradigms. The first,
unit 1, the originator of paradigm A units, encompasses the first bar and a half
of the vocal part, itself based on the pianist's humming in bars 1–2. This overall
ascending melody, rising from the depths, so to speak, carries a sense of hope (as
Deryck Cooke might say). Although the associated harmony makes it a closed
unit, the ending on a poetic third (scale-degree 3) slightly undermines its sense
of closure.
The second or oppositional segment, unit 2 (bars 4–5), the originator of paradigm B units, outlines a contrasting descending contour, carrying as well a sense
of resolution. Just as the inaugural unit (1) of paradigm A went up (B-flat–E-flat–F)
and then redoubled its efforts to reach its goal (E-flat–F–G), so paradigm B's leading
unit (2) goes down (B-flat–A-flat–G) before redoubling its efforts (B-flat–G–E-flat) to reach
its goal. Each gesture is thus twofold, the second initiating the redoubling earlier
than the first.
It could be argued on a yet more abstract level that paradigms A and B units
are variants of each other, or that the initiator of paradigm B (unit 2) is a transformation of the initiator of paradigm A (unit 1). Both carry a sense of closure on
their deepest levels, although the stepwise rise at the end of unit 1 complicates its
sense of ending, just as the triadic descent at the end of unit 2 refuses the most
conventionally satisfying mode of melodic closure. Their background unity, however, makes it possible to argue that the entire song springs from a single seed.
[Example 5.32 (derivations of units, in "not ... but ..." format; music notation) not reproduced in this extraction.]
Indeed, this is a highly organic song, as I have remarked and as will be seen in
the discussion that follows, and this view is supported by a remark of Brahms
concerning his way of composing. In a statement recorded by Georg Henschel,
Brahms refers specifically to the opening of Die Mainacht. According to him,
having discovered this initial idea (unit 1 in 5.30), he could let the song sit for
months before returning to it, the implication being that retrieving it was tantamount to retrieving its potential, which was already inscribed in the myriad variants that the original made possible:
There is no real creating without hard work. That which you would call invention,
that is to say, a thought, an idea, is simply an inspiration from above, for which
I am not responsible, which is no merit of mine. Yes, it is a present, a gift, which
I ought even to despise until I have made it my own by right of hard work. And
there need be no hurry about that, either. It is as with the seed-corn; it germinates
unconsciously and in spite of ourselves. When I, for instance, have found the first
phrase of a song, say [he cites the opening phrase of Die Mainacht], I might shut
the book there and then go for a walk, do some other work, and perhaps not think
of it again for months. Nothing, however, is lost. If afterward I approach the subject
again, it is sure to have taken shape: I can now begin to really work it.20
And so, from the horse's own mouth, we have testimony that justifies the study of
melodic progeny, organicism, and, implicitly, the paradigmatic method.
Let us continue with our analysis, referring to example 5.32 and starting with
paradigm A units. The piano melody carries implicitly the rhythmically differentiated unit 1. Unit 3 is a transposition of 1, but what should have been an initial C is
replaced by a repeated F. Unit 5, too, grows out of 1, but in a more complex way, as
shown. It retains the rhythm and overall contour of units 1 and 3 but introduces
a modal change (C-flat and D-flat replace C and D) to effect a tonicization of the
third-related G-flat major. Unit 7 begins as a direct transposition of 1 but changes
course in its last three notes and descends in the manner of paradigm B units. (At
20. George Henschel, Personal Recollections of Johannes Brahms (New York: AMS, 1978; orig. 1907), 111.
a more detailed level of analysis, we hear the combination of both paradigm A and
B gestures in unit 7.) Unit 9 resembles 7, especially because of the near-identity of
the rhythm and the initial leap, although the interval of a fourth in unit 7 is augmented to a fifth in unit 9. But there is no doubting the family resemblance. Unit
12, the approach to the first high point, retains the ascending manner of paradigm
A but proceeds by step, in effect, filling in the initial fourth of unit 1, B-flat–E-flat. Units
14 and 16, which open the third strophe, are more or less identical to 1 and 3, while
18 is identical to 12. This completes the activity within paradigm A.
The degree of recomposition in paradigm B is a little more extensive but not so
as to obscure the relations among segments. Unit 4 avoids a literal transposition of
2, as shown; in the process, it introduces a stepwise descent after the initial descent
of a third. The next derivation suggests that, while reproducing the overall manner
of units 2 and 1, unit 6 extends the durational and intervallic span of its predecessors. Unit 8 follows 1 quite closely, compressing the overall descent and incorporating the dotted rhythm introduced in unit 4. Unit 10 resembles 8 in terms
of rhythm and the concluding descent; even the dramatic diminished-seventh in
its second bar can be imagined as a continuation of the thirds initiated in unit 2.
Unit 10 may also be heard as incorporating something of the manner of paradigm
A, specifically, the stepwise ascent to the F-flat, which is inflected and transposed
from the last three notes of unit 1.
Unit 11 fills in the gaps in unit 2 (in the manner of units 4 and 6) but introduces another gap of its own at the end. Unit 13 is based on unit 4, but incorporates a key pitch, B-flat, from unit 2. Units 15 and 17 are nearly identical to 2 and
4, but for tiny rhythmic changes. Unit 19, the fallout from the climactic unit 18,
is derived from 13. Finally, unit 20 magnifies the processes in 19, extending the
initial descending arpeggio and following that with a descending stepwise line that
brings the melodic discourse to a full close.
Example 5.33 summarizes the affiliations of the units in Die Mainacht.
The basic duality of the song is evident in the two columns, A and B:

Example 5.33. Unit affiliations in Die Mainacht, by strophe.
                  Strophe 1        Strophe 2      Strophe 3
Column A units    1, 3, 5, 7, 9    12             14, 16, 18
Column B units    2, 4, 6, 8       10, 11, 13     15, 17, 19, 20

The almost
strict alternation between A units and B units is also noteworthy. Only twice
is the pattern broken: once in the succession of units 10 and 11 (although the
affiliation between unit 10 and paradigm A units should discourage us from
making too much of this exception) and once at the very end of the song, where
the closing quality of paradigm B units is reinforced by the juxtaposition of
units 19 and 20.
Again, I do not claim that Brahms prepared analytical charts like the ones that I
have been discussing and then composed Die Mainacht from them. The challenge
of establishing the precise moments in which a composer conceived an idea and
when he inscribed that idea (the challenge, in effect, of establishing a strict chronology of the compositional process, one that would record both the written and the
unwritten texts that, together, resulted in the creation of the work) should discourage us from venturing in that direction. I claim only that the relations among the
work's units, some clear and explicit, others less clear, speak to a fertile, ever-present
instinct in Brahms to repeat and vary musical ideas and that a paradigmatic display of the outcome can be illuminating. While it is possible to recast the foregoing
analysis into a more abstract framework that can lead to the discovery of the unique
system of a work, this will take me too far afield. I will return to Brahms in a subsequent chapter to admire other aspects of this economy of relations in his work.
Conclusion
My aim in this chapter has been to introduce the paradigmatic method and to
exemplify the workings of a paradigmatic impulse in different stylistic milieus.
I began with "God Save the King" and explored questions of repetition, identity,
sameness, and difference, as well as harmonic continuity and discontinuity. I then
took up, in turn, discontinuity and form in a Mozart sonata movement, chronological versus logical form in a Chopin prelude, tonal models in a Beethoven quartet movement, and melodic discourse in a Brahms song. While I believe that these
analyses have usefully served their purpose, I hasten to add that there is more to
the paradigmatic method than what has been presented here. In particular, and
reflecting some doubt on my part about their practical utility, I have overlooked
some of the abstract aspects of the method, for despite the intellectual arguments
that could be made in justification (including some made in these very pages), I
have not succeeded in overcoming the desire to stay close to the hearable aspects
of music. Thus, certain abstract or "on paper" relations that might be unveiled in
the Brahms song were overlooked. Similarly, the idea that each work is a unique
system and that a paradigmatic analysis, pursued to its logical end, can unveil the
particular and unique system of a work has not been pursued here. For me, there
is something radically contingent about the creation of tonal works, especially in
historical periods when a common musical language is spoken by many. The
idea that a unique system undergirds each of Chopin's preludes, or each of the
movements of Beethoven's string quartets, or every one of Brahms's songs, states an
uninspiring truth about artworks while chipping away at the communal origins of
musical works. But I hope that the retreat from this kind of "paper" rigor has been
compensated for by the redirecting of attention to the living sound.
I hope also to have shown that, the existence of different analytical plots notwithstanding, the dividing lines between the various approaches pursued in the
first part of this book are not firm but often porous, not fixed but flexible. Professionalism encourages us to put our eggs in one basket, and the desire to be rigorous compels us to set limits. But even after we have satisfied these (institutionally
motivated) intellectual desires, we remain starkly aware of what remains, of the
partiality of our achievements, of gaps between an aspect of music that we claim
to have illuminated by our specialized method and the more complex and larger
totality of lived musical experience.
PART II
Analyses
CHAPTER
Six
Liszt, Orpheus (1853–1854)
Although the shadows cast by words and images lie at the core of Liszt's musical
imagination, his will to illustration, translation, or even suggestion rarely trumped
the will to musical sense making. For example, in rendering the Orpheus myth as a
symphonic poem or, rather, in accommodating a representation of the myth in symphonic form, Liszt's way bears traces of the work's poetic origins, but never is the
underlying musical logic incoherent or syntactically problematic. But what exactly is
the nature of that logic, that formal succession of ideas that enacts the discourse we
know as Liszt's Orpheus? It is here that a semiotic or paradigmatic approach can prove
revealing. By strategically disinheriting a conventional formal template, a semiotic
analysis forces us to contemplate freshly the logic of sounding forms in Orpheus. The
idea is not to rid ourselves of any and all expectations; that would be impossible,
of course. The idea, rather, is to be open-minded about the works potential formal
course. Accordingly, I will follow the lead of the previous chapter by first identifying
the works building blocks one after another. Later, I will comment on the discourse of
form and meaning that this particular disposition of blocks makes possible.1
Building Blocks
Unit 1 (bars 1–7)
A lone G breaks the silence. There are too many implications here to make active
speculation meaningful. We wait. We defer entirely to the composer's will. Then,
the harp (our mythical hero) enters with an arpeggiation of an E-flat major chord.
1. In preparing this analysis, I found it convenient to consult a solo piano transcription of Orpheus
made by Liszt's student Friedrich Spiro and apparently revised by the composer himself. The transcription was published in 1879. For details of orchestration, one should of course consult the full
score. Students should note that the numbering of bars in the Eulenburg score published in 1950 is
incorrect. Orpheus has a total of 225 bars, not 245 as in the Eulenburg score.
These sounds seem to emanate from another world. The harp chord rationalizes
the repeated Gs: they are the third of the chord, a chord disposed in an unstable
second inversion. But the functional impulse remains to be activated. Indeed,
nothing is formed here. All is atmosphere and potentiality.
Unit 2 (bars 8–14)
The lone G sounds again, repeated from bars 1–3. This repetition invites speculation. Will the G be absorbed into an E-flat major chord as before, or will it find a
new home? We wait, but with a more active expectancy. The chord that rationalizes G is now an A7, duly delivered by the harp. Now G functions as a dissonant
seventh. Our head tone has gone from being relatively stable to being relatively
unstable. What might its fate be? If the harp chords mean more to us than space-opening gestures, we will wonder whether they are related. Although linked by a
common G (the only common pitch between the successive three-note and four-note chords), the distance traveled in tonal terms is considerable. The chords lie
a tritone apart; in conventional terms, this is the furthest one can travel along the
circle of fifths (assuming this to be the normative regulating construct even in the
1850s). Between the sound of this diabolus in musica, the otherworldly timbre of
the harp, and the ominous knocking represented by the two Gs, we may well feel
ourselves drawn into supernatural surroundings.
Unit 3 (bars 15–20)
With this third, fateful knock on the door, we are led to expect some direction,
some clarity, though not necessarily closure. Conventional gestural rhetoric recommends three as an upper limit. The first knock is the inaugural term; it is given
and must be accepted. The second reiterates the idea of the first; by refusing change,
it suggests a pattern, but withholds confirmation of what the succession means.
Finally, the third unveils the true meaning of the repetition; it confirms the intention and assures us that we did not mis-hear. (A fourth knock would risk redundancy and excess.) Clarity at last emerges in the form of a melodic idea. This will
be the main theme led by the note G. We now understand that the protagonist was
attempting to speak but kept getting interrupted. The theme is closed, descending
to scale-degree 1 and sporting a cadential progression from I6. A relatively short phrase (5 bars
in length), its manner suggests an incremental or additive modus operandi. We
also begin to make sense of the tonal logic: E-flat and A lie equidistant on either
side of the main key (C); in effect, they enclose it. The main key is thus framed
tritonally although the approach to it is somewhat oblique. A certain amount of
insecurity underpins this initial expression of C major.
Unit 4 (bars 21–26)
Since the beginnings of the previous unit and this one share the same chord albeit
in different positions (unit 3 opens with the C-seventh chord in 6/5 position, while
unit 4 begins with the chord in 4/2 position), we might wonder whether this unit
[Music examples spanning pages 213–220 not reproduced in this extraction.]
Form
With this first pass through Liszts Orpheus, I have identified its units or building blocks on the basis of repetition and equivalence. Each unit is more or less
clearly demarcated, although some units overlap, and while the basis of associating some may change if other criteria are invoked, the segmentation given
here confirms the intuition that Orpheus is an assembly of fragments. If we
now ask what to do with our 50 units, we can answer at two levels. The first,
more general level suggests a mode of utterance that is more speech-like than
song-like. (There is no dance here, or at least no conventional dance, unless
one counts the unpatterned movements that Orpheus's playing solicits.) As
a rule, when units follow one another as predominantly separate units, they
suggest speech mode; when they are connected, they are more likely to be in
song mode. The speech-mode orientation in Orpheus betrays its conception as
a symphonic poem, with obligations that originate outside the narrow sphere
of musical aesthetics. Another way to put this is to suggest that the illustrative,
narrative, or representational impetus, whichever it is, confers on the music
a more language-like character.
At a more detailed level, the foregoing exercise conveys the form of Orpheus
with clarity. Aligning the 50 units according to the criteria of equivalence adumbrated in the verbal description yields the design in figure 6.1.
[Figure 6.1 (paradigmatic alignment of the 50 units) not reproduced in this extraction.]
The main difficulty with this kind of representation is that, by placing individual units in one column or another (and avoiding duplications), it denies
affiliations with other units. But this is only a problem if one reads the material
in a given unit as an unhierarchized network of affiliations. In that case, rhythmic, periodic, harmonic, or other bases for affiliating units will be given free
rein, and each given unit will probably belong to several columns at once. But
the arrangement is not a problem if we take a pragmatic approach and adopt
a decisive view of the identities of units. In other words, if we proceed on the
assumption that each unit has a dominating characteristic, then it is that characteristic that will determine the kinds of equivalences deemed to be pertinent.
The less hierarchic approach seems to promote a more complex view of the
profile of each given unit, thus expanding the network of affiliations that is possible among units. It runs the risk, however, of treating musical dimensions as if
they were equally or at least comparably pertinent and therefore of underplaying the (perceptual or conceptual) significance of certain features. Yet, I believe
that there is often an intuitive sense of what the contextually dominant element
is, not only by virtue of the internal arrangement of musical elements, but on
the basis of what precedes or follows a given unit and its resemblance to more
conventional constructs.
As we have seen in similar analyses, several stories can be told about the
paradigmatic arrangement made above. One might be based on a simple statistical measure. Three ideas occur more frequently than others. The first is
the material based on unit 7, the neighboring figure expressed in a haunting,
implicative rhythm, which registers 15 times in figure 6.1. On this basis, we
may pronounce it the dominant idea of the movement. The second idea is the
codetta-like figure first heard as unit 18; this occurs 11 times. Third is the
main theme, heard initially as unit 3, and subsequently 7 more times. This
frequency of occurrence reflects the number of hits on our paradigmatic chart.
Had we chosen a different segmentation, we might have had different results.
Note, further, that the frequency of appearance is not perfectly matched to the
number of bars of music. Material based on unit 7 occupies 80 bars, that based
on 18 only 27 bars, and that based on 3 equals 39 bars. If we now consider
the character of each of these ideas, we note that the units based on 7 have
an open, implicative quality, serving as preparations, middles, transitions, or
simply as neutral, unsettled music. They are fragmentary in character and
thus possess a functional mobility. This is the material that Liszt uses most
extensively. Units based on the main theme occur half as much. The theme is
closed, of course, although in the segmentation given here, closure is present
at the end of each unit, thus missing out on the question-and-answer sense of
pairs of units. For example, the very first succession, units 3–4, may be read
not as question followed by answer but as answer followed by question. That
is, 3 closes with a tonic cadence while 4 modulates to the dominant. Had we
taken the 3–4 succession as an indivisible unit, we would have read the main
theme as open. But by splitting it into two, we have two closed units, although
closure in the second is valid on a local level and undermined on a larger one.
The third most frequently heard idea carries a sense of closure. Even though
it occupies only 27 bars, its presentation is marked in other ways. Then also, if
CHAPTER 6
Liszt
223
we add its occurrences to those of the main theme, we have a total of 66 bars,
which brings the material with an overall closing tendency closer to the 80-bar
mark attained by material with an open tendency.
In whatever way we choose to play with these figures, we will always be
rewarded with a view of the functional tendencies of Liszts material. And it is
these tendencies that should form the basis of any association of form. Of course,
the simple statistical measure will have to be supplemented by other considerations, one of which is placement. The main theme, for example, is introduced
early in the movement (units 3–6), disappears for a long while, and then returns
toward the end (38–39 and 44–45). The animating idea introduced in unit 7 enters
after the main theme has completed its initial work. It dominates a sizable portion of the continuation of the beginning (units 7–14) before being superseded.
It returns to do some prolonging work before the main theme returns (34–37)
and stays toward the close of the work (units 40–41 and 46). If the main theme
is roughly associated with beginning and end, the animating theme functions
within the middle and the end. The codetta theme (introduced as unit 18) dominates the middle of the work (units 18–19, 21–22, 28–29, and 31–32), lending a
strong sense of epilogue to the middle section. It returns toward the very end of
the movement (units 47–49) to confer a similar sense of closure, now appropriately, upon the work's end.
The closing unit, 50, also invites different kinds of interpretation. The novelty of its chords places it immediately in a different column. By this reading,
the ending is entirely without precedent or affiliation. But I have pointed to
a sense of affiliation between unit 50, on the one hand, and units 1 and 2, on
the other. By the lights of this alternative reading, unit 50 may be placed in
the column containing units 1 and 2, so that the work ends where it began.
This particular affiliation would be the loosest of all, and yet it is supportable
from a gestural point of view. And since the key is C major, the home tonic,
the expression of tonic harmony in unit 3 comes to mind, for that was the first
composing out of what would turn out to be the main key. Again, these affiliations show that unit 50, despite its obvious novelty, is not entirely without
precedent. In this case, our paradigmatic chart does not do justice to the web
of connections.
The paradigmatic chart also sets units 15–33 apart as a self-contained section.
Indeed, the lyrical theme introduced at unit 15 evinces a stability and well-formedness that are not found in the outer sections. Consider the mode of succession
of units 15–24. Unit 15 begins the lyrical song, 16 prepares a broad cadence but
denies the concluding syntax (this is only the first try), and 17 achieves the actual
cadence. Unit 18 confirms what we have just heard, while 19 echoes that confirmation. Unit 19 meanwhile modifies its ending to engender another cadential
attempt, albeit in a new key. Unit 20 makes that attempt, succeeds (in bars 91–92),
and is followed again by the confirming units 21 and 22. Another shift in key
leads to the third effort to close in unit 23. Its successor, 24, achieves the expected
closure, but now dispenses with the codetta idea that followed the cadences at
the end of units 17 and 20. This little drama across units 15–24 unfolds with a
sense of assurance and little sense of dependency. The self-containment and relative stasis confer on these units the spirit of an oasis. The idea of repeating such an
interlude could not have been predicted from earlier events, but this is precisely
what Liszt does. Thus, units 15–24 are repeated more or less verbatim as 25–33.
If the former provided temporary stability within a largely fragmented discourse,
the latter confirm that which demonstrated no inner need for confirmation. Of
course, we become aware of the repetition only gradually and only in retrospect,
and so the sense of redundancy is considerably mediated. At 33, matters are not
closed off as at 23–24 but kept open in order to resume what we will come to know
as the process of recapitulation.
The view of form that emerges from a paradigmatic analysis recognizes repetition and return at several different levels, small and large. Some listeners will,
however, remain dissatisfied with this more open view of form, preferring to hear
Orpheus in reference to a standard form. Derek Watson, for example, writes,
"Orpheus is akin to the classical slow-movement sonata without development."3
Richard Kaplan repeats this assertion: Orpheus has the "sonata without development" form common in slow movements; the only development takes place in
what [Charles] Rosen calls the "secondary development" section following the first
theme area in the recapitulation.4
Kaplan is determined to counter the image of a revolutionary Liszt who
invented an original genre (symphonic poem) that necessarily announces its
difference from existing forms. His evidence for the relevance of a sonata-form
model to Orpheus embraces external as well as internal factors. The fact that this
symphonic poem, like others, began life as an overture (it prefaced a performance
of Gluck's Orfeo ed Euridice) and the fact that Liszt's models for writing overtures
included works by Beethoven and Mendelssohn that use sonata form (this latter
fact, Kaplan says, is confirmed by Reicha and Czerny, both of whom taught Liszt)
lead Kaplan to search for markers of sonata form. He finds an exposition with two
key areas (C and E) and two corresponding theme groups in each area, no development, and a recapitulation in which theme Ib precedes Ia, IIa is recapitulated
first in B major and then in C major, themes Ia and IIb follow (again) in C major,
and then a closing idea and a coda round off the work.
The argument for a sonata-form interpretation is perhaps strongest in connection with the presence of two clearly articulated keys in the exposition and
the return, not necessarily recapitulation, of the themes unveiled during the
exposition. But in making a case for a sonata-form framework for Orpheus, Kaplan
is forced to enter numerous qualifications or simply contradictory statements. For
example, he says that "three-part organization is the most consistent and logical explanation of large-scale form in [Faust Symphony/1, Prometheus, Les Preludes, Tasso, and Orpheus]." But Orpheus lacks one of these three parts, the central
development section. He finds "many of [Liszt's] usages . . . unconventional"; the
introduction is "very brief"; the repetition of the big lyrical theme (Kaplan's theme
II) is "non-standard"; the reordering of themes in the recapitulation is "subtle"; the
use of themes in dual roles is "unusual"; there are "several departures from classical tradition" in the handling of tonality; and recapitulating the second theme
follows "an unusual key scheme." These qualifications do not inspire confidence
in the pertinence of a sonata-form model. Granted, Kaplans study is of several
symphonic poems plus the opening movement of the Faust Symphony, and it is
possible that sonata form is more overtly manifested there than in Orpheus. Nevertheless, hearing Orpheus as a single-movement sonata structure is, in my view,
deeply problematic.
Listeners are more likely to be guided by the musical ideas themselves and their
conventional and contextual tendencies. The drama of Orpheus is an immediate
one: ideas sport overt values; return and repetition guarantee their meaningfulness. Attending to this drama means attending to a less directed or less prescribed
formal process; and the temporal feel is less linearly charged. Liszt marks time,
looks backward, even sideways. Succession rather than progression conveys the
pull of the form. Gathering these constituents into a purposeful exposition and
recapitulation obscures the additive view of form that our paradigmatic analysis
promotes.
An analysis altogether more satisfactory than Kaplan's is provided by Rainer
Kleinertz.5 Kleinertz's main interest is the possible influence of Liszt on Wagner,
not from the oft-remarked harmonic point of view, but as revealed in formal procedures. Wagner himself had been completely taken with Orpheus, declaring it
in 1856 to be "a totally unique masterwork of the highest perfection" and "one of
the most beautiful, most perfect, indeed most incomparable tone poems," and still
later (in 1875) as "a restrained orchestral piece . . . to which I have always accorded a
special place of honor among Liszt's compositions."6 Although Wagner appears to
have been especially taken with Liszt's approach to form, especially his jettisoning
of traditional form and replacing it with something new, Kleinertz's essay concretizes the earlier intuitions in the form of an analysis.
From the point of view of the analysis presented earlier, the main interest in
Kleinertz's study stems from his interpretation of Orpheus as unfolding a set of
formal units. His chart of the overall form contains 18 units (all of the principal boundaries of his units coincide with those of my units, but his segmentation
takes in a larger hierarchic level than mine does). Noting (in direct contradiction
to Kaplan) that there is "no resemblance to sonata form"7 and that Orpheus has
no "architectonic form,"8 he argues instead that the whole piece seems to move
slowly but surely and regularly along a chain of small units, sometimes identical,
5. Rainer Kleinertz, "Liszt, Wagner, and Unfolding Form."
6. Quoted in Kleinertz, "Liszt, Wagner, and Unfolding Form," 234–240.
7. Kleinertz, "Liszt, Wagner, and Unfolding Form," 234.
8. Kleinertz, "Liszt, Wagner, and Unfolding Form," 237.
Meaning
Because they originate in a verbal narrative or plot or are earmarked for illustrative
work, symphonic poems have a twofold task: to make musical sense and to create
the conditions that allow listeners so disposed to associate some musical features
with extramusical ones. Both tasks subtend a belief about musical meaning. The
first is based in the coherence of meanings made possible by musical language and
syntax; the second is based on the prospect of translating that language or drawing
analogies between it and formations in other dynamic systems.
Although Orpheus resists a reading as a continuously illustrative work, certain
features encourage speculation about its illustrative potential. I mentioned some
of these in my survey of its building blocks. The disposition of the opening harp
chords and the lone G that announces them mimic the making of form out of
formlessness; they identify a mythical figure, our musician/protagonist, and they
convey a sense of expectancy. Even the tonal distance in these opening bars could
be interpreted as signifying the remoteness of the world that Liszt seeks to conjure.
The main theme, delivered by strings, carries a sense of the journeys beginning;
we will follow the contours of a changing landscape through its coming transformations. The ominous neighbor-note idea, with its sly, chromatic supplement,
is permanently marked as tendering a promise, pushing the narrative forward to
some unspecified goal. The beautiful lyrical theme in E major announces a new
character or, rather, a new incarnation of the previous character. Unfolding deliberately, complete in profile, and burdened with closure, it provides an extended
tonal contrast that may well suggest the outdoors. The large-scale repetition of this
passage signifies a desire on the protagonist's part to remain within this world, but
the return of the main theme is harder to interpret as an element of plot. Because
the associations are presumably fixed from the beginning and because verbal or
dramatic plots do not typically retrace their steps in the way musical plots do, the
fact of return forces the interpreter into a banal mode: hearing again, refusing to
go forward, recreating a lost world. Finally, the concluding magic chords ("mysterious chords," in Kleinertz's description) may well signify Orpheus in the underworld. They connote distance as well as familiarity, a familiarity stemming from
their affiliation with the work's opening, with its floating chords marked "destination unknown." We are always wiser in retrospect.
Listeners who choose to ignore these connotations, or who reject them on
account of their contingency, will not thereby find themselves bored, for everything from the thematic transformations through the formal accumulation of units
is there to challenge them and guarantee an engaged listening. But if we ask how
these competing perspectives promote a view of musical meaning, it becomes
apparent that the interpretive task poses different sorts of challenges. It is in a sense
easy to claim that the magic chords at the end of Orpheus signify something different or otherworldly, or that the big lyrical theme signifies a feminine world, or
that the harp is associated with the heavens, or even that chromaticism engenders
trouble, slyness, or instability. These sorts of readings may be supported by historical and systematic factors. They provide ready access to meaning, especially that
of the associative or extrinsic variety. But they are not thereby lacking a dimension
of intrinsic meaning, for the remoteness of the closing chords, for example, stems
more fundamentally from the absence of the regular sorts of semitonal logic that
we have heard throughout Orpheus. In other words, and recognizing the fragility
of the distinction, intrinsic meaning is what makes extrinsic meaning possible.
It is within the intrinsic realm that music's language is formed. Its meanings are
therefore both structural and rhetorical at birth. They are, in that sense, prior to
those of extrinsic associationism. They may signify multiply or neutrally, but their
semantic fields are not infinite in the way that unanchored extrinsic meanings can
be. If, therefore, we opt for intrinsic meanings, we are expressing an ideological
bias about their priority, a priority that is not based entirely on origins but ultimately on the practical value of music making. This chapter has explored a mode
of analysis that seeks to capture some of that value.
CHAPTER Seven
How might we frame an analysis of music as discourse for works whose thematic
surfaces are not overtly or topically differentiated, the boundaries of whose building blocks seem porous and fluid, and whose tendencies toward continuity override those toward discontinuity? Two works by Brahms will enable us to sketch
some answers to these questions: the second of his op. 119 intermezzi and the
second movement of the First Symphony. Unlike Liszt's Orpheus, whose building blocks are well demarcated, Brahms's piano piece has a contrapuntally dense
texture and a subtle, continuous periodicity marked by a greater sense of through-composition. And in the symphonic movement, although utterances are often set
apart temporally, the material is of a single origin and the overall manner is deeply
organic. Dependency between adjacent units is marked. Nevertheless, each work
must breathe and deliver its message in meaningful chunks; each is thus amenable
to the kind of analysis that I have been developing. In addition to performing the
basic tasks of isolating such chunks and commenting on their affiliations, I hope
to shed light on Brahms's procedures in general, as well as on the specific strategies
employed in these two works.
A paradigmatic analysis conveys this division of labor by registering a striking difference in population of units: 22 in the A section, 19 in the A' section, and a mere
8 in the B section.
Anyone who plays through this intermezzo will be struck by the constant
presence of its main idea. A brief demonstration of the structural origins of this
idea will be helpful in conveying the nature of Brahms's harmonically grounded
counterpoint. At level a in example 7.1 is a simple closed progression in E minor,
the sort that functions in a comparable background capacity in works by Corelli,
Bach, Mozart, Beethoven, and Mendelssohn: in short, a familiar motivating
progression for tonal expression. The progression may be decorated as shown at
level b. Level c withholds the closing chord, transforming the basic idea into an
incomplete utterance that can then be yoked to adjacent repetitions of itself. Level
d shows how Brahms's opening enriches and personalizes the progression using
temporal displacements between treble and bass. This, then, is the kind of thinking that lies behind the material of the work.
In preparing a reference text for analysis (example 7.2), I have essentialized
the entire intermezzo as a two-voice stream in order to simplify matters. (A few
irresistible harmonies are included in the middle section, however.) Example 7.2
[Example 7.1. Origins of the main idea in Brahms, Intermezzo in E Minor, op. 119, no. 2 (music example not reproduced).]
CHAPTER 7
Brahms
231
[Example 7.2. Two-voice reduction of Brahms, op. 119, no. 2 (chart not reproduced).]
should thus be read as a kind of shorthand for the piece, a sketch whose indispensable supplement is the work itself. Units are marked by a broken vertical line and
numbered at the top of each stave. Bar numbers are indicated beneath each stave.
Unit 1 (bar 1)
A tentative utterance progresses from tonic to dominant and exposes the main
motif of the work. This idea will not only dominate the outer sections but will
remain in our consciousness in the middle section as well. The intermezzo thus
approaches the condition of a monothematic work. This building block is open,
implying, indeed demanding, immediate continuation.
Unit 2 (bars 1–2)
Overlapping with the preceding unit, this one is essentially a repetition, but the
melodic gesture concludes differently, with an exclamation (B–D) that suggests
a mini-crisis. Shall we turn around and start again, or go on, and if so, where?
Unit 3 (bars 2–3)
This is the same as unit 1. Brahms chose to start again.
Unit 4 (bars 3–7)
Beginning like 1 (and 3), this unit replaces D-sharp with D-natural on the downbeat
of bar 5, hints at G major, but immediately veers off by means of a deceptive move
(bass D–D-sharp) toward B major, the home key's major dominant. Arrival on B
major recalls unit 2, but here the utterance is louder and rhetorically heightened.
Unit 5 (bars 7–9)
Lingering on the dominant, this unit extends the dominant of the dominant just
triumphantly attained, but it effects a modal reorientation toward the minor. The
main motive continues to lead here.
Unit 6 (bar 9)
This is the same as unit 1.
Unit 7 (bars 9–10)
This repeats the material from unit 2.
Unit 8 (bars 10–11)
This is the same as 1 (and 3 and 6).
Unit 9 (bars 11–12)
Five chromatic steps (B to E) provide a link to A minor from E minor. In this transitional unit, the E-minor triad is turned into major and, with an added seventh,
functions as V7 of A minor to set up the next unit.
by this last incorporation of G-natural, which recalls the join between the A and B
sections (bars 33–35). The energy in the main motive is neutralized somewhat.
Unit 31 (bars 71–72)
This unit is the equivalent of unit 1. As typically happens in ternary structures
like this, the reprise (starting with the upbeat to bar 72) reproduces the content
of the first A section with slight modifications. My comments below will mainly
acknowledge the parallels between the two sections.
Unit 32 (bars 72–73)
This is the same as unit 2.
Unit 33 (bars 73–74)
This is equivalent to unit 3.
Unit 34 (bars 74–75)
This unit proceeds as if recapitulating unit 9, but it curtails its upward chromatic
melodic rise after only three steps.
Unit 35 (bars 75–76)
This is the proper equivalent of unit 9, differently harmonized, however.
Unit 36 (bars 76–77)
This unit is a rhythmic variant of unit 10. The process of rhythmic revision replaces
the earlier triplets with eighths and sixteenths, bringing this material more in line
with the original rhythm of the main theme. We may make an analogy between
this process and the tonic appropriation of tonally other material during a sonata-form recapitulation.
Unit 37 (bars 77–78)
Equivalent to 11, this unit is also heard as an immediate repeat of 36.
Unit 38 (bars 79–81)
This is the equivalent of 12, with rhythmic variation.
Unit 39 (bars 81–82)
This unit is equivalent to 13.
[Example 7.3. Paradigmatic chart of the units in Brahms, op. 119, no. 2 (chart not reproduced).]
gradually while the main idea remains as a constant but variable presence. Novelty
is charted by the following succession: unit 2; then units 4–5; then 9; then 12; then
14; then 17–19; and finally 22. The degree of compositional control is noteworthy.
A similar strategy is evident in the middle section, where 23, 25, and 28 serve as
springboards that enable departures within the span of units encompassing this
section (23–30).
A third feature conveyed in the paradigmatic analysis is the pacing of utterances. In the B section, for example, there are only 8 units, compared to 41 in
the combined A sections. This conveys something of the difference in the kinds
and qualities of the utterances. Much is said in the A and A' sections; there is a
greater sense of articulation and rearticulation in these sections. The speech
mode predominates. Fewer words are spoken in the B section, where the song
mode dominates. As narration, the intermezzo advances a self-conscious telling in
the A section, pauses to reflect and sing innerly in the B section, and returns to
the mode of the A section in the A' section, incorporating a touch of reminiscence
at the very end.
The paradigmatic analysis, in short, makes it possible to advance descriptive
narratives that convey the character of the work. In principle, these narratives do
not rely on information from outside the work, except for a generalized sense of
convention and expectation drawn from the psychology of tonal expression. They
are bearers, therefore, of something like purely musical meaning. Some listeners
may choose to invest in other sorts of meaning, of course; the fact, for example,
that this is a late work may engender speculation about late style: conciseness,
minimalism, essence, subtlety, or economy as opposed to flamboyance, extravagance, determination, and explicitness. The title Intermezzo might suggest an
in-betweenness or transience. I am less concerned with these sorts of meanings
than with those that emerge from purely musical considerations. The kinds of
meaning that ostensibly formalist exercises like the foregoing make possible have
not, I feel, been sufficiently acknowledged as what they are: legitimate modes of
meaning formation. They do not represent a retreat from the exploration of meaning. As I have tried to indicate, and as the following analysis of the slow movement
of the First Symphony will further attest, formalist-derived meanings are enabling
precisely because they bring us in touch with the naked musical elements as sites
of possibility. Reticence about crass specification of verbal meaning is not an evasion, nor is it a deficit. On the contrary, it is a strategic attempt to enable a plurality
of inference by postponing early foreclosure. Boundaries, not barriers, are what we
need in order to stimulate inquiry into musical meaning. Indeed, not all such acts
of inferring are appropriate for public consumption.
Crucial to the meaning of this phrase is the retention of the pitch G-sharp (scale degree 3)
in the melody. Although we hear the phrase as somewhat self-contained, we also
understand that it remains in a suspended state; it remains incomplete. Conventionally, G-sharp's ultimate destination should be E (scale degree 1). We carry this expectation forward. The difference in effect between the two framing tonic chords
(with 3 in the top voice) reminds us of the role of context in shaping musical
meanings. The 3 in bar 1 is a point of departure; the 3 in bar 2 is a point of
arrival. A return to the point from which we embarked on our journey may suggest a turnaround: we have not gone anywhere yet, not realized our desire for
closure. So, although the progression in unit 1 is closed, its framing moments
carry a differential degree of stability in the work: the second 3 is weaker than
the opening 3.
Unit 2 (bars 3¹–4²)
This 2-bar phrase answers the previous one directly. The sense of an answer is
signaled by parallel phrase structure. But to gloss the succession of 2-bar phrases
as question and answer is already to point to some of the complexities of analyzing musical meaning. These 2 bars differ from the preceding 2-bar unit insofar
as they are oriented entirely to the dominant. Unit 1 progressed from I through
V and back to I; this one prolongs V through a secondary-dominant prefix. To say
that unit 2 answers 1, therefore, is too simple. Musical tendencies of question and
answer are complexly distributed and more than a matter of succession. The close
of unit 2 on the dominant tells us that this particular answer is, at best, provisional.
The answer itself closes with a question, a pointer to more business ahead. Unit 2
is thus simultaneously answer (by virtue of phrase isomorphism and placement)
and question (by virtue of tonal tendency).
A number of oppositions are introduced in this succession of units 1 and 2:
diatonic versus chromatic, closed versus open, disjunct versus conjunct motion,
major versus minor (conveyed most poignantly in the contrast between G-sharp
in unit 1 and G-natural in unit 2), and active versus less active rates of harmonic
change. The more we probe this juxtaposition, the more we realize that the overall
procedure of units 1 and 2 is not a causal, responsorial 2 + 2 gesture, but something
less directed, less urgent in its resultant cumulative profile. When we think of classical 4-bar phrases divided into 2 and 2, we think, at one extreme, of clearly demarcated subphrases, the second of which literally answers the first. But the degree to
which the subphrases are independent or autonomous varies from situation to
situation. Nor is this a feature that can always be determined abstractly; often, it
is the force of context that underlines or undermines the autonomy of subunits.
In Brahms's opening, the horns enter with a threefold repeated note, B (bar 2), to
link the two phrases. These Bs function first as an echo of what has gone before, as
unobtrusive harmonic filler that enhances the warmth of tonic harmony, and as
elements embedded spatially in the middle of the sonority, not at its extremities,
where they are likely to draw attention to themselves. The three Bs not only point
backward, they point forward as well. They prepare unit 2 by what will emerge
Then, revoicing the lower parts of unit 13, Brahms enhances their essential identity. These units are the first two of a threefold gesture that culminates in a high
point on the downbeat of bar 22 (unit 14) with a high B. As mentioned earlier, this
moment of culmination also features a return of the melody from unit 1 as a bass
voice, providing a spectacular sense of textural fusion at a rhetorically superlative
moment.
Units 15 (bars 23³–25¹), 16 (bars 25²–27²)
High points invariably bring on the end. The downbeat of bar 22 carries a promise of closure. Unit 15 reclaims the material of unit 2, but instead of advancing to
a half-close as in the comparable gesture in bar 12, it simply abandons the first
attempt at cadence and proceeds to a second, successful attempt (16). The means
are direct: Brahms brings back material we have heard twice previously (bars 3–4
and 15–16), withholds its continuation, and finally allows it. The anacruses that
shaped previous occurrences are present here too, though they now take the form
of a descending triad (bars 23 and 25). The long-awaited cadence in bars 26–27 is
engineered by this figure. Looking back, we now see how Brahms has led us gradually but inexorably to this point.
[Figure 7.1. Paradigmatic arrangement of units 1–16 in Brahms, Symphony no. 1/ii (chart not reproduced).]
Let us review the A section from the point of view of paradigmatic affiliation
(figure 7.1). The strategy is immediately apparent. Narrating rather than associating seems to be the process. We begin with a sequence of four sufficiently differentiated ideas (units 1–4). Then, we dwell on the fourth one for a bit (5, 6), add a new
idea (7), which we play with for a while (8, 9). Yet another new idea is introduced
(10). Its talewhich, in this interpretation, embodies its essential identityis
immediately repeated (11). The last of our new ideas follows (12) and is repeated
twice in progressively modified form (13 and 14). Finally, by way of conclusion, we return to an earlier idea, unit 2, for a twofold iteration (15 and 16). Overall, the process is incremental and gradual. Brahms's music unfolds in the manner of a speech discourse, complete with asymmetrical groupings and subtle periodicity. The idea of musical prose captures this process perfectly.
imitation, this extension of temporal units is Brahms's way of meeting the normative need for contrast and development in this portion of the movement.
Units 23 (bars 49¹–50¹), 24 (bars 50¹–50³), 25 (bars 51¹–52¹), 26 (bars 52¹–53²)
Units 20, 21, and 22 were grouped together on account of their morphological similarities. What follows in the next four units is a cadential gesture that leads us to expect a close in D-flat major, the enharmonic relative of C-sharp. Brahms makes much of the gesture of announcing and then withholding a cadence. Thus, unit 23 prepares a cadence, but the onset of 24, while syntactically conjunct, dissolves the desire for a cadence by replaying part of the melody that dominated units 20, 21, and 22. Again, 25 repeats 23, evoking the same expectation, but 26 avoids the 6/4 of unit 24 and sequences up a step from 25, at the same time extending the unit length. The cumulative effect is an intensification of the desire for cadence.
Unit 27 (bars 53²–55³)
We reach a new plateau with the arrival on a sforzando G-sharp at 53², the beginning of unit 27. The familiar sequence of features (a long note followed by a doodling descent) is soon overtaken by a fortissimo diminished-seventh chord (bar 55³), a chord that (again, in retrospect) marks the beginning of the next unit.
Units 28 (bars 55³–57¹), 29 (bars 57¹–57³), 30 (bars 58¹–59²)
The beginning of the end of the movement's middle section is signaled strongly by the diminished-seventh chord at 55³. This is not a moment of culmination as such; rather, it sounds a distant note and then gradually diminishes that distance. It carries the aura of a high point by virtue of rhetoric, but it is achieved more by assertion than by gradual accumulation. Indeed, it is the staged resolution in its immediate aftermath that confirms the markedness of this moment.
Units 28, 29, and 30 are thematic transformations of each other. Unit 30 is the most intricate from a harmonic point of view. After 30, the need for a cadence becomes increasingly urgent, and over the next six units, the obligation to close the middle section is dramatized by means of Brahms's usual avoidance strategy.
Units 31 (bars 59².5–60¹), 32 (bars 60¹.5–61¹), 33 (bars 61¹–61²), 34 (bars 61³–62¹), 35 (bars 62²–63¹)
Speech mode intrudes here as the units become notably shorter. Unit 31 ends deceptively, while 32 avoids a proper cadence by dovetailing its ending with the beginning of 33. Units 33 and 34 are materially identical but are positioned an octave apart; the connection between them is seamless despite the timbral changes associated with each group of four sixteenth-notes. And unit 35 is left to pick up the pieces, echoing the contour of the last three notes of 34 in a durational augmentation that suggests exhaustion.
[Figure 7.2 content not recoverable; surviving column labels: 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 (truncated), 32, 33, 34, 35 (fragment), 36.]
We may now summarize the affiliations among the units of the B section (see figure 7.2). Immediately apparent are two features: first, but for units 19 and 36, each vertical column contains a minimum of two and a maximum of five elements. This indicates a strategy of use and immediate reuse of material, the extent of which exceeds what we heard in the A section. If the A section seemed linear in its overall tendency, the B section incorporates a significant circular tendency into its linear trajectory. Second, but for the succession 26–27, where we return to previous material, the overall tendency in this section is to move forward, adding new units and dwelling on them without going back. This, again, promotes a sense of narration. The process is additive and incremental, goal-directed and purposeful.
While the A and B sections share an overall incremental strategy, there are differences, the most telling being a sense of return in the A section (for example, units 15–16 belong with unit 2 in the same paradigmatic class), which is lacking in the B section. In other words, a sense of self-containment and
46 = (based on) 9
47 = (based on) 10
48 = 10
49 = 47 = 10
50 = 48 = 10
51 = 12
52 = 13
53 = 14
54 = 15
55 = 16
[Figure content not recoverable; surviving column labels: 38, 39 (truncated), 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55.]
is the recasting in unit 51 of the memorable oboe melody from unit 12. Its closing
quality is enhanced by doubling and by the incorporation of a triplet figure in the
accompaniment.
3. For an in-depth study of flat-7, see Jason Stell, "The Flat-7th Scale Degree in Tonal Music" (Ph.D. diss., Princeton University, 2006).
section to the A' section, the appearance of flat-7 (D-natural in bar 64) intensifies the progression to the subdominant. Meanwhile, solo cello adds an epigrammatical quality to this moment, while the E pedal ensures the stasis that reeks of homecoming. Texturally, the music rises from a middle to a high register across 2 bars (101–102), then Brahms repeats the gesture (102–104) with the slightest of rhythmic decorations in unit 57.
Units 58 (bars 104³–105³), 59 (bars 106¹–106³), 60 (bars 107¹–107³), 61 (bars 108¹–108³), 62 (bars 109¹–111³)
One way of enhancing the retrospective glance of a coda is to return to earlier, familiar material and use it as the basis of a controlled fantasy, as if to signal dissolution. Brahms retrieves the motive from bars 21–23 (unit 14, itself affiliated with the two preceding units and with the opening unit of the work by virtue of its G–A–C bass motion) for use in units 58–62. The process features varied repetition of 1-bar units in each of bars 105, 106, 107, 108, and 109, this last extended for another 2 bars. The 3-bar unit 62 functions as the culmination of the process, as a turning point, and as an adumbration of the return to the opening material. Indeed, while the last bar of unit 62 (111) is equivalent to the last bar of unit 1 (bar 2), the preceding bars of the two units are different.
Unit 63 (bars 112¹–114²)
Unit 2, with its prolonged dominant, returns here to continue this final phase, the ending of the ending, so to speak. Dark and lugubrious in bar 112, the material brightens on a major-mode 6/4 chord (bar 113), reversing the effect of the minor 6/4 coloration that we have come to associate with this material. Again, the long-term effectiveness of this modal transformation (a modal fulfillment, we might say) is hard to overestimate.
Unit 64 (bars 114¹–116¹)
Another plateau is reached with this resolution to tonic. As if imitating the beginning of the coda (bars 100–101), the melody descends through a flattened seventh. We recognize the accompaniment's dotted figure from bar 28, where it announced the beginning of the B section. Here, it is incorporated into a reminiscence of past events. Brahms puts a little spoke in the wheel to enhance the desire for tonic: in bar 116, an E–A♯ tritone halts the serene, predictable descent and compels a relaunching of the momentum to close.
Unit 65 (bars 116³–118¹)
With this reuse of the chromatic material from unit 3, we infer that the protagonist will have to make his peace with this chromatic patch. The effect is one of intrusion, as at the first appearance, but the presence of directed chromatic motion in
the previous unit mutes the effects of quotation or intrusion. There is a sense that we have entered a dream world, only for a moment, of course.4
Unit 66 (bars 118¹–120²)
Only in retrospect are we able to interpret bar 118 as the beginning of this unit. This is because the link across the bar line in 117–118 is conjunct rather than disjunct. Here, we are treated to the same bass descent of fifths that we encountered in units 10 (C♯–F♯–B–E), 48, and 50 (G♯–C♯–F♯–B), bringing us once more to the tonic (bar 120). Something about the melody of this unit evokes a sense of pastness, the same sense that we associated with units like 3. Overall, the freight of retrospection introduced in unit 56 (bar 100) continues to hang over the music here.
Unit 67 (bars 120²–122²)
This material is a repeat of 64, with slight modifications of texture and registral placement; it encounters the same tritonal crisis at the end of the unit.
Unit 68 (bars 122³–128)
One more intrusion of the chromatic segment finds its final destination in a clean E major chord, the goal of the movement. The final approach incorporates a strong plagal element (bars 123–124), not unlike the link between the B and A' sections. Five bars of uninflected E major provide the phenomenal substance for an ending, inviting us to deposit all existing tensions here and to think of heaven.
Figure 7.4 Paradigmatic arrangement of units 56–68 in Brahms's Symphony No. 1/ii. [Figure content not recoverable; surviving column labels: 56, 57, 58, 59, 60, 61, 62 (extended), 63, 64, 65, 66, 67, 68 (extended).]
A paradigmatic arrangement of the coda taken on its own would look like figure 7.4. Again, the stacking of the column with units 58–62 conveys the extensive use of repetition as the movement nears an end; this curbs the narrative tendency by withholding novelty. Indeed, the coda uses several prominent materials from earlier in the movement. Units 58–62, for example, are based on the first bar of the main material of the movement, unit 1. Unit 63 is equivalent to unit 2, while units 65 and 68 are equivalent to unit 3, the latter extended to close the movement. There is nothing new under the sun.
What, then, has the foregoing semiotic analysis shown? As always with analysis, a verbal summary is not always testimony to its benefits, for what I hope to have encouraged is hands-on engagement with Brahms's materials. Nevertheless, I can point out a few findings. I began by invoking Walter Frisch's remark that this movement is a supreme example of Brahmsian musical prose. The movement is indeed segmentable, not according to a fixed, rule-based phrase structure but on the basis of contextual (some might say ad hoc) criteria, that is, according to what Brahms wishes to express in the moment, from phrase to phrase and from section to section. I sought to characterize each bit of material according to its locally functional tendency and then to see what constructional work it is doing in the course of the movement. In some cases, the paradigmatic analysis simply confirms (or gives external representation to) long-held intuitions about musical structure. For example, the idea that closing sections feature extensive reiteration of previously heard material is conveyed by the affiliations among units 58–62. The narrative-like sense of the A section is conveyed by the number of new ideas that Brahms exposes there. The different pacing in utterance between the A and B sections (roughly, the prose-like nature of the A section against the initially verse-like nature of the contrasting B section, which, however, returns to prose from about unit 28 onward) is underwritten by the rhythm of units.
Enumerating these structural functions, however, does not adequately convey the sensual pleasure of analyzing a movement like this: playing through it at the piano, juxtaposing different units or segments, imagining a different ordering, writing out some passages on music paper, and listening to different recordings. In one sense, then, no grand statement is necessary because the process is multifaceted, multiply significant, and dedicated to doing. To have undertaken the journey is what ultimately matters; the report of sights seen and sounds heard signifies only imperfectly and incompletely.
CHAPTER Eight
Mahler, Symphony no. 9/i (1908–1909)
structure, the musical and the extramusical, or even structuralism and hermeneutics have not been rendered totally irrelevant in contemporary musicological
discourse. The differences they promulgate lie at the very root of almost all theorization of musical meaning.
The main intellectual labor in this book has been geared toward explicating the
directly musical qualities while letting the spirit of the music emerge from individual or even personal narratives drawn from these qualities. My reticence may
seem evasive, but is intended to be strategic. It is motivated in part by the recognition that the range of responses provoked by the spirit of the music is irreducibly
plural and unavoidably heterogeneous. Plurality and heterogeneity, however, speak not to unconstrained speculation but, more pragmatically, to the nature of our (metalinguistic and institutional) investments in critical practice. Nor do I mean
to imply that the directly musical qualities finally admit of homogeneous characterization. I believe, however, that the degree of divergence within the domain of
technical characterization is significantly narrower than the diversity within characterizations of spirit. This difference does not in itself carry a recommendation
about what is more meaningful to explore. Since, however, part of the burden of
this book has been to encourage (spiritual) adventures emanating from the observation of technical procedures, I will retain that stance in the analysis that follows.
Directly musical qualities will be constructed from the specific viewpoint of the
use of repetition.
Let us turn, then, to a work that is not exactly lacking in extensive analytical commentary.2 Indeed, some analyses, such as that by Constantin Floros, are implicitly paradigmatic. My own approach differs only to the extent that it is more explicit in this regard. As before, I will follow a two-step analytical procedure. First, I will identify the building blocks or units in the entire movement; second, I will explore some of the affiliations among the building blocks. It is at this latter stage (the stage of dispositio, we might say) that matters of discourse will come to the fore.
2. Henry-Louis de La Grange provides a useful synthesis of the major analytical studies of the Ninth while offering his own original insights. See Gustav Mahler: A New Life Cut Short (1907–1911) (Oxford: Oxford University Press, 2008), 1405–1452. See also Stephen Hefling, "The Ninth Symphony," in The Mahler Companion, ed. Donald Mitchell and Andrew Nicholson (Oxford: Oxford University Press, 2002), 467–490. Dániel Biró emphasizes timbre in the course of a holistic appreciation of the movement in "Plotting the Instrument: On the Changing Role of Timbre in Mahler's Ninth Symphony and Webern's op. 21" (unpublished paper).
of Brahms's E Minor Intermezzo, op. 119, no. 2, which we studied in the previous chapter. In terms of gesture, Mahler's movement is perhaps closest to Liszt's Orpheus, but the building blocks there are set apart more distinctly than in Mahler. Yet the same general principles of segmentation apply. Units must be meaningful and morphologically distinct. Repetition may be exact or varied (with several stages in between). Where it involves tonal tendencies, voice-leading paradigms, or thematic identities, repetition must be understood with some flexibility in order to accommodate looser affiliations. For example, if two units begin in the same way, or indicate the same narrative intentions, and conclude in the same way (syntactically) without, however, following the same path to closure, they may be deemed equivalent on one level. This loosening of associative criteria is necessary in analyzing a fully freighted tonal language; it is also necessary in order to accommodate Mahler's intricate textures and to register their narrative tendency. As we will see, a number of units carry an intrinsic narrative disposition. Narration is often (though by no means always) melody-led and imbued with tendencies of beginning, continuing, or ending.
Related criteria for segmentation are contrast and discontinuity. Contrast may
be expressed as the juxtaposition of dense and less dense textures, the alternation
and superimposition of distinct tone colors or groups of such colors, opposing or
distantly related tonal tendencies, or differentiated thematic gestures.3 We typically encounter contrast from the left, that is, prospectively, as if we walked into it. Not all apparent gestures of discontinuity sustain the designation of contrast, however.
When the collective elements of difference separating two adjacent units seem to
outnumber or be more forcefully articulated than the collective elements of sameness, we are inclined to speak of discontinuity. Neither continuity nor discontinuity
can be absolute, however.
In short, segmentation is guided by the tendency of the material: its proclivities and its conventional and natural associations. Listening from this point of
view involves attending to the unfolding dynamic in an immediate sense even
while recognizing associations with other moments. We listen forward, but we also
entertain resonances that have us listening sideways and backward.
It should be noted that the boundaries separating adjacent units are not always
as firm as our segmentation might sometimes imply. Processes within one unit
may spill over into the next. Sometimes, newness is known only in retrospect.
And the conflation of beginnings and endings, a procedure that goes back to the
beginnings of tonal thinking, is often evident in Mahler. Units may be linked by a
fluid transitional process whose exact beginning and ending may not be strongly
marked. The boundaries indicated by bar numbers are therefore merely convenient
signposts, provisional rather than definitive indicators of potential breaks. Listeners should not be denied the opportunity to hear past these boundaries if they so
desire; indeed, such hearing past is well-nigh unavoidable during a regular audition
3. On contrast in Mahler, see Paul Whitworth, "Aspects of Mahler's Musical Style: An Analytical Study" (Ph.D. diss., Cornell University, 2002). On the composer's manipulation of tone color, see John Sheinbaum, "Timbre, Form and Fin-de-Siècle Refractions in Mahler's Symphonies" (Ph.D. diss., Cornell University, 2002).
of the work. I ask only that listeners accept the plausibility of these boundaries for
the purposes of analysis.
Paradigmatic Analysis
I have divided the movement into 33 units. Let us make a first pass through the
movement by describing their features and tendencies. Later, I will speculate on
form and meaning. (From here on, the reader needs access to a full score of the
movement in order to verify the description that follows.)
Unit 1 (bars 1–6³)
The feeling of emergence that we experience in these opening bars (see the piano score of these bars in example 8.1) will be transformed when first violins enter at the end of bar 6 with F-sharp followed by E, a 3̂–2̂ melodic gesture that will be repeated immediately and, in due course, come to embody the narrative voice. The opening bars are fragmentary and timbrally distinct, and they lack an urgent or purposeful profile. It will emerge in retrospect that these bars constitute a metaphorical upbeat to the movement's beginning proper. Their purpose is to expose a number of key motifs: a syncopated figure or halting rhythm played by cellos and horns; a tolling-bell figure played by the harp (which includes a [025] trichord suggesting pentatonic affiliation); a sad phrase played by horns based on a 6̂–5̂ melodic gesture (6̂–5̂ frames bars 3–4 and then is stated directly in bars 5–6²); and a rustling or palpitation in the viola (which, although figured functionally as accompaniment, is nevertheless essential to the movement's discourse). The summary effect is atmospheric and descriptive; there is a sense that nothingness is being replaced.4
Example 8.1. Opening bars of Mahler's Ninth, first movement (Andante comodo).
Unit 1 not only sets out the principal motives of the movement, it will return in a recomposed guise to begin the so-called development section (unit 10). It will also be heard at the climax of the movement (unit 24). Nothing in this initial presentation allows us to predict subsequent functions and transformations: nothing, perhaps, except the embryonic manner of the beginning, which suggests a subsequent
4. I have borrowed Deryck Cooke's descriptive phrases for unit 1 because they seem particularly apt. See Cooke, Gustav Mahler: An Introduction to His Music (London: Faber, 1980), 116–117.
presentation of more fully formed ideas. Significant, too, are the relative autonomy and separateness of the passage (bars 1–6 were crossed out of Mahler's autograph score at one point during the compositional process; they were later restored), its spatial rather than linear manner, and its fascinating and unhierarchic display of timbres in the manner of a Klangfarbenmelodie. If, in spite of the unit's spatial tendencies, one is able to attend to its overall tonal tendency, then pitch-class A will emerge as anchor, as a source of continuity throughout the passage.
Unit 2 (bars 6⁴–17⁵)
The main theme (or subject) is exposed here. (Example 8.2 quotes the melody.)
The material is in song mode, unfolds in a leisurely manner, and carries an air of
nostalgia. The unhurried aura comes in part from the grouping of motives: two
notes and a rest, two notes and a rest, threefold articulation of two notes and a
rest, fivefold articulation of four notes followed by a rest, three notes and a rest, the
same three notes and a rest, and finally an expansive gesture in which seven notes
are offered in 13 attack points. The melody seems to go over the same ground even
as it gradually and subtly breaks out of its initial mold.
Example 8.2. Main theme of Mahler's Ninth, first movement, bars 7–16.
The theme opens with the 3̂–2̂ melodic gesture that will dominate the movement. The sound term 3̂–2̂ is a promise, of course, because it is syntactically incomplete. This is not a uniquely Mahlerian rendition; 3̂–2̂ bears over a century of conventional use, most famously in the first movement of Beethoven's Les Adieux Sonata, op. 81a, and in "Der Abschied" from Mahler's own Das Lied von der Erde. In this unit, 3̂–2̂ appears first as a beginning; then, in bars 14–18 (the ending of the unit), it appears in the context of an ending.
The once-upon-a-time quality conveyed by the string melody and its horn associates suggests a movement of epic proportions. Among other things, the ostinato bass in bars 9–13 underlines the largeness of the canvas. Significant, too, is the muting of leading-tone influence such that the harmonic ambience, pandiatonic rather than plainly diatonic, acquires an accommodating feel rather than a charged profile. Among other features, 6̂ is incorporated into the tonic chord; some sonorities contain unresolved appoggiaturas while others suggest a conflation of tonic and dominant functions. Unit 2 ends incompletely. The dialogue between second violins (carrying the main melody) and horns (singing a countermelody in bars 14–17) is dominated by gestures promising closure. The listener carries forward an expectation for eventual fulfillment.
as we will see, continues from where the 2–3 pair left off, introducing a new level of narration that, however, is soon truncated. Mahlerian narrative as displayed in these first five units is based on networks of activity in which leading ideas and processes vie for attention. While it is of course possible to extract a Hauptstimme, claims for melodic priority are made by different instrumental parts. What we are dealing with, then, are degrees of narrativity (as Vera Micznik calls them).5 For example, the simple act of melodic telling in units 2 and 3 is transformed into a more communal activity in unit 4. Unit 1, likewise, effected the manner of a constellation, a communality by default rather than by design. (I'm talking not about the composer's intentions but about the tendency of the musical material.) There is, as always in Mahler, a surplus, an excess of content over that which is needed to establish a narrative line.
Unit 5 (bars 46⁶–54²)
This appearance of the main theme incorporates motifs from the prelude (unit 1), including the palpitating sextuplet figure introduced in bar 5 by the violas and entrusted now to double basses and bassoons, and the sad phrase announced by the horns in bars 4–5, which is now given to the trumpets (bars 49–50). The music makes a deceptive close in bars 53–54 by means of a V–vi motion, thus opening up the flat side of the tonal spectrum. We recall the juxtaposition of D major and minor in units 3–4 and project the thought that similar dualities will carry a fair amount of the movement's dynamic.
The aftermath of the cadence in bars 53–54 carries a strong sense of codetta at first, but a new melodic idea in a different tonal environment (bars 57 onward, in B-flat major) confers a sense of beginning rather than ending. This conflation of functions is partly why I have located the beginning of unit 6 in bar 54; the join between units 5 and 6 is porous.
Unit 6 (bars 54³–63³)
Although some of the motivic elements on display here have been adumbrated in previous units, the overriding tendency is that of dream or recall, as if a lost song (sung by second violins and flutes) were filtering through. The moment has the character of a parenthesis, a tributary to the narrative. This feature will emerge even more forcefully when oboes and violins barge in at bar 63 to resume the main narration. We may suspect that a certain amount of tonal business is being transacted here (the key of B-flat lies a third away from D), but we are not yet in a position to know for sure. The ending of this unit illustrates Mahler's penchant for problematizing conventional syntax: what begins as a 6/4-induced cadential
gesture in B-flat (bars 62–63) is abandoned or cut short. We may with some confidence speak of an interruption or even a disjunction between units 6 and 7. (Analysts inclined to find continuity will, of course, succeed: the pitch B-flat provides a link between the two units.)
5. Vera Micznik, "Music and Narrative Revisited: Degrees of Narrativity in Mahler," Journal of the Royal Musical Association 126 (2001): 193–249.
Unit 7 (bars 64–79)
This begins as another version of the main theme, featuring the promise or farewell motif, 3̂–2̂. Continuation is modified, however, to lead to extended closure.
From bar 71 onward, a network of closing gestures within diatonic polyphony generates expectations for something new. We may well feel that the work of the main
theme (as heard in units 2, 3, 5, and now 7) is nearly done and that something new
needs to happen to counter the pervasive sense of stasis. The neat juxtaposition of
major and minor in bars 77 and 78, respectively, sums up the modal history of the
movement so far without, however, giving the impression that unit 7 is capable of
resolving all of the tensions accumulated.
Unit 8 (bars 80–91)
This unit starts on, rather than in, B-flat. Among other previously heard motifs, it incorporates the movement's main contrasting material, the chromatic idea first introduced as unit 4, where it was grounded on D. Here, the ground shifts to B-flat, but the melodic pitches remain the same, a use of pitch invariance that is not common in Mahler. A strong dominant sense in 86–87 conveys the imminence of closure. All of the signs are that we will close in B-flat, a key that has made
only sporadic appearance so far (most notably in unit 6), but this expectation is not fulfilled. What seems pertinent here, and is entirely in keeping with Mahler's metamusical impulses, is not the tonicization of a specific scale degree but a more primal desire for some sort of cadential articulation. Mosaic-like motivic construction thus combines with cadential suggestiveness to mark this unit as an on-its-way gesture.
Unit 9 (bars 92–107)
The closing gesture introduced at the end of the previous unit is intensified with this marked off-beat passage (example 8.3), which will return at subsequent moments of intensification. Reinforced by a cymbal crash and underpinned by a circle-of-fifths progression (example 8.4), the unit prepares a cadence in B-flat. At bar 98,
a subdominant-functioning chord further signals closure, but once again normative continuation is withheld. In other words, while the spirit of intensification is
kept up, the conventional supporting syntax is denied. Eventually, material marked
allegro (bar 102) rushes the unit to a dramatic conclusion in bar 107. This moment
is marked for consciousness by its 6/3 harmony on B-flat. It is something of a
transgressive gesture, for in place of the longed-for stable cadential close, Mahler
supplies an unstable ending. Indeed, the 6/3 chord points, among other things, to
a possible E-flat cadence (see the hypothetical progression in example 8.5). An
upbeat rather than a downbeat, a precadential harmony rather than a cadential
close, this moment reeks of abandonment. Mahler writes in a double bar at 107 to
mark off a major segment of the work. Were this not also the end of the exposition, the strategic violence of refusing forward drive would be less intense.
Example 8.3. Intensifying phrase, Mahler's Ninth, first movement, bars 92–95.
Taking stock: the alternation between the main theme (in D major) and the subsidiary theme (in D minor) establishes a large-scale structural rhythm that leads the listener to imagine a work of large proportions. So far, the mosaic-like circulation of motifs together with the absence of purposeful tonal exploration have combined to undermine any expectations we might entertain for clear sonata-form articulation. The movement in fact forges its own path to understanding. We finish the exposition with the summary understanding that a diatonic, nostalgic idea (the main theme), repeated in varied form, alternates with a tormented (Cooke's word) chromatic idea, which retains some pitch invariance in repetition while adapting to new tonal situations. We are also haunted by a third, subsidiary element, the B-flat major theme (unit 6), which opened the door to the otherworldly.
A paradigmatic arrangement of the units would look like figure 8.1.
[Figure 8.1 content not recoverable; surviving column labels: 2, 3, 5, 7; 6; 8.]
The arrangement confirms that things are weighted toward the main theme (a fourfold occurrence, units 2, 3, 5, and 7) and that the contrasting theme makes a twofold appearance (units 4 and 8). Of course, there is more to the formal process than this external disposition of materials. For example, a teleological process embodied in the 2–3–5–7 sequence contributes an accumulative sense, but this is not readily observable from a simple paradigmatic arrangement. Similarly, affiliations between the relatively autonomous opening unit (1) and subsequent units (perhaps most notably, units 8 and 9) may not be readily inferred. The experience of form in Mahler is often a complex business, depending as much on what is not said as on what is said and how it is said. To take Mahler on his own is to risk an impoverished experience. Even in a work like the first movement of the Ninth, where intertextual resonance is less about thematic or topical affiliation, an internally directed hearing will still have to contend with various dialogues. For example, without a horizon of expectations shaped by sonata form, rondo form, and theme and variations, one might miss some of the subtle aspects of the form. These categories are not erased in Mahler; rather, they are placed under threat of erasure, by which I mean that they are simultaneously present and absent, visible but mute. Mahler demands of his listeners the cultivation of a dialogic imagination.
Unit 10 (bars 108–129³)
Immediately noticeable in this unit is the activation of the speech mode. Motivic
development often takes this form since it in effect foregrounds the working
mode, the self-conscious and ongoing manipulation of musical figures without the
constraint of regular periodicity. The unit is marked at the beginning by a strong
motivic association with the opening of the work: the syncopated figure leads off
(bars 108–110), followed by the tolling-bell motif in bars 111–112 and in the following. Soon, other elements join in this loose recomposition of unit 1. Manipulation, not presentation, is the purpose here. From the sparseness of texture, we
infer that several of the motifs seem to have traveled a considerable expressive
distance. The working mode gives way to a codetta sense around bar 117, where
a new figure in oboes, English horn, violas, and cellos conveys the gestural sense
of closure without, however, supplying the right syntax. This sense of closing will
persist throughout units 10 and 11 before being transformed into an anticipatory
feeling at the start of unit 12 (bar 136). In a sense, units 10, 11, and 12 constitute a
CHAPTER 8
Mahler
263
is tonally unstable in spite of the flirtation with G major from bar 182 on. Proceedings intensify from bar 174 onward. Previously heard motives are laboriously
incorporated in a full contrapuntal texture, and this working mode (a struggle of
sorts) continues through the rest of the unit, reaching something of a local turning point at the beginning of the next unit.
Unit 15 (bars 196–210)
As in its previous appearance, this off-beat, syncopated passage, underpinned by
bass motion in fifths, signals intensification, culminating this time in the first high
point of the development (bar 202). Again, the bass note is D, and so the high point
is produced not by the tension of tonal distance but by activity in the secondary
parameters of texture and dynamics. In the aftermath of the high point, sixteenth-note figures derived from the movement's contrasting idea (heard also in unit 10,
bars 121–124) are incorporated into the accompaniment, preparing its melodic
appearance in the next unit.
Unit 16 (bars 211–246¹)
The affectively charged material that functions as the main contrasting material in
the movement appears once again. When it was first heard (unit 4), it sported a D
minor home. Later, it appeared with the same pitches over a B-flat pedal (unit 8).
On this third occasion, pitch invariance is maintained, and the B-flat pedal of the
second occurrence is used, but the modal orientation is now minor, not major.
This material signals only the beginning of a unit whose purpose is to advance
the working-through process. It does so not, as before, by juxtaposing different
themes in a mosaic-like configuration, but by milking a single theme for expressive
consequence. Throughout this unit, string-based timbres predominate and lines
are often doubled. (The string-based sound provides a foretaste of the finale, while
also recalling the finale of Mahler's Third Symphony.) In its immediate context,
the materials signal an act of cadencing on a grand scale. The approach in bar 215
is not followed through, however; nor does the local high point at 221 discharge
into a cadence. Finally, at 228, the syntax for a cadence on B-flat is presented in the
first two eighth-notes, but nothing of the durational or rhetorical requirements
for such a cadence accompanies the moment. The goal, it turns out, is none other
than the tonic of the movement, and this cadence occurs at 235–236, although the
momentum shoots past this moment and incorporates a series of reminiscences
or postcadential reflection.
Although key relationships are explored from time to time in this movement
(as Christopher Lewis has shown),6 the overall drama does not depend fundamentally on purposeful exploration of alternative tonal centers. The tonic, be it
in a major or minor guise, is never far away; indeed, in a fundamental sense, and
despite the presence of third-related passages in B-flat major and B major, the
6. Christopher Lewis, Tonal Coherence in Mahler's Ninth Symphony (Ann Arbor, MI: UMI Press, 1984).
movement, we might say, never really leaves the home key. It concentrates its labor
on thematic, textural, and phrase-structural manipulation.
Unit 17 (bars 245⁴–266)
The tremolo passage from unit 12 returns to announce a coming thematic stability.
A constellation of motifs (including the 3̂–2̂ gesture, the augmented triad, and the
chromatic descent) accompanies this announcement.
Unit 18 (bars 266⁴–279²)
The main theme is given melodramatic inflection in the form of a solo violin utterance (bars 269–270). Solo timbre sometimes signals an end, an epilogue perhaps;
sometimes the effect is particularly poignant. This overall sense of transcendence
is conveyed here even as other motifs circulate within this typical Mahlerian constellation.
Unit 19 (bars 279²–284)
Previously associated with moments of intensification, this syncopated passage
appears without the full conditions for intensification. This is because the preceding unit was stable and assumed a presentational function. (A continuing or
developmental function would have provided a more natural or conventional
preparation.) Only in its last 2 bars (277278) was a token attempt made to render
the beginning of the next unit nondiscontinuous. In one sense, then, this intensifying unit functions at a larger level, not a local one. It harks back to its sister passages and reminds us that, despite the sweet return of the main theme in bar 269,
the business of development is not yet finished. In retrospect, we might interpret
unit 18 as an interpolation.
Unit 20 (bars 284⁴–295¹)
An ascending, mostly chromatic bass line in the previous unit (E–F–A–A–A)
discharges into the initial B major of this one. Horn and trumpet fanfares
activate the thematic dimension, as does the sad phrase from the opening bars of
the work, now in a decidedly jubilant mood (bar 286).
Unit 21 (bars 295²–298)
We single out this 4-bar phrase as a separate unit because we recognize it from previous occurrences. In context, however, it is part of a broad sweep begun in unit 20
that will reach a climax in unit 24. Part of what is compositionally striking about
this moment is that a gesture that seemed marked in its three previous occurrences
(units 11, 15, 19) now appears unmarked as it does its most decisive work. Here, it
is absorbed into the flow, neutralized by name, so to speak, so that we as listeners
can attend to the production processes.
[Figure 8.2: paradigmatic arrangement of units 11–24; unit 23 appears as 23/1 and 23/2]7
7. Unit 23 is divided into two because the parts are affiliated with different materials. It is, however,
retained as a single unit because its overall gesture seems continuous and undivided.
the temporal and experiential realms. Temporal novelty in turn denies material
sameness. But unmitigated difference is psychologically threatening, for without
a sense of return, without some anchoring in the familiar, music simply loses its
ontological essence. Mahler understood this keenly, following in the spirit of his
classical predecessors. But, like Brahms before him, recapitulation for Mahler was
always a direct stimulus to eloquence, artistic inflection, and creativity. We never
say the same thing twice. To do so would be to lie. Hearing a recapitulation, then,
means attending to sameness in difference or, rather, difference in sameness. While
our scheme of paradigmatic equivalence glosses over numerous details, it nevertheless orients us to gross levels of sameness that in turn facilitate individual acts
of willed differentiation.
Unit 26 (bars 356⁴–365²)
Unit 25 is repeated (making the 25–26 succession the equivalent of the earlier 2–3),
beginning in a higher register and with the first violins in the lead. The period ends
with a deceptive cadence on B-flat as flattened-sixth (bars 364–365), thus leaving things open. The drive to the cadence incorporates a rising chromatic melody
(bars 363–365) reminiscent of bars 44–46, but the cadence is deceptive rather than
authentic.
Unit 27 (bars 365²–372²)
The pattern of bass notes suggests that we hear this unit as an expanded parenthesis, a dominant prolongation. Beginning on B-flat, the music shifts down through
A to G-sharp (bar 370) and then back up to A as V of D. The themes are layered. A
version of the main theme occurs in the violas and cellos, while a subsidiary theme
from bar 54 (cellos) is now elevated to the top of the melodic texture. The last 2
bars of this unit (371–372) derive from unit 7 (bars 69–70). In short, if unit 26
offered a sense of tonic, unit 27 prolongs that tonic through chromatic neighbors
around its dominant.
An aspect of the production process that this unit reveals is Mahler's attitude
toward recapitulation. In certain contexts, reprise is conceived not as a return to the
first statement of a particular theme but as a return to the paradigm represented
by that theme. In other words, a recapitulation might recall the developmental version of a theme, not its expositional version, or it may recall an unprecedented but
readily recognizable version. In this way, the movements closing section incorporates references to the entire substance of what has transpired. This is Mahlerian
organicism at its most compelling.
Unit 28 (bars 372²–376²)
The bass note A serves as a link between units 27 and 28, finding resolution to D
in the third bar of this unit. The thematic field being recapitulated is the tormented
idea first heard as unit 4. This lasts only 4 bars, however, before it is interrupted by
a spectacular parenthesis. Meanwhile, the brasses recall the fanfare material.
then minor, unstable because, while the underlying syntax is there, the rhetorical
manner is too fragmentary to provide a firm sense of a concluding tonic.
Unit 32 (bars 406³–433)
The movement seems fated to end in chamber-music mode! A change of tempo, a
reduction in orchestral forces, and a turning up of the expressivity dial all combine
to suggest closing, dying, finishing. A single horn intones the syncopated idea that
we have associated with moments of intensification (bar 408²), and this is succeeded by fanfares and sighs. Between 406 and 414, the harmony remains on D;
then, it shifts for 2 bars on to its subdominant, a moment that also (conventionally)
signifies closure, before getting lost again in a mini flute cadenza (bars 419–433).
An E-flat major chord frames this particular excursion (419–432), and although
it may be read as a Neapolitan chord in the home key, hearing it as such would
be challenging in view of the way we come into it. The retention of a single voice
during the closing process is a technique that Mahler will use spectacularly in the
closing moments of the opening movement of his Tenth Symphony.
Unit 33 (bars 434–454)
The main theme returns for the last time. This is the stable, foursquare version, or
so it begins, before it is liquidated to convey absolute finality. The most spectacular
feature of this unit is the choreographing of closure through a sustained promise
that the 3̂–2̂ motion will eventually find its 1̂: 3̂–2̂ within a single bar (444), then
3̂–2̂ across a bar line with longer note values (446³–447²), and finally 3̂–2̂ spread
over 6 bars (2̂ occupies 5 of those bars). And just when the listener is resigned to
accepting a syntactically incomplete gesture as a notional ending, the long-awaited
1̂ finally arrives in the penultimate bar of the movement (453), cut off after a
quarter-note in all instruments except clarinet, harp, and high-lying cellos. The
3̂–2̂–1̂ gesture for which we have been waiting since the beginning of the movement
finally arrives. But, as often with Mahler, the attainment of 1̂ is
problematized by two elements of discontinuity, one timbral, the other registral. A
conventional close might have had the oboe reach the longed-for 1̂ on D, a major
second above middle C in bar 453. But D arrives in two higher octaves simultaneously, played by flute, harp, pizzicato strings, and cellos. And, as if to emphasize
the discontinuity, the oboes extend their E into the articulation of 1̂ by the other
instruments on the downbeat of bar 453, thus creating a momentary dissonance
and encouraging a hearing that accepts 2̂ as final. Of course, the higher-placed
Ds (in violins and violas) and longer-lasting ones (in flutes and cellos) dwarf the
oboes' E, so the ultimate hierarchy privileges 1̂ as the final resting place. It is hard
to imagine a more creative construction of an equivocal ending.
Listeners who have not forgotten the material of the movement's opening unit
may be struck by how it is compressed in the last two units. At the beginning,
a syntactic element in the form of a dominant prolongation provided the background for a free play of timbres and a mosaic-like exhibition of motives. Only
when the narration proper began in the second violin in unit 2 did the various
strands coalesce into a single voice. In these last bars, much is done to deny the
integrity of simple melodic closure; indeed, it may even be that not until we experience the silence that follows the weakly articulated Ds in bars 453–454 are we
assured that a certain contrapuntal norm has been satisfied.
Form
By way of summary, and again recognizing the limitation of this mode of representation, we may sketch the larger shape of the movement as in figure 8.3.
Several potential narratives are enshrined in this paradigmatic chart. And this
is an important property of such charts, for although they are not free of interpretive bias, they ideally reveal conditions of possibility for individual interpretation.
To frame the matter this way is to emphasize the willed factor in listening. This is
not to suggest that modes of interpretation are qualitatively equal. For example,
some may disagree about the placement of boundaries for several of the units isolated in the foregoing analysis. Indeed, in a complex work like the first movement
of Mahler's Ninth, giving a single label (as opposed to a multitude of labels in a
network formation) may seem to do violence to thematic interconnectedness and
the numerous allusions that constitute its thematic fabric. The issue is not easily
resolved, however, because segmentation resists cadence-based classical rules. To
say that the movement is one continuous whole, however, while literally true on
some level, overlooks the variations in intensity of the work's discourse. It seems
prudent, then, to steer a middle course: to accept the idea of segmentation as
being unavoidable in analysis and to approach the sense units with flexible criteria.
What has been attempted here is a species of labeling that recognizes the potential
autonomy of individual segments that might make possible a series of associations.
In the end, paradigmatic analysis does not tell you what a work means; rather, it
makes possible individual tellings of how it means. Those who do not mind doing
the work will not protest this prospect; those who prefer to be fed a meaning may
well find the approach frustrating.
The most literal narrative sanctioned by the paradigmatic approach may be
rehearsed concisely as follows. Preludial material (1) gives way to a main theme
(2), which is immediately repeated (3) and brought into direct confrontation with
a contrasting theme (4). The main theme is heard again (5) followed by a subsidiary theme (6). The main theme appears again (7) and now heads a procession that
includes the contrasting theme (8) and a new idea that functions as an intensifier
(9). The preludial idea returns in a new guise (10), followed by the first subsidiary
theme (11) and yet another new idea (12), all in the manner of development or
working through. The main theme is heard again (13), followed by the exposition's
subsidiary and intensifying themes (14, 15), the contrasting idea (16), and the second subsidiary theme (17). Again, the main theme is heard (18), followed, finally,
by its most prolonged absence. Starting with the intensifying theme (19), the narrative is forwarded by materials that seem distinct from the main theme (20, 21,
22, 23/1). Units 23/2 and 24 merge into each other as the preludial material returns
in a rhetorically heightened form (24). This also marks the turning point in the
movement's dynamic trajectory. The rest is recall, rhyme, flashback, and the introspection that accompanies reflection, following closely the events in the first part
(25, 26, 27, 28, 29, 30, 31, 32, 33), while incorporating an extended parenthesis in
the form of a cadenza (29).
an intensifier at important junctures. Its first occurrence is at the end of the exposition (9). Then, as often happens in Beethoven, an idea introduced almost casually or offhandedly at the end of the exposition becomes an important agent in
the exploratory business of development. Mahler uses this idea four times in the
course of the development, making it the most significant invariant material of
the development. This is perhaps not surprising since the unit displays an intrinsic
developmental property. It is worth stressing that, unlike the main and contrasting themes, this syncopated idea lacks presentational force; rather, it is an accessory, an intensifier.
One factor that underlines the movement's coherence concerns the role of unit
1, a kind of source unit whose components return in increasingly expanded forms
as units 10 and 23/2–24. Unit 1 exposes the movement's key ideas in the manner
of a table of contents. Unit 10 fulfills a climactic function while also initiating a
new set of procedures, specifically, the deliberate manipulation of previously held
(musical) ideas. Finally, unit 23/2 marks the biggest high point of the movement.
The trajectory mapped out by the succession of units 1–10–23/2–24 is essentially
organic. Unlike the main theme, which in a sense refuses to march forward, or
perhaps accepts that imperative reluctantly, the syncopated rhythm that opens the
work is marked by a restless desire to go somewhere (different).
The paradigmatic chart is also able to convey exceptions at a glance. Columns
that contain only one item are home to unduplicated units. There is only one such
element in this movement: the misterioso cadenza (unit 29) in the recapitulation,
which does not occur anywhere else. This is not to say that the analyst cannot trace
motivic or other connections between unit 29 and others; the augmented triad at
bar 378⁴, for example, is strongly associated with the tormented material we first
encountered as unit 4; indeed, this material will return at the start of the next unit
in bar 391. It is rather to convey the relative uniqueness of the unit in its overall
profile. Within the development space, four units seem materially and gesturally
distinct from others in the movement. They are 12 and 17 and to a lesser extent
20 and 22. Units 12 and 17 are preparatory, tremolo-laden passages, complete with
rising chromaticism that intrinsically signals transition or sets up an expectation
for a coming announcement or even revelation. Although they incorporate the
brass fanfare from unit 4 (bars 44–45), units 20 and 22 feature a marked heroic
statement in B major that, alongside the syncopated intensifying passage (units 19,
21, 23), prepares the movements climax (unit 23).
Also evident in the paradigmatic chart is the nature of community among
themes. The prelude and main theme are associated right at the outset (1–2); they
are also allied at the start of the recapitulation (24–25) but not in the development
section. The main theme and the contrasting idea are strongly associated throughout the exposition, but not as strongly in the development or recapitulation; they
find other associations and affinities.
It is also possible to sense a shift in thematic prioritization. Units 6 and 9
appeared in a subsidiary role in the exposition. During the development, unit 6
assumed a greater role, while unit 9 took on even greater functional significance.
In the recapitulation, they returned to their earlier (subsidiary) role (as 27 and 32),
making room for the 34 pair to conclude the movement.
Meaning
The ideal meaning of the first movement of Mahler's Ninth is the sum total of
all of the interactions among its constituent elements, a potentially infinite set
of meanings. Although there exists a relatively stable score, implied (and actual)
performances, and historically conditioned performing traditions, there are also
numerous contingent meanings that are produced in the course of individual
interpretation and analysis. Acts of meaning construction would therefore seek a
rapprochement between the stable and the unstable, the fixed and the contingent,
the unchanging and the changing. In effect, they would represent the outcome of
a series of dialogues between the two. Refusing the input of tradition, convention,
and style amounts to denying the contexts of birth and afterlife of the composition,
contexts that shape but do not necessarily determine the specific contours of the
trace that is the composition. At the same time, without an individual appropriation of the work, without a performance by the analyst, and without the speculative acts engendered by such possession, the work remains inaccessible; analysis
ceases to convey what I hear and becomes a redundant report on what someone
else hears.
Of the many approaches to musical meaning, two in particular seem to dominate contemporary debate. The first, which might be dubbed intrinsic, derives from
a close reading of the elements of the work as they relate to each other and as
they enact certain conventions. Analysis focuses on parameters like counterpoint,
harmony, hypermeter, voice leading, and periodicity, and it teases out meaning
directly from the profiles of their internal articulations and interactions. One way
to organize the mass of data produced by such analysis is to adopt an umbrella
category like closure. As we will see, the first movement of Mahler's Ninth provides many instances of closure as meaningful gesture. (Intrinsic is affiliated with
formalist, structuralist, and theory-based approaches.)
The second approach to meaning construction is the extrinsic; it derives from
sources that appear to lie outside the work, narrowly defined. These sources may be
with V as an open unit, then continues and ends with the same tendency. In
the case of units 15–16, intensification does lead to resolution; or rather, resolution involves a unit marked by instability. Therefore, while the sense of that
particular succession (15–16) is of tension followed by release, the latter is not
the pristine, diatonic world of the main theme but the more troubled, restless,
and highly charged world of the subsidiary theme. In the successions of units
19–20 and 21–22, closure is attained, but an element of incongruity is set up
in the case of 19–20 because the key prepared is E-flat major (bars 281–283)
while the key attained lies a third lower, B major (bar 285). Although B major
is reached by means of a chromatic bass line, it is, as it were, approached by the
wrong dominant. This is immediately corrected, however, in the case of units
21–22, where the intensifying phrase leads to closure in its own key. Given the
proximity of the occurrences, units 19–22 represent a large moment of intensification, a turning point. In other words, the local tension-resolution gestures
I have been describing are agents in a larger tension-creating move within the
macro form.
The most dramatic use of this intensifying phrase is embodied in the succession of units 23–24, which parallel 9–10 in syntactic profile but differ fundamentally from a rhetorical point of view. Unit 23/1 is the third occurrence of the phrase
within a dramatically accelerating passage, the culmination of a stretto effect produced by the rhythm of the phrase's temporal placement. This last then discharges
into the movement's climax, units 23/2–24, which, as we have noted, replay the
syncopated rhythm from the opening of the work. There is no more dramatic
moment in the movement.
The intensifying phrase seems spent. Not surprisingly, it appears only once
more in the movement, completely expressively transformed (unit 32). The great
stroke in this ending lies in the conjunction of units 32–33. The cadence-promising unit 32 finally finds resolution on the stable main theme of the movement.
Perhaps this is the destination that has been implied from the beginning, from
the first hearing of unit 9. If so, the listener has had to wait a very long time for
the phrase to find its true destination: as ending, as ultimate fulfillment, perhaps
as death.
Listeners who base their understanding of the form of the movement on the
trajectory mapped out by this intensifying phrase will conclude that this is one of
Mahler's most organic compositions. True, the phrase works in conjunction with
other material, so the organicism is not confined to one set of processes. Indeed,
the large-scale dynamic curve that the phrase creates is partly reinforced and
partly undermined by concurrent processes. Reinforcement comes, for example,
from the expanding role of the prelude (units 1, 10, and 24). On the other hand,
the recurrences of the main theme lack a patent dynamic profile. The theme seems
to sit in one place; it returns again and again as if to assure us of its inviolability.
Meaning in Mahler is at its most palpable when dimensional processes produce
this kind of conflict in the overall balance of dimensional tendencies. By focusing
on the production processes, paradigmatic analysis facilitates the construction of
such meanings.
Narrative
Mahlerian narrative is melody-led. The leading of melody takes a number of
forms, ranging from a simple tune with accompaniment to a more complex texture in which a Hauptstimme migrates from one part of the orchestra to another.
Melody itself is not necessarily a salient tune but a more diffuse presence that
is understood as embodying the essential line or idea within a given passage.
Melody-led narration occurs on several levels, from the local to the global. What
such tellings amount to may be divined differently and put to different uses. At the
most immediate level, narrative points to the present in an active way; it shows
the way forward. Its meaning is hard to translate satisfactorily out of musical
language, but it can be described in terms of pace and material or rhetorical style
(which includes degrees of emphasis, repetition, and redundancy and a resultant
retrospective or prospective quality).
Mahlerian narrative typically occurs on more than one level, for the leading of
melody cannot in general be understood without reference to other dimensional
activities. Since it would be tedious to describe all of the facets of narrative in
Mahler, let me simply mention a few salient features.
The melody played by first violins as the main theme starting in bar 7 (quoted
in example 8.1) is in song mode, although its little motivic increments, by eschewing the long lines found elsewhere, suggest the halting quality of speech mode.
Notice that the horns are in dialogue with the strings and thus effect a complementary song mode. Coming after a spatially oriented (as distinct from a temporally
oriented) introduction which had no urgent melodic claims (bars 1–6), this song
signifies a beginning proper. An important feature of narrative phrases is how they
end, whether they conclude firmly by closing off a period or remain open and
thus elicit a desire for continuation.
How might we describe the narrative progress of this melody? An opening
idea, 3̂–2̂, with a closing or downward tendency, is repeated. It continues with
the same rhythmic idea (incorporating an eighth-note) but changes direction and
heads up. This idea, too, is repeated in intensified form, incorporating an appoggiatura, B-natural. The pitch B takes on a life of its own, acquiring its own upbeat.
This gesture is repeated. These two little phrases mark the tensest moments in the
arc generated by the melody so far. Resolution is called for. Next comes an expansive phrase, marked espressivo by Mahler, which fulfills the expectations produced
so far, engendering a high point or superlative moment. This culminating phrase
ends where the narration began, with a 5̂–4̂–3̂–2̂ pattern, the last two scale degrees
literally reclaiming the openness of the melody's point of departure. Whether one
hears a half-cadence, or the shadow of a half-cadence, or a circular motion, or an
abandoned process, it is clear that the process of the phrase is kept open. We hear
3̂–2̂ again, even though it now carries the sense of an echo, as if we had begun a
codetta, and it is then repeated in slightly modified form. The entire phrase carries a powerfully unified sense, leaving no doubt about which voice is the leading
one. We could complicate the foregoing description by pointing to the motivic
interplay (among horns, clarinets, bassoons, and English horn) that animates this
more primal string melody. However, these only contribute a sense of narration as
a more communal affair; they do not undermine the fact of narration in the first
place.
First violins literally take over in bar 18 as the tellers of the tale. We know that
the tale is the same but also that the teller is new. Novelty comes from the change
of register, beginning roughly an octave higher than where the previous period
began. The new teller will not merely repeat the previous telling; she must establish her own style of saying the same thing. The use of embellishments facilitates
this personalization of the narrators role. But the pacing of motivic exposition is
kept the same. Perhaps this phrase will be more expressive on account of its being
a repeat. Its most dramatic feature is the 3̂–2̂–1̂ close (bars 24–25), which fulfills
the promise made in the previous phrase. Fulfillment is tinged with equivocation,
however. A structural dissonance at the phrase level arises as we move into an
adjacent higher register. Registral imbalance will need to be resolved eventually.
After the attainment of 1̂ (bar 25), the promissory 3̂–2̂ motive is kept alive, undermining any sense of security brought on by this close. The 3̂–2̂ in the major is
replaced by 3̂–2̂ in the minor to signal a new thematic impulse. What unit 3 tells is
mostly the same as what was told in unit 2, but the difference lies in the fact that
unit 3 resolves some of the cadential tension exhibited in the previous unit while
introducing its own new tensions. In this way, the process of the music is kept
open and alive. Parallel periods do not merely balance one another; antecedents
and consequents may provide answers on one level, but new questions often arise
when old ones are being laid to rest. The direct pairing of units 2 and 3 draws us
into this comparative exercise.
CHAPTER Nine
form. But the expectations associated with sonata form are rather complex, for
surely Beethoven's understanding of the form was an evolving one, not a fixed
or immutable one. Analysis errs when it hypostatizes such understanding, when, relying on a scheme fixed on paper, it fails to distinguish Beethoven's
putative understanding of sonata form in, say, 1800 from his understanding in
1823. To the extent that an invariant impulse was ever enshrined in the form, it
resides in part in an initial feeling for a stylized contrast of key (and, to a lesser
extent, thematic material), which is then reconciled in an equally stylized set of
complementary moves. From here to the specifics of op. 130 is a long way, however.
In what follows, I will retain sonata-form options on a distant horizon (as place
markers, perhaps) while concentrating on the movement's materials and associated
procedures.
As a point of reference for the analysis, examples 9.1–9.15 provide a sketch of the main materials of the movement, broken into 15 sections and comprising a total of 79 units, including subdivisions of units. These are mostly melodic ideas,

Example 9.1. Units 1–11 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
and they are aligned in order to demonstrate their affiliations. Although several
additional units produced by nesting, recomposition, or motivic expansion could
be incorporated, the segmentation undertaken here should be adequate for a first
pass through the movement.
Exposition
Units 1 (bars 1–2²) and 2 (bars 2²–4²)
The movement opens with a complementary pair of units. The first unit begins
as a unison passage and is transformed after the fourth note into a full-voiced
chorale, ending as a question mark (on V). The second answers the first directly,
retaining the four-part harmonization and finishing with a perfect cadence, albeit
of the feminine rather than masculine variety. This pairing of units is both
closed and open. Within the harmonic and phrase-gestural domain, the succession is closed and balanced; within the registral domain, however, it remains open
because unit 2 lies in a higher register. Posing a question in a lower register and
answering it in a higher one introduces an incongruity or imbalance that provokes registral manipulation later in the movement. Note also that the contrast
between unharmonized (opening of unit 1) and harmonized (unit 2) textures
evokes a parallel contrast in modes of utterance. Something of the speech mode
may be inferred from the forced oneness of unison utterance; the ensuing hymn
brings on communal song.
Units 2a (bars 4³–5²), 2b (bars 5³–7¹)
Using the gesture at the end of unit 2 as a point of departure, unit 2a moves the
melodic line up and 2b takes it to F through E-natural, thus tonicizing the dominant. There is something speech-like about these small utterances, all of them set
apart by rests, as if enacting an and-then succession.
Units 3 (bars 7²–9¹), 3a (bars 9²–11¹), 3b (bars 10²–11¹), 3c (bars 11²–13¹)
From bar 7 onward, a new idea is presented in imitation, almost like a ricercar.
Voices enter and are absorbed into the ruling texture in an orderly way until we
arrive at a cadence on, rather than in, the dominant (bar 14). When first heard, units 1–3 (bars 1–14) seem to function like a slow introduction. We will see later,
however, that their function is somewhat more complex. For one thing, the slow
introduction is repeated when the exposition is heard a second time. Then also,
the material returns elsewhere in the movement, suggesting that what we are hearing here is part of a larger whole. Indeed, Ratner suggests that, if we assemble all of the slow material (bars 1–14, 20–24, 93–95, 98–100, 101–104, 213–220, and 220–222), the result is an aria in two-reprise form. It is as if Beethoven cut up a
compact aria and fed the parts into a larger movement to create two interlocking
If we overlook the suffixes, then the essential motion of the units is a 1–2–3 succession, a linear progression with no sense of return. If we include the suffixes,
we notice that elements of the second and third units are immediately reused. If
we consider the fact, however, that unit 2 is closely based on unit 1, departing only
at its end to make a perfect cadence, then we see that the slow introduction is
essentially binary in its gesture, the first 7 bars based on one idea, the next 7 based
on a different one. The prospects of segmenting these bars differently serve as an
indication of the fluid nature of tonal form and material and might discourage us
from fixing our segments too categorically.
Units 4 (bars 15²–16), 5 (bars 17²–18)
A change of tempo from adagio ma non troppo to allegro brings contrasting material. Unit 4 is layered, featuring a virtuosic, descending sixteenth-note figure (first
violin) against a rising-fourth fanfare motive. (I quote the fanfare motive but not
the sixteenth-note figure in example 9.1.) Although the two seem equally functional in this initial appearance, the fanfare will later be endowed with a more
significant thematic function while the sixteenth-note concerto figure will retain
its role as embroidery.
1. Ratner, The Beethoven String Quartets, 217.
2. "Here, you see, I cut off the fugue with a pair of scissors. . . . I introduced this short harp phrase, like two bars of an accompaniment. Then the horns go on with their fugue as if nothing had happened. I repeat it at regular intervals, here and here again. . . . You can eliminate these harp-solo interruptions, paste the parts of the fugue together and it will be one whole piece." (Quoted in Edward T. Cone, "Stravinsky: The Progress of a Method," in Perspectives on Schoenberg and Stravinsky, ed. Cone and Benjamin Boretz [New York: Norton, 1972], 164.)
Although units 4 and 5 occur as a pair, they are open; they begin a sequential
pattern that implies continuation. In classic rhetoric, a third occurrence of the fanfare motive would be transformed to bring this particular process to a close and
to initiate another. But there is no third occurrence to speak of; instead, unit 5 is
extended to close on the dominant, thus creating a half-cadence similar to the one
that closed the slow introduction (end of unit 3).
Units 6 (bars 20³–22²), 7 (bars 22³–24²)
It sounds as if we are back to the slow introduction (Beethoven marks Tempo I in the score at bar 20), only we are now in V rather than in I. At this point, we might begin to revise our sense of the emerging form. Oriented toward F as V (unit 7 sits entirely on an F pedal), the 6–7 pair of units reproduces the material of the 1–2 pair, but at a different tonal level. The presentation of the 6–7 pair is, however, modified to incorporate stretto in unit 6 and a distinct expansion of register in unit 7 (first violin).
Units 8 (bars 25²–27¹), 9 (bars 27²–31¹)
We hear the fanfare motive and its brilliant-style accompaniment as in units 4–5, but in keeping with the dominant-key allegiance established in units 6 and 7, units 8 and 9 now sound in V. Like unit 5, unit 9 extends its temporal domain while shifting the tonal orientation of the phrase.
Units 10 (bars 31²–32), 11 (bars 33²–37¹)
The fanfare motif is sung by the bass voice in the tonic, B-flat (unit 10), and is
immediately repeated sequentially up a second (as in units 4 and 5). Then, with a
kind of teleological vengeance, it is extended and modified to culminate decisively
in a cadence at bars 36–37. This is the first perfect cadence in the tonic since the beginning of the allegro. With hindsight, we can see that the fanfare motive from bars 15–16 held within itself the potential for bass motion. This potential is realized in unit 11.
When dealing with complex textures like that of op. 130, where one often has
to reduce textures in order to pinpoint the essential motion, it is well to state the
obvious: discriminatory choices have to be made by the analyst regarding the location of a Hauptstimme. The sense of fulfillment represented in the bass voice in
unit 11 bears witness to such choosing.
Let us now pause to take stock of the activity within the first key area (bars 1–38, units 1–11, figure 9.2). A clear hierarchy emerges in the concentrations of activity
within the thematic fields. The paradigm containing the fanfare motif leads (units
4, 5, 8, 9, 10, 11), followed by the imitative passage (units 3, 3a, 3b, 3c), then the
cadential motif (units 2, 2a, 2b, 7), and finally the inaugurating motif (units 1 and
6). This particular scale of importance comes strictly from the relatively abstract
perspective of our mode of paradigmatic representation; it says nothing about the
temporal extent of individual units nor their rhetorical manner.
Example 9.2. Units 12–16c of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Example 9.5. Units 25–27 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Example 9.6. Units 28–32 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Development
Beethoven begins the development in the same key in which he ended the exposition, namely, ♭VI. This will be a short development: 35 bars only, less than half the length of the preceding exposition.
Example 9.7. Units 33–37 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Often in Beethoven, the freezing of thematic novelty is a way of marking other processes for consciousness. In this case, the key scheme sports a rather radical progression of major thirds: B-flat for the first key area, G-flat for the second and for the start of the development, and now E double-flat or, enharmonically, D. The progression is radical in the sense that it eschews the hierarchy of tonic-dominant polarity for a more democratic and symmetrical set of major-third relations. Each of these four units is framed by silence, and although I have pointed to a thematic parallelism in the pairing of 33–34 against 35–36, the overall effect of their disposition is a strategic aloofness, a refusal to connect. The question posed in unit 33 is not answered by unit 34; rather, 34 goes about its own business, proposing its own idea in hopes of getting a response. Similarly, the question posed by unit 35 is not answered by unit 36. Of course, tonal continuity between 33–34 and 35–36 mediates the rejection of these opportunities for dialogue, but it is worth registering the mosaic-like construction and the rereading of familiar tonal gestures, moves that we have encountered in Mahler and will encounter again in Stravinsky.
Unit 37 (bars 101³–104¹)
Not for the first time in this movement, Beethoven extracts a cadential figure from the end of one unit (unit 35) and uses it to begin a subsequent one (37). In this case, the figure in question is repeated several times, as if revving up the developmental engine. Our attention is arrested; we hold our breaths for something new.
Units 38 (bars 106–109), 38a (bars 110–115), 38b (bars 116–122), 38c (bars 123–129), 38d (bars 130–132¹)
The main thematic substance of the development is a new, lyrical tune that begins
with an octave exclamation and then winds down in a complementary stepwise
gesture. The tune is first sung by the cello (bars 106ff.) to an accompaniment comprising three previously heard motives: the long-short figure isolated in unit 37,
an incipit of the brilliant-style sixteenth-note figure that originated in unit 4, and
its companion fanfare figure. The ethos in these measures is unhurried: a lyrical oasis, perhaps. Again, Beethoven's freezing of thematic accretion allows the listener
Example 9.8. Units 38–38d of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
to contemplate other elements of the discourse, in this case the key scheme, which begins with D major in unit 38, G major in units 38a and 38b, and C minor in 38c, and finally, disrupting the pattern that has aligned key and theme, 38d rereads the C minor melody within a ii–V–I progression in B-flat major. The attainment of B-flat signals the beginning of the recapitulation.
What a brief and relatively stable development! And what a contrast it presents
to the active exposition, with its numerous and regular changes in design. We may
be tempted to look for an explanation. And yet, any reason we offer will almost
by definition be a lie. To say, for example, that the brevity of the development
compensates for the extended exposition is to say something singularly unilluminating. No, there are no firm causalities in artistic production (that would only
produce critical casualties), no inevitabilities. There is only the artistic product in
its magnificent contingency. If we, burdened with various neuroses, feel a need to
offer an explanation rooted in causes and effects, no one can of course stop us.
Recapitulation
As always, we are immediately forced into comparative mode as we pass through the recapitulation. To the extent that there exists a normative recapitulation function, it is to bring the nontonic material that was heard in the second part of the exposition (the dissonant material) into the orbit of the home key. By dwelling phenomenally on the tonic, the recapitulation meets the desire engendered by the development (whose normative function is to ensure the absence of the tonic within its space) to effect a stylized reconciliation of materials previously associated with the tonic and nontonic spheres.
Again, this normative scenario exists on a distant horizon for Beethoven. Let us recall the main developments so far. The work began with a slow introduction that was repeated in the exposition (not like op. 59, no. 3, where the repeat of the opening movement excludes the slow introduction). The first key area did not work organically or consistently with one idea but celebrated heterogeneity in design; it revealed a surplus of design features, we might say. The transition to the second key was ambivalent, first pointing to the dominant key, but then denying it as a destination; eventually, the music simply slid into ♭VI as the alternative tonal premise. The ♭VI was amply confirmed in the rest of the exposition by changes of design and by cadential articulation. The development began with fragments of material from early in the movement, but instead of working these out in the manner of a proper Durchführung, it settled into a relaxed hurdy-gurdy tune that was taken through different keys in a kind of solar arrangement, leading without dramatic or prolonged retransition to the recapitulation.
It is in the recapitulation that we encounter some of the most far-reaching
changes, not so much in the treatment of material but in the ordering of units
and in the key scheme. The first part seems to continue the development process
by bypassing the tonic, B-flat, and reserving the greater rhetorical strength for
the cadence on the subdominant, E-flat major (bar 145).3 Drawing selectively on
earlier material, the music then corrects itself and heads for a new key. Beethoven
chooses D-flat major for the second key material, thus providing a symmetrical
balance to the situation in the exposition (D-flat and G-flat lie a third on either
side of B-flat, although the lower third is major in contrast to the upper third). But
recapitulating material in D-flat is not enough; being a nontonic degree, it lacks
the resolving capacity of the tonic. Beethoven accordingly replays most of the
second key material in the home key, B-flat, to compensate for the additional dissonances incurred in the first part of the recapitulation. Finally, a coda rounds
things off by returning to and recomposing the thematic material associated with
the opening units of the movement. Some of these recompositions involve large-scale thematic and voice-leading connections that are only minimally reflected in
a paradigmatic analysis. The details may be set out as in example 9.9.
Unit 39 (bars 132–134¹)
This is the same as 4.
Unit 40 (bars 134²–136³)
This is the same as 5.
Unit 41 (bars 136⁴–139¹)
Unit 41 is an extension of 40, whose short-short-long motif it develops. This unit
will return in the coda.
3. Daniel Chua provides a vivid description of this and other moments in op. 130 in The Galitzin Quartets of Beethoven (Princeton, NJ: Princeton University Press, 1995), 201–225.
Example 9.9. Units 39–43 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Example 9.10. Units 44–52a of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
The following set of equivalences shows that the second phase of the recapitulation, whose purpose is to transform previously dissonant material into consonance by offering it in the home key, follows the exact order of the exposition's material.
Unit 61 (bars 174–178) = 22
Unit 62 (bars 175⁴–178) = 23
Unit 63 (bars 178–182¹) = 24
Example 9.13. Units 64–66 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
Coda
The last two units of the recapitulation, 70 and 71, both recall and transform their earlier functions. Unit 70, the equivalent of unit 31, provides the grandest rhetorical elaboration of B-flat, while its successor, 71, the equivalent of 32, adopts a speech mode in preparing to go somewhere. In the exposition, unit 32 led to a repeat of the exposition; here, it leads to yet another beginning, this one the beginning of the end: the coda. And it does so by taking up for the last time the material that opened the movement.
The following set of equivalences shows that the coda is essentially a recomposition of the opening of the movement (units 1–5). The only missing material is unit 3, the passage of imitative counterpoint, some of whose content has already appeared in units 12–13 and 46.
Example 9.15. Units 72–79 of Beethoven, String Quartet in B-flat Major, op. 130, first movement.
carries the process forward, reaching the expressive climax of the coda (bars 222–223) before discharging into a diatonic cadence (bars 227–229). In the remaining units, 78 and 79, the bass answers the treble's fanfare-motive call. While unit 78 remains open (the E-flat in bar 230 is left hanging), unit 79 takes the line to the tonic in bar 233.
The interpenetration of tempi, the play of register, and the juxtaposing of distinctly profiled material collectively endow this coda with a synthesizing function.
For some listeners, therefore, the tensions raised in the course of the movement
are nicely resolved on this final page. Unit 77 emerges as especially important in
this process of synthesis because, despite its allegro tempo, it subtends an adagio
sentiment. Put another way, it gestures toward song while being in speech mode,
the latter evident in the effort to speak words again and again. Song returns in
units 78 and 79.
For other listeners, however, the reification of contrasting materials does not
produce synthesis; rather, it upholds the contrasts. One tiny voice-leading event
at the very end of the work may support the view that things are not necessarily
nicely resolved: the first violin's E-flat–A tritone in bar 232 is not resolved linearly in the following bar but absorbed into the dominant-seventh chord. The high melodic E-flat is a dissonant seventh that does not resolve to an adjacent D; rather, it is, as it were, taken down two octaves to beat 2 of bar 233 (second violin), from where it is led to an adjacent D. The movement finishes with a lingering sense of an unresolved seventh.
These comments about the first movement of op. 130 aimed to identify the
building blocks, note the material affinities among them, and describe their roles
in the movement. The emphasis on building blocks may obscure the larger trajectories that some analysts would prefer to see and hear, but I have argued that
working at this level can be revealing. Perhaps the larger trajectories can take care
of themselves; or perhaps they are our own invention.
Regarding necessary and contingent repetition, the distinction is at once
unavoidable and problematic. Necessary repetition defines structure, perhaps
even ontology; we would not want to leave home without it. But reducing away
the repetitions, transformations, equivalences, and samenesses on account of their
ostensible redundancy amounts to setting aside the peculiar rhetoric that defines a
work; such an act leaves us with an empty shell, with only the potential for content.
Necessary repetition captures a trivial but indispensable quality, namely, the motivating forces that make a work possible; contingent repetition conveys a nontrivial
but dispensable quality, namely, the very lifeblood of a work, the traces left by
acts of composing out. Both are needed. There is no necessary repetition without
its contingencies; at the same time, contingent repetition is grounded in certain
necessities. Paradigmatic analysis, in turn, embodies this paradox by asserting
both the trivial and the nontrivial. By investing in the literalism or iconicity of repetition, by encouraging the analyst's pretense of a naïve stance, it embodies the innocence of a child's view. On the other hand, by displaying patterns that may
be interpreted semiotically as style or strategy, it broaches the mature world of
symbolic and indexical meaning; it leads us to a series of cores and shows us why
music matters.
that evoke nature, perhaps, and they are subjected to unnatural melodic embellishment. The harmony embodies presence only; it enacts no desire.
Unit 2 (bars 7³–11²)
Stravinsky uses a distinctly constituted and memorably scored sonority as motive.
The chord is repeated, but the speaker appears to be overcome by a stutter.
Although the chord is literally sounded five times, we might group these into a
threefold utterance as a long-short, long-short, long pattern, where the shorts fall
off the longs like immediate echoes. The material gives rise to no real expectations.
It lacks melodic tendency and is therefore quite different from unit 1, which, while
static in terms of an overall progression, nevertheless displayed an incipient linear
tendency in the form of cadence-simulating falling thirds. The fullness of the scoring, including the filling in of the middle registers by brasses, produces a further
contrast between units 2 and 1.
Unit 3 (bars 11³–13^0.5)
The close harmony suggests that this might be part of a chorale. We accept the authenticity of the dissonant harmonies within this particular composer's idiolect. The melody, too, is hymn-like, but the relative brevity of the unit suggests that
this might be no more than a fragment. The sense of chorale is only emergent; the
music has a way to go before it can display the full identity of the chorale topos.
Unit 4 (bars 13^0.5–13³)
The sonority from unit 2 (the chord) interrupts, as if to say that it had not quite
finished its utterance when the hymn broke in at the end of bar 11. There is only
one sounding of the chord on this occasion.
Unit 5 (bars 14–18²)
This unit is a near-exact repeat of unit 1. (The first and last bars of unit 1 are suppressed.) The return of the bell motive marks the emerging form for attention. So
far, we have experienced a clearly differentiated succession of ideas, some juxtaposed with no obvious interdependency. Of course, we can begin to connect things
on paper if we so desire. For example, the sequence of melodic pitches in the chorale melody, E-flat, G, A-flat, E-flat (bars 11–12), could be extended to the following F (bar 13), so that units 3 and 4 may be heard as connected on one level. And
because units 2 and 4 feature the same chord, they may be heard as one unit, with
unit 3 read as an interpolation. It is as if unit 1 delayed sounding its final member.
The emerging metamusical impulse is, of course, central to Stravinsky's aesthetic.
Individual units, of course, might be segmented further. Unit 1, for example, might be divided in two (bars 1–3 and 4–7), the second part being an embellishment and continuation of the first. Or it might be heard in three little segments comprising a statement (bars 1–3), a truncated restatement (bars 4–5¹), and a suffix (bars 5²–7²). Although significant, these internal repetitions have not
For some time now, we have been hearing familiar materials in the manner of cinematic flashbacks. Their order reveals no consistency, their individual lengths vary,
but their topical identities are preserved. This is Stravinsky's formal strategy, which I will now render in a paradigmatic chart (figure 9.7).
Figure 9.7. Paradigmatic arrangement of units 1–35 in Stravinsky's Symphonies of Wind Instruments.
The bell motive and chorale serve as anchors; they occur periodically throughout
the movement. Although they are subject to recomposition, they mostly preserve their
essential form. That is, they change, but not in a developmental way. The contrasting
pastorale introduced in unit 16 and repeated as 17 and 18, given a suffix in 19, also
returns (23, 24, 28). While the portion of the movement dominated by these materials provides contrast, it is notable that the bell motive and chorale are not banished
from their domain. The most dramatic reorientation in the form will be registered
as an absence: the bell motive makes its last appearance as unit 27. After this, it cedes
power to the chorale which, in an expanded form, dominates the ending of the work.
Absence and presence in paradigmatic representation are often equally telling.
Unit 36 (bars 217–270)
A wild dance deriving from the scherzo material is given full air time here. Its
opening melody has been adumbrated but within different expressive milieus: in
the little wind link of unit 6 and in the jazzy material of unit 30. This unit is longer than most in part because of the consistency of the scherzo expression. There
are, however, reminiscences of other ideas (in bars 258ff., for example, the melody resembles the two Russian folk melodies). All of this wildness concludes with shapes of savage simplicity: a five-finger, white-note figure, going up and down in the bass (bars 267–269).
Unit 37 (bars 271–274)
The head of the closing chorale appears in a second premonition. (The first was in
bar 201.) On reflection (as distinct from immediately apprehending relations), the chord introduced at the beginning of the work (unit 2) and repeated (without melodic content, so to speak) is shown to be a foreshadowing of that which begins the closing chorale (example 9.16). The two recent premonitions (unit 31 and this one, 37) encourage long-range associations with units 2, 4, 7, and 9. Significant is the fact that earlier appearances of this form of the chord disappeared after unit 9, although they were figured as potential rather than actualizations.
Example 9.16. Comparison of chords in bars 7–8 and 271 of Stravinsky, Symphonies of Wind Instruments.
Completing our stock taking, we may show the last four units as in figure 9.8.
This shows a linear unfolding broken only by the return of the big chorale at the
end. As for the overall form, then, Stravinsky goes from juxtaposing blocks of
differentiated materials to suspending an extended chorale as the point of culmination. Some will hear something of a synthesis in the closing chorale while others
will imagine that the discontinuities and stark juxtapositions of earlier moments
have been absorbed in the continuous and serene chorale utterances.
If the ontology of units is as I have described it, then form in Stravinsky takes
on an additive quality. Succession replaces causal connection, and the distinction
between mere succession and a regulated progression based on a governing
construct is blurred. In Beethoven's op. 130, for example, the question raised by the first 2 bars demands an answer. That answer will be constrained by the 2-bar length (which it is obliged to accept, or whose rejection it is obliged to justify either immediately or eventually) and by the combined melodic and harmonic progression, which registers incompletion. These causalities are possible because
of a shared language, a common practice. When, however, an aesthetic arises
whose motivating factors include commentary on convention or simple denial of
its principal prescriptions, the props for listening become inferable only from an
individual context. We cede all authority to the composer.
Additive construction and the relative autonomy of units undermine the sense
of music as narrative as distinct from music as order or event succession. Narration in Stravinsky often comes not from the music itself but from its contexts
and associations. The moving dance images in Petrushka, for example, allow us
to infer a plot as well as a sense of narration; stripped of dance, however, the burden of narration falls entirely on the movement of topoi. Alternatively, subsequent
soundings of the chord first heard in bar 7 of Symphonies deliver a sense of return
and progress. But is there a subject that enacts the narrative? If there is a subject
in Stravinsky, it is a split one. The Hauptstimme is often plural. Even in moments
where the texture features a clear melody and its accompaniment (as in the flute
and clarinet exchanges beginning in bar 71 of Symphonies), that which ostensibly
accompanies has strong claims to perceptual priority. Thus, a treble-bass polarity
is placed under the possibility of erasure: present but at the same time undermined. The tendency in Stravinsky is toward an equalization of parts. (A potential historical point emanating from this distinction concerns the difference between Stravinsky and Schoenberg. Stravinsky does not merely assert, verbally or precompositionally, this tendency toward overturning the conventional treble-bass hierarchy. Schoenberg the theoretician, on the other hand, had much to say about some of these elements of radicalism, although the practical evidence in his scores often leaves the polarity intact. Stravinsky, in this reading, was a far more radical composer.)
method, draws an analogy between the composer's technique of stratification and Bach's polyphonic melody, where strands of melody are begun, suspended, and eventually brought to a satisfactory conclusion. I do not doubt that score analyses can produce such connections in Stravinsky, but there is a critical difference between Bach's polyphonic melody (or, for that matter, Beethoven's) and Stravinsky. The former operates within specified conventional constraints, so that the syntactic governors are well understood or readily inferred from their contexts. Stravinsky's constraints in Symphonies (and elsewhere), by contrast, cannot be inferred or predicted; they must simply be accepted. Indeed, part of the creativity at work in Stravinsky stems from his taking the license to complete or not to complete something that is open (in conventional language). In Beethoven, there is a contract between composer and listener; in Stravinsky, there is no a priori contract. Alternatively, we might say that the Stravinskian contract consists precisely in denying the existence of an a priori contract.
On the matter of representation, we might say that paradigmatic representation is more faithful to Stravinsky's material than it is to Beethoven's. The autonomy asserted by discrete numbers (1, 2, 3, etc.) captures the presumed autonomy of Stravinsky's units more meaningfully than it does Beethoven's. For example, if we recall the disposition of units in the opening of the two works studied in this chapter as 1, 2, 2a, 2b, 3, 3a, 3b, 3c for op. 130 and as 1, 2, 3, 4 for bars 1–7 of Symphonies, we would say that missing from the representation of Beethoven are the linear connections that would convey the dependency of adjacent units, including the stepwise melodic line 1–2–3–4–5 that holds the progression as a whole together. Units 2a and 2b in the quartet are in a literal sense caused by unit 2, or at least made possible by it. In the Stravinsky, by contrast, and local melodic connections notwithstanding, the sense of causation and dependency is less pertinent, so that the representation in numbers fairly conveys what is going on.
These presumed oppositions between Beethoven and Stravinsky are presented
starkly in order to dramatize difference and encourage debate. It would be foolish to claim, however, that all of Beethoven can be reduced to the organic while
all of Stravinsky is inorganic, or that Stravinsky is always easier to segment than
Beethoven, or that hierarchic structuring is never elusive in Beethoven. Neither
style system is, in the end, reducible to such simple categories. There is discontinuity aplenty in Beethoven, material is sometimes organized in blocks, and certain
units succeed each other with a logic that is not always obviously causal. In Stravinsky, on the other hand, a local organicism is often at work, producing diminutions
like passing notes and especially neighbor-notes (or clusters of neighboring notes)
as well as conventional cadential gestures that often attract an ironic reading. Adjacent units may depend on each other, too. So the truth in the differences between
the two composers lies in between. For a more nuanced understanding, we will
need to embrace the interstitial wholeheartedly.
Music as discourse is probably indifferent to aesthetic choices. The degree of
meaningfulness may vary from context to context, and the dramatization of that
discourse may assume different forms, but the very possibility of reading a discourse from the ensemble of happenings that is designated as a work always exists.
An aesthetic based on the intentional violation of convention, for example, is just
as amenable to a discourse reading as one marked by the creative enactment of
such convention. Perhaps, in the end, the two approaches are indistinguishable.
Perhaps, the idea that Stravinsky stands as a (poetic) repetition of Beethoven is
not as outlandish as it might have seemed at first. Our task as analysts, in any
case, is not primarily to propagate such opinions (although historical understanding through music analysis remains an attractive option) but to make possible the
kinds of technical exploration that enable a reconstruction of the parameters that
animate each discourse.
Epilogue
At the close of these adventures in music analysis, it is tempting to try and put
everything together in a Procrustean bed, sum up the project neatly as if there
were no rough edges, no deviant parameters, no remainders, no imponderables.
Yet music, as we have seen, is an unwieldy animal; it signifies in divergent and
complex ways. Its meanings are constructed from a wide range of premises and
perspectives. The number and variety of technical approaches to analysis together
with the diversity of ideological leanings should discourage us from attempting
a quick, facile, or premature synthesis. Nevertheless, to stop without concluding,
even when the point of the conclusion is to restate the inconclusiveness of the
project, would be to display bad manners. So, let me briefly rehearse what I set out
to do in this book and why, and then mention some of the implications of what I
have done.
I set out to provide insight into how (Romantic) music works as discourse: the
nature of the material and the kinds of strategies available for shaping it. The aim
was to provide performers, listeners, and analysts with a pretext for playing in and
with (the elements of) musical compositions in order to deepen their appreciation
and understanding. The institutional umbrella for this activity is musical analysis,
and it is under this rubric that we conventionally place the collective actions of getting inside a musical composition in order to identify its elements and to observe
the dynamics of their interactions. There is, of course, nothing new about the curiosity that such actions betray; every analyst from Schenker and Tovey to Dahlhaus,
Ratner, and Adorno has been motivated by a desire to figure things out.
But which paths to understanding do we choose? How do we turn concept into
practice? What concrete steps allow us to establish or at least postulate an ontology for a given composition? What is an appropriate methodology for analysis?
It is here that we encounter striking differences in belief and approach. In this
book, I proceeded from the assumption that (Romantic) music works as a kind of
language that can be spoken (competently or otherwise), that music making is a
meaningful activity (for the participants first and foremost, but also for observers),
that the art of musical composition or poiesis is akin to discoursing in sound, and
that it is the purpose of analysis to convey aspects of that discoursing by drawing on a range of appropriate techniques. Since the ability to create presupposes
a prior ability to speak the relevant language, knowledge of the basic conventions
of organizing sound (as music) is indispensable. To ignore conventions, to turn a
blind eye to the grammatical constraints and stylistic opportunities available to an
individual composer, is to overlook the very ecology that made the work possible.
Indeed, a poor grasp of conventions can lead either to an underappreciation of
individual creativity or to an overvaluing of a particular achievement.
Reconstructing this ecology can be a formidable challenge, however, since it
is liable to involve us in long and arduous investigation (some of it biographical,
some of it social, some of it historical, and some of it musical). Attempting such
reconstructions here would probably have doubled the size of this project without
bringing proportional rewards in the form of conceptual clarity. So, I chose instead
to draw on a number of existing studies.
In the first part of the book, I began by rehearsing some of the ways in which
music is or is not like language (chapter 1). My 10 propositions on this topic were
designed with an interrogative and provocative purpose, not as laws or, worse, commandments, but as propositions or hypotheses. Each of them is, of course, subject to further interrogation; indeed, writings by, among others, Nattiez, Adorno, Schenker, Ratner, and Jankélévitch speak directly or indirectly to issues raised by
thinking of music as language and to exploring the nature of musical meaning.
In chapters 2 and 3, I provided six criteria for capturing some of the salient
aspects of Romantic music. Again, my chosen criteria were designed to capture
certain conventions as prerequisites for insightful analysis: the functioning of
topoi or subjects of musical discourse; beginnings, middles, and endings; the use of
high points; periodicity, discontinuity, and parentheses; the exploration of registers
or modes of utterance, including speech mode, dance mode, and song mode; and
the cultivation of a narrative thread. Without claiming that each criterion is pertinent to every musical situation, it is nevertheless hard to imagine any listening to
Romantic music that is not shaped on some level by one or more of these factors.
Chapter 4 took us further into the heart of the musical language, demanding what we might as well call an insider's perspective. Musical insiders are those
individuals and communities who compose and/or perform music; they may also
include others whose perspectives as listeners and analysts are shaped fundamentally by these experiences. While certain features of Romantic music (like topics or even high points) can, as it were, be identified by outsiders, speculation about the harmonic, contrapuntal, or phrase-structural foundations of a given composition often has to come from the inside, from direct engagement with
the musical code itself. It is precisely this kind of engagement that has produced
some of the most sophisticated and influential views of musical structure (such as
that of Schenker). The generative approach to understanding which I adopted in
the fourth chapter invited participation in the dance of meaning production not
through detached observation and the subsequent spinning of (verbal) tales but
by direct participation through acts of hypothesizing compositional origins. I suggested that this kind of activity is akin to speaking music as a language. Indeed,
although logically obvious, the procedure of postulating a simplified or normative construct in order to set into relief a composer's choices can be of profound
significance. The normative and conventional are reconstructed as background,
as foil, as ecology, as a nexus of possibility; the composer's choices emerge against
this background of possibility, highlighting paths not taken and sounding a series
of might-have-beens.
My final theoretical task (chapter 5) was to isolate what many believe to be
the most indigenous feature of music, namely, its use of repetition, and to see
what insights flow from focusing on it. Tonal expression is unimaginable without
repetition, but repetition takes many forms and encompasses different levels of
structure. In chapter 5, I followed the lead of semiological analysts (Ruwet, Nattiez,
and Lidov, among others) in exploring a range of repetitions from simple pitch
retention through thematic transformation to harmonic reinterpretation. Rather
than place the emphasis on abstract method, however, the analyses in this chapter
were offered under specific rubrics stemming from intuited qualities that could
be made explicit in paradigmatic analyses: logical form versus chronological in
Chopin, discontinuity and additive construction in Mozart, and developing variation in Brahms. Jankélévitch's claim that music's regime par excellence is one of
continuous mutation manifesting in variation and metamorphosis reinforces
the central justification for the paradigmatic method.1
With the theoretical part of my project completed in chapter 5, I turned to
a number of close readings of works by Liszt, Brahms, Mahler, Beethoven, and
Stravinsky (chapters 6–9). Although each analysis was framed as a semiological
study of units and their patterns of succession, our concerns, adumbrated in earlier analyses, were broader, embracing issues of formal strategy and narrative. The
case studies were designed not as systematic applications of method but as flexible explorations of musical articulation and the kinds of (formal and associative) meanings that are possible to discover. Analysis must always make discovery
possible; if it seems closed, if it provides answers rather than further questions, it
betrays its most potent attribute.
The implications of the books analyses may be drawn according to individual
interest. I have already hinted at a number of them in the course of specific analyses of such features as beginnings, middles, and endings in Mendelssohn; discontinuity in Stravinsky; logical form in Chopin; and tonal modeling in Beethoven.
Exploring modes of utterance by contrasting speech mode with song mode, for
example, may stimulate further speculation about the linguistic nature of music
while making possible a deeper exploration of its temporal dimensions (including
periodicity). A focus on closure likewise encourages further speculation about the
dynamics that shape individual compositions into meaningful discourses.
Attention to high points similarly conveys immediately perceptible aspects of
a work, while narrative trajectories may be experienced by isolating a migratory
voice from beginning to end, a voice that may sometimes (choose to) remain
silent according to the rhetorical needs of the moment. Again, the fact that these
features were discussed in isolation does not mean that they are experienced that
way. On the contrary, they are all connected if not in actuality then very definitely
potentially. For example, high points are typical harbingers of closure; in other
contexts, they may signify specific moments within the narrative trajectory. The
musical experience tends, in principle, toward holism; the analytical procedure,
on the contrary, entails a (provisional) dismantling of that wholeit tends toward
isolation. Ideally, an analysis should unveil the conditions of possibility for a musical experience. Although it may serve to rationalize aspects of that experience,
analysis can never accurately report the full dimensions of that experience. Recognizing the intended partiality of analytical application may thus help us to evaluate
analytical outcomes more reasonably than expecting analysts to work miracles.
In order to begin to capture the work of the imagination as expressed in
Romantic music, we need to get close to it. We need to enter into those real as
well as imaginative spaces and temporalities that allow us to inspect a work's elements at close quarters. We enter these spaces, however, not with the (mistaken)
belief that the object of analysis is a tabula rasa, but with the knowledge that it is
freighted with the routines, mannerisms, and meanings of a spoken (musical)
language. We enter these places armed with a feeling for precedent and possibility
and free of the delusion that we are about to recount the way in which a specific
artwork actually came into being. The power of analysis lies precisely in this open-ended pursuit of understanding through empathy, speculation, and play.
This view of analysis is, I believe, akin to what Adorno had in mind when he
enjoined us to pursue the truth content (Wahrheitsgehalt) of musical composition. It resonates with Schenker's search for (the truth of) the composer's vision in
the unfolding of the Urlinie and in the various transformations of the contrapuntal
shapes of strict counterpoint into free composition. It is implicit in the qualities that
Ratner sought to capture with his notion of topics or subjects to be incorporated
into a musical discourse; topics lead us on a path to the discovery of truth in musical style, a discovery that may in turn illuminate the historical or even sociological aspects of a work. And it shares the idealization that led Nattiez to arrange his
Molino-inspired tripartition (consisting of a poietic pole, a neutral level, and an
esthesic pole) into a mechanism for uniting the (indispensable and complementary)
perspectives of listeners, the production processes, and the work itself as an unmediated trace. This view of analysis may even (strangely at first) be affiliated with Jankélévitch's relentless protestations of the suitability of various categories for music: form, expressivity, development, communicability, and translatability. Of course, the material expression of this unifying ideology (if that is what it is) will
sooner or later produce differences; these theories cannot ultimately be collapsed
into one another. Their shared motivating impulse remains the same, however:
curiosity about the inner workings of our art. In the end, it is not the analytical
trace that matters most; the trace, in any case, yields too much to the imperatives of
capital accumulation. If, as Anthony Pople imagines, meaning is a journey rather
than a destination,2 then edification will come from doing, from undertaking the
journey. The materiality of analytical proceeding serves as its own reward.
2. Anthony Pople, "Preface," in Theory, Analysis and Meaning in Music, ed. Pople (Cambridge: Cambridge University Press, 1994), xi.
BIBLIOGRAPHY
Abbate, Carolyn. Unsung Voices: Opera and Musical Narrative in the Nineteenth Century.
Princeton, NJ: Princeton University Press, 1991.
———. "Music—Drastic or Gnostic?" Critical Inquiry 30 (2004): 505–536.
Adorno, Theodor. Mahler: A Musical Physiognomy, trans. Edmund Jephcott. Chicago: University of Chicago Press, 1992.
———. "Music and Language: A Fragment," in Quasi una Fantasia: Essays on Modern Music,
trans. Rodney Livingstone. London: Verso, 1992, 1–6.
———. Beethoven: The Philosophy of Music, trans. Edmund Jephcott, ed. Rolf Tiedemann.
Stanford, CA: Stanford University Press, 1998.
———. Essays on Music, ed. Richard Leppert. Berkeley: University of California Press, 2002.
———. "Schubert" (1928), trans. Jonathan Dunsby and Beate Perrey. 19th-Century Music 24
(2005): 3–14.
Agawu, Kofi. "Structural Highpoints in Schumann's Dichterliebe." Music Analysis 3 (1984):
159–180.
———. "Tonal Strategy in the First Movement of Mahler's Tenth Symphony." 19th-Century
Music 9 (1986): 222–233.
———. "Concepts of Closure and Chopin's op. 28." Music Theory Spectrum 9 (1987): 1–17.
———. "Stravinsky's Mass and Stravinsky Analysis." Music Theory Spectrum 11 (1989): 139–163.
———. Playing with Signs: A Semiotic Interpretation of Classic Music. Princeton, NJ: Princeton University Press, 1991.
———. "Does Music Theory Need Musicology?" Current Musicology 53 (1993): 89–98.
———. "Prolonged Counterpoint in Mahler," in Mahler Studies, ed. Stephen Hefling. Cambridge:
Cambridge University Press, 1997, 217–247.
———. "The Challenge of Semiotics," in Rethinking Music, ed. Nicholas Cook and Mark
Everist. Oxford: Oxford University Press, 1999, 138–160.
———. Representing African Music: Postcolonial Notes, Queries, Positions. New York:
Routledge, 2003.
———. "How We Got Out of Analysis and How to Get Back In Again." Music Analysis 23
(2004): 267–286.
Agmon, Eytan. "The Bridges That Never Were: Schenker on the Contrapuntal Origin of the
Triad and the Seventh Chord." Music Theory Online 3 (1997).
Albrechtsberger, Johann Georg. Gründliche Anweisung zur Composition. Leipzig: Johann
Immanuel Breitkopf, 1790.
Allanbrook, Wye J. Rhythmic Gesture in Mozart: Le nozze di Figaro and Don Giovanni.
Chicago: University of Chicago Press, 1983.
———. Stuyvesant, NY: Pendragon, 1992, 125–171.
———. "K331, First Movement: Once More, with Feeling," in Communication in Eighteenth-Century Music, ed. Danuta Mirka and Kofi Agawu. Cambridge: Cambridge University
Press, 2008.
Almén, Byron, and Edward Pearsall, eds. Approaches to Meaning in Music. Bloomington:
Indiana University Press, 2006.
Andriessen, Louis, and Elmer Schönberger. Apollonian Clockwork: On Stravinsky, trans. Jeff
Hamburg. Oxford: Oxford University Press, 1989.
Ayrey, Craig. Review of Playing with Signs by K. Agawu. Times Higher Education Supplement 3 (May 1991): 7.
———. "Debussy's Significant Connections: Metaphor and Metonymy in Analytical Method,"
in Theory, Analysis and Meaning in Music, ed. Anthony Pople. Cambridge: Cambridge
University Press, 1994, 127–151.
———. "Universe of Particulars: Subotnik, Deconstruction, and Chopin." Music Analysis 17
(1998): 339–381.
Barry, Barbara. "In Beethoven's Clockshop: Discontinuity in the Opus 18 Quartets." Musical
Quarterly 88 (2005): 320–337.
Barthes, Roland. Elements of Semiology, trans. Annette Lavers and Colin Smith. New York:
Hill and Wang, 1967.
———. Mythologies, trans. Annette Lavers. New York: Hill and Wang, 1972.
Becker, Judith, and Alton Becker. "A Grammar of the Musical Genre Srepegan." Journal of
Music Theory 24 (1979): 1–43.
Bekker, Paul. Beethoven. Berlin and Leipzig: Schuster and Loeffler, 1911.
Bellerman, Heinrich. Der Contrapunct; Oder Anleitung zur Stimmführung in der musikalischen Composition. Berlin: Julius Springer, 1862.
Bent, Ian D., ed. Music Analysis in the Nineteenth Century. 2 vols. Cambridge: Cambridge
University Press, 1994.
Bent, Ian, and Anthony Pople. "Analysis." The New Grove Dictionary of Music and Musicians,
2nd ed. London: Macmillan, 2001.
Benveniste, Émile. "The Semiology of Language," in Semiotics: An Introductory Reader, ed.
Robert E. Innis. London: Hutchinson, 1986, 228–246.
Berger, Karol. "The Form of Chopin's Ballade, op. 23." 19th-Century Music 20 (1996): 46–71.
Bernhard, Christoph. Ausführlicher Bericht vom Gebrauche der Con- und Dissonantien,
Tractatus compositionis augmentatus in Die Kompositionslehre Heinrich Schützens in der
Fassung seines Schülers Christoph Bernhard, ed. J. Müller-Blattau, 2nd ed. Kassel: Bärenreiter, 1963. English translation by W. Hilse, "The Treatises of Christoph Bernhard." Music
Forum 3 (1973): 1–196.
Berry, David Carson. A Topical Guide to Schenkerian Literature: An Annotated Bibliography
with Indices. Hillsdale, NY: Pendragon, 2004.
Biró, Dániel Péter. "Plotting the Instrument: On the Changing Role of Timbre in Mahler's
Ninth Symphony and Webern's op. 21." Unpublished paper.
Boilès, Charles L. "Tepehua Thought-Song: A Case of Semantic Signalling." Ethnomusicology
11 (1967): 267–292.
Bonds, Mark Evan. Wordless Discourse: Musical Form and the Metaphor of the Oration.
Cambridge, MA: Harvard University Press, 1991.
Garda, Michela. L'estetica musicale del Novecento: Tendenze e problemi. Rome: Carocci,
2007.
Grabócz, Márta. Morphologie des oeuvres pour piano de Liszt: Influence du programme sur
l'évolution des formes instrumentales, 2nd ed. Paris: Kimé, 1996.
———. "Semiological Terminology in Musical Analysis," in Musical Semiotics in Growth, ed.
Eero Tarasti. Bloomington: Indiana University Press, 1996, 195–218.
———. "Topos et dramaturgie: Analyse des signifiés et de la stratégie dans deux mouvements
symphoniques de B. Bartók." Degrés 109–110 (2002): j1–j18.
———. "Stylistic Evolution in Mozart's Symphonic Slow Movements: The Discursive-Passionate Schema." Intégral 20 (2006): 105–129.
Hanslick, Eduard. Vom Musikalisch-Schönen [On the Musically Beautiful], trans. Martin
Cooper, excerpted in Music in European Thought 1851–1912, ed. Bojan Bujić. Cambridge: Cambridge University Press, 1988, 12–39.
Hasty, Christopher. "Segmentation and Process in Post-Tonal Music." Music Theory Spectrum 3 (1981): 54–73.
———. Meter as Rhythm. New York: Oxford University Press, 1997.
Hatten, Robert. Musical Meaning in Beethoven: Markedness, Correlation, and Interpretation.
Bloomington: Indiana University Press, 1994.
———. Interpreting Musical Gestures, Topics, and Tropes: Mozart, Beethoven, Schubert. Bloomington: Indiana University Press, 2004.
Hefling, Stephen E. "The Ninth Symphony," in The Mahler Companion, ed. Donald Mitchell
and Andrew Nicholson. Oxford: Oxford University Press, 2002, 467–490.
Henrotte, Gayle A. "Music as Language: A Semiotic Paradigm?" in Semiotics 1984, ed. John
Deely. Lanham, MD: University Press of America, 1985, 163–170.
Henschel, George. Personal Recollections of Johannes Brahms: Some of His Letters to and
Pages from a Journal Kept by George Henschel. New York: AMS, 1978.
Hepokoski, James, and Warren Darcy. Elements of Sonata Theory: Norms, Types, and Deformations in the Late-Eighteenth-Century Sonata. Oxford: Oxford University Press, 2006.
Hoeckner, Berthold. Programming the Absolute: Nineteenth-Century German Music and the
Hermeneutics of the Moment. Princeton, NJ: Princeton University Press, 1978.
Horton, Julian. Review of Bruckner Studies, ed. Paul Hawkshaw and Timothy Jackson.
Music Analysis 18 (1999): 155–170.
———. "Bruckner's Symphonies and Sonata Deformation Theory." Journal of the Society for
Musicology in Ireland 1 (2005–2006): 5–17.
Hughes, David W. "Deep Structure and Surface Structure in Javanese Music: A Grammar of
Gendhing Lampah." Ethnomusicology 32(1) (1988): 23–74.
Huron, David. Review of Highpoints: A Study of Melodic Peaks by Zohar Eitan. Music Perception 16(2) (1999): 257–264.
Ivanovitch, Roman. "Mozart and the Environment of Variation." Ph.D. diss., Yale University,
2004.
Jackson, Roland. "Leitmotive and Form in the Tristan Prelude." Music Review 36 (1975): 42–53.
Jakobson, Roman. "Language in Relation to Other Communication Systems," in Jakobson,
Selected Writings, vol. 2. The Hague: Mouton, 1971, 697–708.
Jankélévitch, Vladimir. Music and the Ineffable [La Musique et l'Ineffable], trans. Carolyn
Abbate. Princeton, NJ: Princeton University Press, 2003.
Johns, Keith T. The Symphonic Poems of Franz Liszt, rev. ed. Stuyvesant, NY: Pendragon,
1996.
Jonas, Oswald. Introduction to the Theory of Heinrich Schenker: The Nature of the Musical
Work of Art, trans. and ed. John Rothgeb. New York: Longman, 1982 (orig. 1934).
Kaplan, Richard. "Sonata Form in the Orchestral Works of Liszt: The Revolutionary Reconsidered." 19th-Century Music 8 (1984): 142–152.
Katz, Adele. Challenge to Musical Tradition: A New Concept of Tonality. New York: Knopf,
1945.
Keiler, Alan. "The Syntax of Prolongation: Part 1," In Theory Only 3 (1977): 3–27.
———. "Bernstein's The Unanswered Question and the Problem of Musical Competence."
Musical Quarterly 64 (1978): 195–222.
Klein, Michael L. Intertextuality in Western Art Music. Bloomington: Indiana University
Press, 2005.
Kleinertz, Rainer..
Koch, Heinrich Christoph. Versuch einer Anleitung zur Composition, vols. 2 and 3. Leipzig:
Böhme, 1787 and 1793.
Kramer, Jonathan D. The Time of Music: New Meanings, New Temporalities, New Listening
Strategies. New York: Schirmer, 1988.
Kramer, Lawrence. Music and Poetry: The Nineteenth Century and After. Berkeley: University of California Press, 1984.
———. Music as Cultural Practice. Berkeley: University of California Press, 1990.
Krebs, Harald. "The Unifying Function of Neighboring Motion in Stravinsky's Sacre du
Printemps." Indiana Theory Review 8 (1987): 3–13.
Krumhansl, Carol. "Topic in Music: An Empirical Study of Memorability, Openness, and
Emotion in Mozart's String Quintet in C Major and Beethoven's String Quartet in A
Minor." Music Perception 16 (1998): 119–132.
La Grange, Henry-Louis de. Gustav Mahler: A New Life Cut Short (1907–1911). Oxford:
Oxford University Press, 2008.
Larson, Steve. "A Tonal Model of an Atonal Piece: Schoenberg's opus 15, number 2." Perspectives of New Music 25 (1987): 418–433.
Leichtentritt, Hugo. Musical Form. Cambridge, MA: Harvard University Press, 1951 (orig.
1911).
Lendvai, Ernő. Béla Bartók: An Analysis of His Music. London: Kahn & Averill, 1971.
Lerdahl, Fred, and Ray Jackendoff. A Generative Theory of Tonal Music. Cambridge, MA:
MIT Press, 1983.
Lester, Joel. "J. S. Bach Teaches Us How to Compose: Four Pattern Prelude of the Well-Tempered Clavier." College Music Symposium 38 (1998): 33–46.
Lewin, David. Musical Form and Transformation: Four Analytic Essays. New Haven, CT:
Yale University Press, 1993.
———. "Music Theory, Phenomenology, and Modes of Perception," in Lewin, Studies in
Music with Text. Oxford: Oxford University Press, 2006, 53–108.
Lewis, Christopher Orlo. Tonal Coherence in Mahler's Ninth Symphony. Ann Arbor: UMI
Press, 1984.
Lidov, David. On Musical Phrase. Montreal: Faculty of Music, University of Montreal,
1975.
———. "Nattiez's Semiotics of Music." Canadian Journal of Research in Semiotics 5 (1977):
13–54.
———. "Mind and Body in Music." Semiotica 66 (1987): 69–97.
———. "The Lamento di Tristano," in Models of Music Analysis: Music before 1600, ed. Mark
Everist. Oxford: Blackwell, 1992, 66–92.
———. Elements of Semiotics. New York: St. Martin's Press, 1999.
———. Is Language a Music? Writings on Musical Form and Signification. Bloomington: Indiana University Press, 2005.
Lowe, Melanie. Pleasure and Meaning in the Classical Symphony. Bloomington: Indiana
University Press, 2007.
Marx, A. B. Die Lehre von der musikalischen Komposition, praktisch-theoretisch. Leipzig:
Breitkopf & Härtel, 1837–1847.
Mattheson, Johann. Der vollkommene Capellmeister, trans. Ernest Harriss. Ann Arbor, MI:
UMI Research Press, 1981 (orig. 1739).
Maus, Fred Everett. "Narratology, Narrativity," in The New Grove Dictionary of Music and
Musicians, 2nd ed. London: Macmillan, 2001.
McClary, Susan. Conventional Wisdom: The Content of Musical Form. Berkeley: University
of California Press, 2001.
McCreless, Patrick. "Syntagmatics and Paradigmatics: Some Implications for the Analysis
of Chromaticism in Tonal Music." Music Theory Spectrum 13 (1991): 147–178.
———. "Music and Rhetoric," in The Cambridge History of Western Music Theory, ed. Thomas
Christensen. Cambridge: Cambridge University Press, 2002, 847–879.
———. "Anatomy of a Gesture: From Davidovsky to Chopin and Back," in Approaches to
Meaning in Music, ed. Byron Almén and Edward Pearsall. Bloomington: Indiana University Press, 2006, 11–40.
McDonald, Matthew. "Silent Narration: Elements of Narrative in Ives's The Unanswered
Question." 19th-Century Music 27 (2004): 263–286.
McKay, Nicholas Peter. "On Topics Today." Zeitschrift der Gesellschaft für Musiktheorie 4 (2007). Accessed August 12, 2008.
Metzer, David. Quotation and Cultural Meaning in Twentieth-Century Music. Cambridge:
Cambridge University Press, 2003.
Meyer, Leonard B. Explaining Music: Essays and Explorations. Chicago: University of
Chicago Press, 1973.
———. "Exploiting Limits: Creation, Archetypes and Style Change." Daedalus (1980): 177–205.
———. Style and Music: Theory, History, and Ideology. Philadelphia: University of Pennsylvania Press, 1989.
Micznik, Vera. "Music and Narrative Revisited: Degrees of Narrativity in Mahler." Journal
of the Royal Musical Association 126 (2001): 193–249.
Mitchell, Donald. Gustav Mahler, vol. 3: Songs and Symphonies of Life and Death. London:
Faber and Faber, 1985.
Molino, Jean. "Musical Fact and the Semiology of Music," trans. J. A. Underwood. Music
Analysis 9 (1990): 104–156.
Monelle, Raymond. Linguistics and Semiotics in Music. Chur, Switzerland: Harwood, 1992.
———. The Sense of Music: Semiotic Essays. Princeton, NJ: Princeton University Press, 2000.
———. The Musical Topic: Hunt, Military and Pastoral. Bloomington: Indiana University
Press, 2006.
Monson, Ingrid. Saying Something: Jazz Improvisation and Interaction. Chicago: University
of Chicago Press, 1996.
Morgan, Robert P. "Schenker and the Theoretical Tradition." College Music Symposium 18
(1978): 72–96.
———. "Coda as Culmination: The First Movement of the Eroica Symphony," in Music
Theory and the Exploration of the Past, ed. Christopher Hatch and David W. Bernstein.
Chicago: University of Chicago Press, 1993, 357–376.
———. "Circular Form in the Tristan Prelude." Journal of the American Musicological Society 53 (2000): 69–103.
———. "The Concept of Unity and Musical Analysis." Music Analysis 22 (2003): 7–50.
Muns, George. "Climax in Music." Ph.D. diss., University of North Carolina, 1955.
Narmour, Eugene. The Analysis and Cognition of Basic Melodic Structures: The ImplicationRealization Model. Chicago: University of Chicago Press, 1990.
Nattiez, Jean-Jacques. "Varèse's Density 21.5: A Study in Semiological Analysis," trans. Anna
Barry. Music Analysis 1 (1982): 243–340.
———. Music and Discourse: Toward a Semiology of Music, trans. Carolyn Abbate. Princeton,
NJ: Princeton University Press, 1990.
Neubauer, John. The Emancipation of Music from Language: Departure from Mimesis in
Eighteenth-Century Aesthetics. New Haven, CT: Yale University Press, 1986.
Newcomb, Anthony. "Schumann and Late Eighteenth-Century Narrative Strategies." 19th-Century Music 11 (1987): 164–174.
Notley, Margaret. "Late-Nineteenth-Century Chamber Music and the Cult of the Classical
Adagio." 19th-Century Music 23 (1999): 33–61.
Oster, Ernst. "The Dramatic Character of the Egmont Overture," in Aspects of Schenkerian
Theory, ed. David Beach. New Haven, CT: Yale University Press, 1983, 209–222.
Oxford English Dictionary, 3rd ed. Oxford: Oxford University Press, 2007.
Perrey, Beate Julia. Schumann's Dichterliebe and Early Romantic Poetics: Fragmentation of
Desire. Cambridge: Cambridge University Press, 2003.
Pople, Anthony, ed. Theory, Analysis and Meaning in Music. Cambridge: Cambridge University Press, 1994.
Powers, Harold. "Language Models and Music Analysis." Ethnomusicology 24 (1980): 1–60.
———. "Reading Mozart's Music: Text and Topic, Sense and Syntax." Current Musicology 57
(1995): 5–44.
Puffett, Derrick. "Bruckner's Way: The Adagio of the Ninth Symphony." Music Analysis 18
(1999): 5–100.
Ratner, Leonard G. Music: The Listener's Art, 2nd ed. New York: McGraw-Hill, 1966.
———. Classic Music: Expression, Form, and Style. New York: Schirmer, 1980.
———. "Topical Content in Mozart's Keyboard Sonatas." Early Music 19 (1991): 615–619.
———. Romantic Music: Sound and Syntax. New York: Schirmer, 1992.
———. The Beethoven String Quartets: Compositional Strategies and Rhetoric. Stanford, CA:
Stanford Bookstore, 1995.
Ratz, Erwin. Einführung in die musikalische Formenlehre, 3rd ed. Vienna: Universal, 1973.
Reed, John. The Schubert Song Companion. Manchester: Manchester University Press, 1985.
Rehding, Alex. "Liszt's Musical Monuments." 19th-Century Music 26 (2002): 52–72.
Réti, Rudolph. The Thematic Process in Music. New York: Macmillan, 1951.
Reynolds, Christopher. Motives for Allusion: Context and Content in Nineteenth-Century
Music. Cambridge, MA: Harvard University Press, 2003.
Richards, Paul. "The Emotions at War: Atrocity as Piacular Rite in Sierra Leone," in Public
Emotions, ed. Perri 6, Susannah Radstone, Corrine Squire, and Amal Treacher. London:
Palgrave Macmillan, 2006, 62–84.
Richter, Ernst Friedrich. Lehrbuch des einfachen und doppelten Kontrapunkts. Leipzig:
Breitkopf & Härtel, 1872.
Riemann, Hugo. Vereinfachte Harmonielehre; oder, Die Lehre von den tonalen Funktionen
der Akkorde. London: Augener, 1895.
Riezler, Walter. Beethoven. Translated by G. D. H. Pidcock. New York: Vienna House, 1938.
Rosand, Ellen. "The Descending Tetrachord: An Emblem of Lament." Musical Quarterly 65
(1979): 346–359.
Rosen, Charles. The Classical Style: Haydn, Mozart, Beethoven. New York: Norton, 1972.
Rothstein, William. Phrase Rhythm in Tonal Music. New York: Schirmer, 1989.
———. "Transformations of Cadential Formulae in Music by Corelli and His Successors,"
in Studies from the Third International Schenker Symposium, ed. Allen Cadwallader.
Hildesheim, Germany: Olms, 2006, 245–278.
Bibliography
329
Rowell, Lewis. The Creation of Audible Time, in The Study of Time, vol. 4, ed. J. T. Fraser,
N. Lawrence, and D. Park. New York: Springer, 1981, 198210.
Ruwet, Nicolas. Thorie et mthodes dans les etudes musicales: Quelques remarques rtrospectives et prliminaires. Music en jeu 17 (1975): 1136.
. Methods of Analysis in Musicology, trans. Mark Everist. Music Analysis 6 (1987):
1136.
Salzer, Felix. Structural Hearing: Tonal Coherence in Music. 2 vols. New York: Dover, 1952.
Salzer, Felix, and Carl Schachter. Counterpoint in Composition: The Study of Voice Leading.
New York: Columbia University Press, 1989.
Samson, Jim. Music in Transition: A Study of Tonal Expansion and Atonality 19001920.
London: Dent, 1977.
. Extended Forms: Ballades, Scherzos and Fantasies, in The Cambridge Companion to
Chopin, ed. Samson. Cambridge: Cambridge University Press, 1992, 101123.
. Virtuosity and the Musical Work: The Transcendental Studies of Liszt. Cambridge:
Cambridge University Press, 2007.
Samuels, Robert. Music as Text: Mahler, Schumann and Issues in Analysis, in Theory, Analysis and Meaning in Music, ed. Anthony Pople. Cambridge: Cambridge University Press,
1994, 152163.
. Mahlers Sixth Symphony: A Study in Musical Semiotics. Cambridge: Cambridge University Press, 1995.
Saussure, Ferdinand de. Course in General Linguistics, ed. C. Bally and A. Sechehaye. New
York: McGraw-Hill, 1966 (original 1915).
Schenker, Heinrich. Das Meisterwerk in der Musik, vol. 2. Munich: Drei Masken, 1926.
. Five Graphic Music Analyses, ed. Felix Salzer. New York: Dover, 1969.
. Free Composition, trans. Ernst Oster. New York: Longman, 1979.
. Counterpoint: A Translation of Kontrapunkt, trans. John Rothgeb and Jrgen Thym.
New York: Schirmer, 1987.
. Der Tonwille: Pamphlets in Witness of the Immutable Laws of Music, vol. 1, ed. William
Drabkin, trans. Ian Bent et al. Oxford: Oxford University Press, 2004.
Schoenberg, Arnold. Fundamentals of Musical Composition. New York: St. Martins Press,
1967.
Sechter, Simon. Analysis of the Finale of Mozarts Symphony no. [41] in C [K551(Jupiter)],
excerpted in Music Analysis in the Nineteenth Century, vol. 1: Fugue, Form and Style, ed.
Ian D. Bent. Cambridge: Cambridge University Press, 1994, 8296.
Sheinbaum, John J. Timbre, Form and Fin-de-Sicle Refractions in Mahlers Symphonies.
Ph.D. diss., Cornell University, 2002.
Silbiger, Alexander. Il chitarrino le suoner: Commedia dellarte in Mozarts Piano Sonata
K. 332. Paper presented at the annual meeting of the Mozart Society of America, Kansas
City, November 5, 1999.
Sisman, Elaine R. Brahms Slow Movements: Reinventing the Closed Forms, in Brahms
Studies, ed. George Bozarth. Oxford: Oxford University Press, 1990, 79103.
. Haydn and the Classical Variation. Cambridge, MA: Harvard University Press, 1993.
. Mozart: The Jupiter Symphony. Cambridge: Cambridge University Press, 1993.
. Genre, Gesture and Meaning in Mozarts Prague Symphony, in Mozart Studies, vol.
2, ed. Cliff Eisen. Oxford: Oxford University Press, 1997, 2784.
Smith, Peter H. Expressive Forms in Brahms Instrumental Music: Structure and Meaning in
His Werther Quartet. Bloomington: Indiana University Press, 2005.
Spitzer, John. Grammar of Improvised Ornamentation: Jean Rousseaus Viol Treatise of
1687. Journal of Music Theory 33 (1989): 299332.
330
Bibliography
Spitzer, Michael. Metaphor and Musical Thought. Chicago: University of Chicago Press, 2004.
Stell, Jason T. The Flat-7th Scale Degree in Tonal Music. Ph.D. diss., Princeton University,
2006.
Straus, Joseph N. A Principle of Voice Leading in the Music of Stravinsky. Music Theory
Spectrum 4 (1982): 106124.
Tarasti, Eero. A Theory of Musical Semiotics. Bloomington: Indiana University Press, 1994.
. Signs of Music: A Guide to Musical Semiotics. Berlin: de Gruyter, 2002.
Tarasti, Eero, ed. Musical Semiotics in Growth. Bloomington: Indiana University Press,
1996.
. Musical Semiotics Revisited. Helsinki: International Semiotics Institute, 2003.
Temperley, David. Communicative Pressure and the Evolution of Musical Styles. Music
Perception 21 (2004): 313337.
Tovey, Donald Francis. A Musician Talks, vol. 2: Musical Textures. Oxford: Oxford University
Press, 1941.
. Essays and Lectures on Music. Oxford: Oxford University Press, 1949.
Urban, Greg. Ritual Wailing in Amerindian Brazil. American Anthropologist 90 (1988):
385400.
Vaccaro, Jean-Michel. Proposition dun analyse pour une polyphonie vocale dux vie sicle.
Revue de musicology 61 (1975): 3558.
van den Toorn, Pieter. The Music of Stravinsky. New Haven, CT: Yale University Press, 1983.
Wallace, Robin. Background and Expression in the First Movement of Beethovens op. 132.
Journal of Musicology 7 (1989): 320.
Walsh, Stephen. Stravinsky: A Creative Spring: Russia and France, 18821934. Berkeley: University of California Press, 1999.
Watson, Derek. Liszt. London: Dent, 1989.
White, Eric Walter. Stravinsky: The Composer and His Works. Berkeley: University of California Press, 1966.
Whittall, Arnold. Romantic Music: A Concise Survey from Schubert to Sibelius. London:
Thames and Hudson, 1987.
. Musical Composition in the Twentieth Century. Oxford: Oxford University Press,
2000.
Whitworth, Paul John. Aspects of Mahlers Musical Style: An Analytical Study. Ph.D. diss.,
Cornell University, 2002.
Williamson, John. Mahler, Hermeneutics and Analysis. Music Analysis 10 (1991): 357373.
. Music of Hans Pfitzner. Oxford: Clarendon, 1992.
. Dissonance and Middleground Prolongations in Mahlers Later Music, in Mahler
Studies, ed. Stephen E. Hefling. Cambridge: Cambridge University Press, 1997, 248270.
Wilson, Paul. Concepts of Prolongation and Bartks opus 20. Music Theory Spectrum 6
(1984): 7989.
Wintle, Christopher., 2969.
Wintle, Christopher, ed. Hans Keller (19191985): A Memorial Symposium. Music Analysis 5 (1986): 343440.
Wood, Patrick. Paganinis Classical Concerti. Unpublished paper.
Zbikowski, Lawrence M. Conceptualizing Music: Cognitive Structure, Theory, and Analysis.
New York: Oxford University Press, 2002.
. Musical Communication in the Eighteenth Century, in Communication in Eighteenth-Century Music, ed. Danuta Mirka and Kofi Agawu. Cambridge: Cambridge University Press, 2008, 283309.
Overview.
RHEL Atomic
RHEL Atomic is an optimized container operating system based on Red Hat Enterprise Linux 7 (RHEL 7). The name atomic refers to how updates are managed: RHEL Atomic does not use yum but rather OSTree, so software updates are applied atomically across the entire system. You can also roll back to the system's previous state if the new upgraded state is for some reason not desired. The intention is to reduce risk during upgrades and make the entire process seamless. When you consider that containers run at roughly 10x the density of virtual machines, upgrades and maintenance become that much more critical.
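As a hedged sketch of that update workflow on an Atomic host (the `atomic host` subcommands wrap OSTree; exact output varies by release):

```shell
# Show the current deployment and the one kept for rollback
atomic host status

# Stage the latest tree and reboot into it
atomic host upgrade
systemctl reboot

# If the new tree misbehaves, boot back into the previous deployment
atomic host rollback
systemctl reboot
```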
RHEL Atomic provides both Docker and Kubernetes. Underneath the hood it leverages SELinux (security), Cgroups (process isolation) and Namespaces (network isolation). It is an Operating System that is optimized to run containers. In addition RHEL Atomic provides enterprise features such as security, isolation, performance and management to the containerized world.
Docker
Docker is an often misused term when referring to containers. Docker is not a container; instead, it is a platform for running containers. Docker provides a packaging format, a toolset and all the plumbing needed for running containers within a single host. Docker also provides a hub for sharing Docker images.
Docker images consist of a base OS and various layers that allow one to build an application stack (the application and its dependencies). Docker images are immutable: you don't update them. Instead you create a new image by adding or changing layers. This is the future of application deployment and is not only more efficient but orders of magnitude faster than the traditional approach with virtual machines.
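As an illustrative sketch of that layering (the base image and package names here are assumptions, not taken from this article), each instruction below produces a new immutable layer; changing one means building a new image rather than mutating the old:

```dockerfile
FROM rhel7
RUN yum -y install httpd && yum clean all
COPY index.html /var/www/html/
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```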
Red Hat is providing a Docker repository of certified, tested, secure and supported Docker images, similar to how RPMs are currently provided.

Every Docker image runs in a container, and all containers on a host share the same Linux kernel, in this case provided by RHEL Atomic.
Kubernetes
Kubernetes is an orchestration engine built around Docker. It allows administrators to manage Docker containers at scale across many physical or virtual hosts. Kubernetes has three main components: the master, the node (or minion) and the pod.
Master
The Kubernetes master is the control plane and provides several services. The scheduler handles placement of pods, and a replication controller ensures pods are replicated according to policy. The master also maintains the state of the cluster and relies on etcd, a distributed key/value store, for that capability. Finally, the master provides RESTful APIs for performing operations on nodes, pods, replication controllers and services.
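Those RESTful APIs can also be exercised directly. As a hedged sketch (the host name, port 8080 and the v1beta1 prefix match the era of this article, but verify them against your own master):

```shell
# List pods and minions through the master's REST API
curl http://kubernetes.lab.com:8080/api/v1beta1/pods
curl http://kubernetes.lab.com:8080/api/v1beta1/minions
```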
Node
The Kubernetes node, or minion as it is often called, runs pods. Placement of a pod on a node is, as mentioned, determined by the scheduler on the Kubernetes master. The node runs two important services: the kubelet and the kube-proxy. The kubelet is responsible for node-level pod management. Kubernetes also allows for the creation of services that expose applications to the outside world, and the kube-proxy is responsible for managing those services within a node. Since pods are meant to be mortal, the idea behind services is to provide an abstraction that lives independently of any pod.
Pod
The Kubernetes pod is one or more tightly coupled containers that are scheduled onto the same host. Containers within pods share some resources such as storage and networking. A pod provides a single unit of horizontal scaling and replication across the Kubernetes cluster.
Now that we have a good feel for the components involved it is time to sink our teeth into Kubernetes. First I would like to recognize two colleagues Sebastian Hetze and Scott Collier. I have used their initial work around Kubernetes configurations in this article as my basis.
Configure Kubernetes Nodes in OpenStack
Kubernetes nodes or minions can be deployed and configured automatically on OpenStack. If more compute power is required for our container infrastructure we simply need to deploy additional Kubernetes nodes. OpenStack is the perfect infrastructure for running containers at scale. Below are the steps required to deploy Kubernetes nodes on OpenStack.
- Download the RHEL Atomic cloud image (QCOW2)
- Add RHEL Atomic Cloud Image to Glance in OpenStack
- Create atomic security group
#neutron security-group-create atomic --description "RHEL Atomic security group"
#neutron security-group-rule-create atomic --protocol tcp --port-range-min 10250 --port-range-max 10250 --direction ingress --remote-ip-prefix 0.0.0.0/0
#neutron security-group-rule-create atomic --protocol tcp --port-range-min 4001 --port-range-max 4001 --direction egress --remote-ip-prefix 0.0.0.0/0
#neutron security-group-rule-create atomic --protocol tcp --port-range-min 5000 --port-range-max 5000 --direction egress --remote-ip-prefix 0.0.0.0/0
#neutron security-group-rule-create --protocol icmp --direction ingress default
- Create user-data to automate deployment using cloud-init
#cloud-config
hostname: atomic01.lab.com
password: redhat
ssh_pwauth: True
chpasswd: { expire: False }
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfxcho9SipUCokS29C+AJNNLcrfpT4xsu9aErax3XSNThWbiJehUDufe86ZO4lqib4dekDEL6d7vBa3WlalzJaq/p/sy1xjYdRNE0vHQCxuWgG+NaL8KcxXDhrUa0UHMW8k8hw9xzOGaRx35LRP9+B0fq/W572XPWwEPRJo8WtSKFiqJZEBkai1IcF0CErj30d0/va9c3EYqkCEWbxuIRL+qoysH+MgFbs1jjjrvfJCLiZZo95MWp4nDrmxYNlmwMIvYrsRZfygeyYPiqVzO51gmGxcVRTbqgG0fSRVRHjUE3E4VfW9wm1qn8+rEc0iQB6ER0f6U/wtEAUmvd/g4Ef ktenzer@ktenzer.muc.csb
write_files:
  - content: |
      127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
      192.168.2.15 atomic01.lab.com atomic01
      192.168.2.16 atomic02.lab.com atomic02
      192.168.2.17 atomic03.lab.com atomic03
      192.168.2.14 kubernetes.lab.com kubernetes
    path: /etc/hosts
    permissions: '0644'
    owner: root:root
  - content: |
      ###
      # kubernetes system config
      #
      # The following values are used to configure various aspects of all
      # kubernetes services, including
      #
      #   kube-apiserver.service
      #   kube-controller-manager.service
      #   kube-scheduler.service
      #   kubelet.service
      #   kube-proxy.service
      #
      # Comma separated privileged docker containers
      KUBE_ALLOW_PRIV="--allow_privileged=false"
    path: /etc/kubernetes/config
    permissions: '0644'
    owner: root:root
  - content: |
      ###
      # Add your own!
      KUBELET_ARGS="--cluster_domain=kubernetes.local --cluster_dns=10.254.0.10"
    path: /etc/kubernetes/kubelet
    permissions: '0644'
    owner: root:root
  - content: |
      # /etc/sysconfig/docker
final_message: "Cloud-init completed and the system is up, after $UPTIME seconds"
- Boot RHEL Atomic instances using the ‘nova boot’ CLI command
#nova boot --flavor m1.small --poll --image Atomic_7_1 --key-name atomic-key --security-groups prod-base,atomic --user_data user-data-openstack --nic net-id=e3f370ab-b6ac-4788-a739-7f8de8631518 Atomic1
- Associate floating-ip to the RHEL Atomic instance
#nova floating-ip-associate Atomic1 192.168.2.15
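For completeness, the earlier "Add RHEL Atomic Cloud Image to Glance" step can be sketched with the glance v1 CLI; the image name matches the `nova boot` command above, while the local file name is an assumption:

```shell
# Upload the downloaded QCOW2 into Glance
glance image-create --name Atomic_7_1 \
  --disk-format qcow2 --container-format bare \
  --is-public True --file rhel-atomic-cloud-7.1.qcow2
```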
Of course you will want to update the cloud-init user-data as well as the CLI commands according to your environment. In this example I did not have DNS, so I updated the /etc/hosts file directly, but this step is not required. I also did not attach a Red Hat subscription, something you would probably want to do using the 'runcmd' option in cloud-init.
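A hedged sketch of that `runcmd` approach, to append to the user-data above (the credentials and pool ID are placeholders you would substitute):

```yaml
runcmd:
  - subscription-manager register --username <rhn-user> --password <rhn-password>
  - subscription-manager attach --pool=<pool id>
```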
Configure Kubernetes Master
Once Kubernetes nodes have been deployed we can configure the Kubernetes master. The Kubernetes master runs Kubernetes, Docker and ETCD services. In addition an overlay network is required. There are many options to create an overlay network, in this case we have chosen to use flannel to provide those capabilities. Finally for the base OS, a minimum install of a current RHEL-7 release is required.
- Register host with subscription-manager
#subscription-manager register
#subscription-manager attach --pool=<pool id>
#subscription-manager repos --disable=*
#subscription-manager repos --enable=rhel-7-server-rpms
#subscription-manager repos --enable=rhel-7-server-extras-rpms
#subscription-manager repos --enable=rhel-7-server-optional-rpms
#yum -y update
- Install required packages
#yum -y install docker docker-registry kubernetes flannel
- Disable firewall
#systemctl stop firewalld
#systemctl disable firewalld
- Enable required services
#for SERVICES in docker.service docker-registry etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
  systemctl enable $SERVICES
done
- Configure Docker
#vi /etc/sysconfig/docker

INSECURE_REGISTRY='--insecure-registry kubernetes.lab.com:5000'
- Configure Kubernetes
#vi /etc/kubernetes/apiserver and set

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_MASTER="--master="

#vi /etc/kubernetes/config and set

KUBE_ETCD_SERVERS="--etcd_servers="

#vi /etc/kubernetes/controller-manager and set

KUBELET_ADDRESSES="--machines=atomic01.lab.com,atomic02.lab.com,atomic03.lab.com"
- Configure Flannel
#vi /etc/sysconfig/flanneld and set

FLANNEL_ETCD=""
FLANNEL_ETCD_KEY="/flannel/network"
FLANNEL_OPTIONS="eth0"
- Start ETCD
#systemctl start etcd
- Configure Flannel overlay network
#vi /root/flannel-config.json

{
  "Network": "10.100.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.100.50.0",
  "SubnetMax": "10.100.199.0",
  "Backend": {
    "Type": "vxlan",
    "VNI": 1
  }
}
curl -L -XPUT --data-urlencode value@/root/flannel-config.json
- Load Docker Images
#systemctl start docker
#systemctl start docker-registry
#for IMAGES in rhel6 rhel7 fedora/apache; do
  docker pull $IMAGES
  docker tag $IMAGES kubernetes.lab.com:5000/$IMAGES
  docker push kubernetes.lab.com:5000/$IMAGES
done
- Reboot host
systemctl reboot
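One optional addition to the procedure above: before the flannel JSON is loaded into etcd, a quick local sanity check can catch typos. This helper is hypothetical, not part of the original setup, and assumes a python3 interpreter is available (adjust to whatever Python your master ships):

```shell
# Print the overlay network a flannel config file defines,
# failing if the file is not valid JSON.
check_flannel_cfg() {
  python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["Network"])' "$1"
}
```

For example, `check_flannel_cfg /root/flannel-config.json` should print `10.100.0.0/16` for the config created earlier.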
Container Administration using Kubernetes
Kubernetes provides a CLI and a RESTful API for management. Currently there is no GUI. In a future article I will go into detail about using the API in order to build your own UI or integrate Kubernetes in existing dashboards. For the purpose of this article we will focus on kubectl, the Kubernetes CLI.
Deploy an Application
In this example we will deploy an Apache web server pod. Before deploying a pod we must ensure that Kubernetes nodes (minions) are ready.
[root@kubernetes ~]# kubectl get minions
NAME               LABELS   STATUS
atomic01.lab.com   <none>   Ready
atomic02.lab.com   <none>   Ready
atomic03.lab.com   <none>   Ready
Next we need to create a JSON file for deploying a pod. The kubectl command uses JSON as input to make configuration updates and changes.
[root@kube-master ~]# vi apache-pod.json
{
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "containers": [
        {
          "image": "fedora/apache",
          "name": "my-fedora-apache",
          "ports": [
            {
              "containerPort": 80,
              "hostPort": 80,
              "protocol": "TCP"
            }
          ]
        }
      ],
      "id": "apache",
      "restartPolicy": {
        "always": {}
      },
      "version": "v1beta1",
      "volumes": null
    }
  },
  "id": "apache",
  "kind": "Pod",
  "labels": {
    "name": "apache"
  },
  "namespace": "default"
}
[root@kube-master ~]# kubectl create -f apache-pod.json
We can now get the status of our newly created Apache pod.
[root@kubernetes ~]# kubectl get pods
POD      IP             CONTAINER(S)       IMAGE(S)        HOST                LABELS        STATUS
apache   10.100.119.6   my-fedora-apache   fedora/apache   atomic02.lab.com/   name=apache   Running
Notice that the pod is running on atomic02.lab.com. The Kubernetes scheduler takes care of scheduling the pod on a node.
Create Services
In Kubernetes, services are used to provide external access to an application running in a pod. The idea is that since pods are mortal and transient in nature, a service should provide an abstraction so applications do not need to understand the underlying pod or container infrastructure. Services rely on the kube-proxy, so an application can be reached through any Kubernetes node configured as a public IP in the service itself. In the example below we are creating a service that will be available from all three of the Kubernetes nodes: atomic01.lab.com, atomic02.lab.com and atomic03.lab.com. The pod is running on atomic02.lab.com. Similar to pods, services also require a JSON file as input to kubectl.
[root@kubernetes ~]# vi apache-service.json
{
  "apiVersion": "v1beta1",
  "containerPort": 80,
  "id": "apache-frontend",
  "kind": "Service",
  "labels": {
    "name": "apache-frontend"
  },
  "port": 80,
  "publicIPs": [
    "192.168.2.15", "192.168.2.16", "192.168.2.17"
  ],
  "selector": {
    "name": "apache"
  }
}
[root@kube-master ~]# kubectl create -f apache-service.json
We can now get the status of our newly created apache-frontend service.
[root@kubernetes ~]# kubectl get services
NAME              LABELS                 SELECTOR      IP              PORT
apache-frontend   name=apache-frontend   name=apache   10.254.94.252   80
As one would expect, we can access our Apache pod externally through any of our three Kubernetes nodes.
[root@kubernetes ~]# curl Apache
Creating Replication Controllers
So far we have seen how to create a pod containing one or more containers and how to build a service to expose the application externally. If we want to scale our application horizontally, however, we need to create a replication controller. In Kubernetes, a replication controller combines a pod template with a replication policy: Kubernetes will create and maintain multiple copies of the pod across the cluster, and the pod is our base unit of scaling. In the example below we will create a replication controller for our Apache web server that will ensure three replicas. The same service we already created can be used, but this time an Apache pod will be running on each Kubernetes node. In our previous example we only had one Apache web server, on atomic02.lab.com, and though we could access it through any node, that access went through the kube-proxy.
[root@kubernetes ~]# vi apache-replication-controller.json
{
  "apiVersion": "v1beta1",
  "desiredState": {
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "containers": [
            {
              "image": "fedora/apache",
              "name": "my-fedora-apache",
              "ports": [
                {
                  "containerPort": 80,
                  "hostPort": 80,
                  "protocol": "TCP"
                }
              ]
            }
          ],
          "id": "apache",
          "restartPolicy": {
            "always": {}
          },
          "version": "v1beta1",
          "volumes": null
        }
      },
      "labels": {
        "name": "apache"
      }
    },
    "replicaSelector": {
      "name": "apache"
    },
    "replicas": 3
  },
  "id": "apache-controller",
  "kind": "ReplicationController",
  "labels": {
    "name": "apache"
  }
}
[root@kube-master ~]# kubectl create -f apache-replication-controller.json
We can now get the status of our newly created Apache replication controller.
[root@kubernetes ~]# kubectl get replicationcontrollers
CONTROLLER          CONTAINER(S)       IMAGE(S)        SELECTOR      REPLICAS
apache-controller   my-fedora-apache   fedora/apache   name=apache   3
We can also see that the replication controller created three pods as expected.
[root@kubernetes ~]# kubectl get pods
POD                                    IP             CONTAINER(S)       IMAGE(S)        HOST                   LABELS        STATUS
fb9936f3-e21d-11e4-ad6e-000c295b1de9   10.100.119.6   my-fedora-apache   fedora/apache   atomic03.bigred.com/   name=apache   Running
fb9acf1a-e21d-11e4-ad6e-000c295b1de9   10.100.65.6    my-fedora-apache   fedora/apache   atomic02.bigred.com/   name=apache   Running
fb97a111-e21d-11e4-ad6e-000c295b1de9   10.100.147.6   my-fedora-apache   fedora/apache   atomic01.bigred.com/   name=apache   Running
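Scaling further is just a matter of changing the replica count. As a hedged sketch (kubectl releases from this era call the subcommand `resize`; it was later renamed `scale`, so check your version):

```shell
# Grow the Apache replication controller from 3 to 5 pods
kubectl resize --replicas=5 replicationcontrollers apache-controller
```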
Summary
In this article we discussed the different components required to run application containers at scale: RHEL Atomic, Docker and Kubernetes. We also saw how to deploy Kubernetes RHEL Atomic nodes on OpenStack. Having scalable application containers means little if the infrastructure underneath cannot scale, and that is why OpenStack should be key to any enterprise container strategy. Finally, we went into a lot of detail on how to configure Kubernetes pods, services and replication controllers. Running application containers at scale in the enterprise is a lot more than just Docker. Only very recently have these best-of-breed open source technologies come together and allowed such wonderful possibilities. This is a very exciting time: containers will change everything about how we deploy, run and manage our applications. Hopefully you found this article interesting and useful. If you have any feedback I would really like to hear it, so please share.
Happy Containerizing!
(c) 2015 Keith Tenzer
Hi Keith,
great article, I have a question if you don’t mind: in terms of performance shouldn’t the RHEL atomic host run on baremetal (maybe as compute node), instead of a virtualized nova server? Because you’ll have the docker containers running inside a VM(RHEL Atomic Host) if I understand correctly, doesn’t this present some overhead issues?
Thanks
Hi,
Yes, certainly, if you need and want the best performance, bare metal will definitely help as you bypass virtualization overhead (15-20%). However there are reasons why you would want to have virtualization + containers. For example, live migration is one use case. Containers can't be migrated but an Atomic Host could be. Another use case is flexibility and auto-scaling the environment through Heat and OpenStack. If you need more compute power for the container farm and this needs to be dynamic, then Heat could auto-provision more Atomic hosts. Let me know if you want to have a more detailed discussion about your use cases.
Hi,
thanks for your reply. I would consider to use it for some telco deployments where live migration is not important but autoscaling and performance are, at least in the network stack, I could use SR-IOV for atomic hosts instead of OVS. Other idea could be to provision the baremetal atomic host using heat+Ironic and then run containers on it.
Thanks,
Pedro Sousa | https://keithtenzer.com/2015/04/15/containers-at-scale-with-kubernetes-on-openstack/ | CC-MAIN-2019-30 | en | refinedweb |
Flutter plugin for interacting with iOS StoreKit and Android Billing Library.
Work in progress.
The main difference is that instead of providing a unified interface for in-app purchases on iOS and Android, this plugin exposes two separate APIs.
There are several benefits to this approach:
- `StoreKit` for iOS follows the native interface in 99% of cases.
- `BillingClient` for Android is very similar as well, but also simplifies some parts of the native protocol (mostly by replacing listeners with Dart `Future`s).
All Dart code is thoroughly documented with information taken directly from Apple Developers website (for StoreKit) and Android Developers website (for BillingClient).
Note that future versions may introduce unified interfaces for specific use cases, for instance, handling of in-app subscriptions.
Plugin currently implements all native APIs except for downloads. If you are looking for this functionality consider submitting a pull request or leaving your 👍 here.
Interacting with StoreKit in Flutter is almost 100% identical to the native Objective-C interface.
Make sure to check out the complete example of interacting with StoreKit in the example app in this repo. Note that in-app purchases are a complex topic and it would be really hard to cover everything in a simple example app like this, so it is highly recommended to read the official documentation on setting up in-app purchases for each platform.
final productIds = ['my.product1', 'my.product2'];
final SKProductsResponse response = await StoreKit.instance.products(productIds);
print(response.products); // list of valid [SKProduct]s
print(response.invalidProductIdentifiers); // list of invalid IDs
// Get receipt path on device
final Uri receiptUrl = await StoreKit.instance.appStoreReceiptUrl;
// Request a refresh of receipt
await StoreKit.instance.refreshReceipt();
Payments and transactions are handled within `SKPaymentQueue`.

It is important to set an observer on this queue as early as possible after your app launches. The observer is responsible for processing all events triggered by the queue. Create an observer by extending the following class:
abstract class SKPaymentTransactionObserver {
  void didUpdateTransactions(
      SKPaymentQueue queue, List<SKPaymentTransaction> transactions);
  void didRemoveTransactions(
      SKPaymentQueue queue, List<SKPaymentTransaction> transactions) {}
  void failedToRestoreCompletedTransactions(SKPaymentQueue queue, SKError error) {}
  void didRestoreCompletedTransactions(SKPaymentQueue queue) {}
  void didUpdateDownloads(SKPaymentQueue queue, List<SKDownload> downloads) {}
  void didReceiveStorePayment(
      SKPaymentQueue queue, SKPayment payment, SKProduct product) {}
}
See API documentation for more details on these methods.
Make sure to implement didUpdateTransactions and process all transactions according to your needs. A typical implementation looks like this:
void didUpdateTransactions( SKPaymentQueue queue, List<SKPaymentTransaction> transactions) async { for (final tx in transactions) { switch (tx.transactionState) { case SKPaymentTransactionState.purchased: // Validate transaction, unlock content, etc... // Make sure to call `finishTransaction` when done, otherwise // this transaction will be redelivered by the queue on next application // launch. await queue.finishTransaction(tx); break; case SKPaymentTransactionState.failed: // ... await queue.finishTransaction(tx); break; // ... } } }
Before attempting to add a payment always check if the user can actually make payments:
final bool canPay = await StoreKit.instance.paymentQueue.canMakePayments();
When that's verified and you've set an observer on the queue you can add payments. For instance:
final SKProductsResponse response = await StoreKit.instance.products(['my.inapp.subscription']); final SKProduct product = response.products.single; final SKPayment payment = SKPayment.withProduct(product); await StoreKit.instance.paymentQueue.addPayment(payment); // ... // Use observer to track progress of this payment...
await StoreKit.instance.paymentQueue.restoreCompletedTransactions(); /// Optionally implement `didRestoreCompletedTransactions` and /// `failedToRestoreCompletedTransactions` on observer to track /// result of this operation.
This plugin wraps the official Google Play Billing Library. Use the BillingClient class as the main entry point. The constructor of BillingClient expects an instance of the PurchasesUpdatedListener interface, which looks like this:
/// Listener interface for purchase updates which happen when, for example, /// the user buys something within the app or by initiating a purchase from /// Google Play Store. abstract class PurchasesUpdatedListener { /// Implement this method to get notifications for purchases updates. /// /// Both purchases initiated by your app and the ones initiated by Play Store /// will be reported here. void onPurchasesUpdated(int responseCode, List<Purchase> purchases); }
BillingClient
To begin working with the Play Billing service, always start by establishing a connection using the startConnection method:
import 'package:iap/iap.dart'; bool _connected = false; void main() async { final client = BillingClient(yourPurchaseListener); await client.startConnection(onDisconnect: handleDisconnect); _connected = true; // ...fetch SKUDetails, launch billing flows, query purchase history, etc await client.endConnection(); // Always call [endConnection] when work with this client is done. } void handleDisconnect() { // Client disconnected. Make sure to call [startConnection] next time before invoking // any other method of the client. _connected = false; }
Initial release provides implementations of both iOS StoreKit library and Android Play Billing library.
example/README.md
Demonstrates how to use the iap plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies: iap: ^0.1.0
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:iap/iap.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below.
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Fix lib/src/billing_client.dart. (-6.31 points)
Analysis of lib/src/billing_client.dart reported 13 hints, including:
line 51 col 10: The class 'Future' was not exported from 'dart:core' until version 2.1, but this code is required to be able to run on earlier versions.
line 104 col 3: The class 'Future' was not exported from 'dart:core' until version 2.1, but this code is required to be able to run on earlier versions.
line 121 col 3: The class 'Future' was not exported from 'dart:core' until version 2.1, but this code is required to be able to run on earlier versions.
line 136 col 3: The class 'Future' was not exported from 'dart:core' until version 2.1, but this code is required to be able to run on earlier versions.
line 152 col 3: The class 'Future' was not exported from 'dart:core' until version 2.1, but this code is required to be able to run on earlier versions.
Consider the case of a long-running business process or workflow, comprised of multiple execution sequences, that lasts many days or even weeks.
I use the term workflow to denote a business workflow in general, not one that is necessarily supported by or related to the Windows Workflow Foundation.
Such long-running processes may involve clients (or even end users) that connect to the application, perform a finite amount of work, transition the workflow to a new state, and then disconnect for an indeterminate amount of time before connecting again and continuing to execute the workflow. The clients may at any point also decide to terminate the workflow and start a new one, or the backend service supporting the workflow may end it. Obviously, there is little point in keeping proxies and services in memory waiting for the clients to call. Such an approach will not robustly withstand the test of time; at the very least, timeout issues will inevitably terminate the connection, and there is no easy way to allow machines on both sides to reboot or log off. The need to allow the clients and the services to have independent lifecycles is an important one in a long-running business process, because without it there is no way to enable the clients to connect, perform some work against the workflow, and disconnect. On the host side, over time you may even want to redirect calls between machines.
The solution for long-running services is to avoid keeping the service state in memory, and to handle each call on a new instance with its own temporary in-memory state. For every operation, the service should retrieve its state from some durable storage (such as a file or a database), perform the requested unit of work for that operation, and then save the state back to the durable storage at the end of the call. Services that follow this model are called durable services. Since the durable storage can be shared across machines, using durable services also gives you the ability to route calls to different machines at different times, be it for scalability, redundancy, or maintenance purposes.
This approach to state management for durable services is very much like the one proposed previously for per-call services, which proactively manage their state. Using per-call services makes additional sense because there is no point in keeping the instance around between calls if its state is coming from durable storage. The only distinguishing aspect of a durable service compared with a classic per-call service is that the state repository needs to be durable.
While in theory nothing prevents you from basing a durable service on a sessionful or even a singleton service and having that service manage its state in and out of the durable storage, in practice this would be counterproductive. In the case of a sessionful service, you would have to keep the proxy open on the client side for long periods of time, thus excluding clients that terminate their connections and then reconnect. In the case of a singleton service, the very notion of a singleton suggests an infinite lifetime with clients that come and go, so there is no need for durability. Consequently, the per-call instantiation mode offers the best choice all around. Note that with durable per-call services, because the primary concern is long-running workflows rather than scalability or resource management, supporting IDisposable is optional. It is also worth pointing out that the presence of a transport session is optional for a durable service, since there is no need to maintain a logical session between the client and the service. The transport session will be a facet of the transport channel used and will not be used to dictate the lifetime of the instance.
When the long-running workflow starts, the service must first write its state to the durable storage, so that subsequent operations will find the state in the storage. When the workflow ends, the service must remove its state from the storage; otherwise, over time, the storage will become bloated with instance state not required by anyone.
Since a new service instance is created for every operation, an instance must have a way of looking up and loading its state from the durable storage. The client must therefore provide some state identifier for the instance. That identifier is called the instance ID. To support clients that connect to the service only occasionally, and client applications or even machines that recycle between calls, as long as the workflow is in progress the client will typically save the instance ID in some durable storage on the client side (such as a file) and provide that ID for every call. When the workflow ends, the client can discard that ID. For an instance ID, it is important to select a type that is serializable and equatable. Having a serializable ID is important because the service will need to save the ID along with its state into the durable storage. Having an equatable ID is required in order to allow the service to obtain the state from the storage. All the .NET primitives (such as int, string, and Guid) qualify as instance IDs.
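Beyond the primitives, any custom type works as an instance ID as long as it meets both requirements. As a sketch (the OrderId type, its name, and its fields are illustrative assumptions, not from the text), such a type might look like:

```csharp
using System;

//Hypothetical custom instance ID: serializable (so the service can save it
//alongside its state) and equatable (so the service can look the state up)
[Serializable]
public class OrderId : IEquatable<OrderId>
{
   public readonly string Customer;
   public readonly int Number;

   public OrderId(string customer,int number)
   {
      Customer = customer;
      Number = number;
   }
   public bool Equals(OrderId other)
   {
      return other != null && Customer == other.Customer && Number == other.Number;
   }
   public override bool Equals(object obj)
   {
      return Equals(obj as OrderId);
   }
   public override int GetHashCode()
   {
      return Customer.GetHashCode() ^ Number;
   }
}
```

Overriding GetHashCode consistently with Equals matters because the store is typically a dictionary keyed by the ID.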
The durable storage is usually some kind of dictionary that pairs the instance ID with the instance state. The service typically will use a single ID to represent all its state, although more complex relationships involving multiple keys and even hierarchies of keys are possible. For simplicity's sake, I will limit the discussion here to a single ID. In addition, the service often uses a dedicated helper class or a structure to aggregate all its member variables, and stores that type in and retrieves it from the durable storage. Finally, access to the durable storage itself must be thread-safe and synchronized. This is required because multiple instances may try to access and modify the store concurrently.
To help you implement and support simple durable services, I wrote the FileInstanceStore<ID,T> class:
public interface IInstanceStore<ID,T> where ID : IEquatable<ID> { void RemoveInstance(ID instanceId); bool ContainsInstance(ID instanceId); T this[ID instanceId] {get;set;} } public class FileInstanceStore<ID,T> : IInstanceStore<ID,T> where ID : IEquatable<ID> { protected readonly string Filename; public FileInstanceStore(string fileName); //Rest of the implementation }
FileInstanceStore<ID,T> is a general-purpose file-based instance store. FileInstanceStore<ID,T> takes two type parameters: the ID type parameter is constrained to be an equatable type, and the T type parameter represents the instance state. FileInstanceStore<ID,T> verifies at runtime in a static constructor that both T and ID are serializable types. FileInstanceStore<ID,T> provides a simple indexer allowing you to read and write the instance state to the file. You can also remove an instance state from the file, and check whether the file contains the instance state. These operations are defined in the IInstanceStore<ID,T> interface. The implementation of FileInstanceStore<ID,T> encapsulates a dictionary, and on every access it serializes and deserializes the dictionary to and from the file. When FileInstanceStore<ID,T> is used for the first time, if the file is empty FileInstanceStore<ID,T> will initialize it with an empty dictionary.
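Since the text shows only the declaration, here is a minimal sketch of how such a store could be implemented. This is my reconstruction, not the book's actual code: for portability it uses DataContractSerializer rather than the binary formatter, and it omits the static-constructor serializability check described above; synchronization is left to the callers, as in the calculator examples.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

public interface IInstanceStore<ID,T> where ID : IEquatable<ID>
{
   void RemoveInstance(ID instanceId);
   bool ContainsInstance(ID instanceId);
   T this[ID instanceId] {get;set;}
}

//Sketch of one plausible implementation of the file-backed store
public class FileInstanceStore<ID,T> : IInstanceStore<ID,T> where ID : IEquatable<ID>
{
   protected readonly string Filename;
   readonly DataContractSerializer m_Serializer =
                            new DataContractSerializer(typeof(Dictionary<ID,T>));

   public FileInstanceStore(string fileName)
   {
      Filename = fileName;
      //If the file is missing or empty, initialize it with an empty dictionary
      if(!File.Exists(fileName) || new FileInfo(fileName).Length == 0)
      {
         Save(new Dictionary<ID,T>());
      }
   }
   public T this[ID instanceId]
   {
      get
      {
         return Load()[instanceId];
      }
      set
      {
         Dictionary<ID,T> instances = Load();
         instances[instanceId] = value;
         Save(instances);
      }
   }
   public bool ContainsInstance(ID instanceId)
   {
      return Load().ContainsKey(instanceId);
   }
   public void RemoveInstance(ID instanceId)
   {
      Dictionary<ID,T> instances = Load();
      instances.Remove(instanceId);
      Save(instances);
   }
   //The whole dictionary is deserialized from the file on every access...
   Dictionary<ID,T> Load()
   {
      using(Stream stream = new FileStream(Filename,FileMode.Open,FileAccess.Read))
      {
         return (Dictionary<ID,T>)m_Serializer.ReadObject(stream);
      }
   }
   //...and serialized back to the file after every mutation
   void Save(Dictionary<ID,T> instances)
   {
      using(Stream stream = new FileStream(Filename,FileMode.Create,FileAccess.Write))
      {
         m_Serializer.WriteObject(stream,instances);
      }
   }
}
```

Rewriting the whole dictionary per access is obviously inefficient, but it keeps the store trivially consistent, which is the point of the demo class.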
The simplest way a client can provide the instance ID to the service is as an explicit parameter for every operation designed to access the state. Example 4-9 demonstrates such a client and service, along with the supporting type definitions.
Example 4-9. Passing explicit instance IDs
[DataContract] class SomeKey : IEquatable<SomeKey> {...} [ServiceContract] interface IMyContract { [OperationContract] void MyMethod(SomeKey instanceId); } //Helper type used by the service to capture its state [Serializable] struct MyState {...} [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] class MyService : IMyContract { public void MyMethod(SomeKey instanceId) { GetState(instanceId); DoWork( ); SaveState(instanceId); } void DoWork( ) {...} //Get and set MyState from durable storage void GetState(SomeKey instanceId) {...} void SaveState(SomeKey instanceId) {...} }
To make Example 4-9 more concrete, consider Example 4-10, which supports a pocket calculator with durable memory stored in a file.
Example 4-10. Calculator with explicit instance ID
[ServiceContract]
interface ICalculator
{
   [OperationContract]
   double Add(double number1,double number2);

   /* More arithmetic operations */

   //Memory management operations
   [OperationContract]
   void MemoryStore(string instanceId,double number);

   [OperationContract]
   void MemoryClear(string instanceId);
}
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyCalculator : ICalculator
{
   static IInstanceStore<string,double> Memory =
          new FileInstanceStore<string,double>(Settings.Default.MemoryFileName);

   public double Add(double number1,double number2)
   {
      return number1 + number2;
   }
   public void MemoryStore(string instanceId,double number)
   {
      lock(typeof(MyCalculator))
      {
         Memory[instanceId] = number;
      }
   }
   public void MemoryClear(string instanceId)
   {
      lock(typeof(MyCalculator))
      {
         Memory.RemoveInstance(instanceId);
      }
   }
   //Rest of the implementation
}
In Example 4-10, the filename is available in the properties of the project in the Settings class. All instances of the calculator use the same static memory, in the form of a FileInstanceStore<string,double>. The calculator synchronizes access to the memory in every operation across all instances by locking on the service type. Clearing the memory signals to the calculator the end of the workflow, so it purges its state from the storage.
Instead of explicitly passing the instance ID, the client can provide the instance ID in the message headers. Using message headers as a technique for passing out-of-band parameters used for custom contexts is described in detail in Appendix B. In this case, the client can use my HeaderClientBase<T,H> proxy class, and the service can read the ID in the relevant operations using my GenericContext<H> helper class. The service can use GenericContext<H> as-is or wrap it in a dedicated context.
The general pattern for this technique is shown in Example 4-11.
Example 4-11. Passing instance IDs in message headers
[ServiceContract]
interface IMyContract
{
   [OperationContract]
   void MyMethod( );
}
//Client-side
class MyContractClient : HeaderClientBase<IMyContract,SomeKey>,IMyContract
{
   public MyContractClient(SomeKey instanceId) : base(instanceId)
   {}
   public MyContractClient(SomeKey instanceId,string endpointName) : base(instanceId,endpointName)
   {}
   //More constructors
   public void MyMethod( )
   {
      Channel.MyMethod( );
   }
}
//Service-side
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyService : IMyContract
{
   public void MyMethod( )
   {
      SomeKey instanceId = GenericContext<SomeKey>.Current.Value;
      ...
   }
   //Rest same as Example 4-9
}
Again, to make Example 4-11 less abstract, Example 4-12 shows the calculator using the message headers technique.
Example 4-12. Calculator with instance ID in headers
[ServiceContract]
interface ICalculator
{
   [OperationContract]
   double Add(double number1,double number2);

   /* More arithmetic operations */

   //Memory management operations
   [OperationContract]
   void MemoryStore(double number);

   [OperationContract]
   void MemoryClear( );
}
//Client-side
class MyCalculatorClient : HeaderClientBase<ICalculator,string>,ICalculator
{
   public MyCalculatorClient(string instanceId) : base(instanceId)
   {}
   public MyCalculatorClient(string instanceId,string endpointName) : base(instanceId,endpointName)
   {}
   //More constructors
   public double Add(double number1,double number2)
   {
      return Channel.Add(number1,number2);
   }
   public void MemoryStore(double number)
   {
      Channel.MemoryStore(number);
   }
   //Rest of the implementation
}
//Service-side
//If using GenericContext<T> is too raw, can encapsulate:
class CalculatorContext
{
   public static string Id
   {
      get
      {
         return GenericContext<string>.Current.Value ?? String.Empty;
      }
   }
}
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyCalculator : ICalculator
{
   static IInstanceStore<string,double> Memory =
          new FileInstanceStore<string,double>(Settings.Default.MemoryFileName);

   public double Add(double number1,double number2)
   {
      return number1 + number2;
   }
   public void MemoryStore(double number)
   {
      lock(typeof(MyCalculator))
      {
         Memory[CalculatorContext.Id] = number;
      }
   }
   public void MemoryClear( )
   {
      lock(typeof(MyCalculator))
      {
         Memory.RemoveInstance(CalculatorContext.Id);
      }
   }
   //Rest of the implementation
}
WCF provides dedicated bindings for passing custom context parameters. These bindings, called context bindings, are also explained in Appendix B. Clients can use my ContextClientBase<T> class to pass the instance ID over the context binding protocol. Since the context bindings require a key and a value for every contextual parameter, the clients will need to provide both to the proxy. Using the same IMyContract as in Example 4-11, such a proxy will look like this:
class MyContractClient : ContextClientBase<IMyContract>,IMyContract
{
   public MyContractClient(string key,string instanceId) : base(key,instanceId)
   {}
   public MyContractClient(string key,string instanceId,string endpointName) : base(key,instanceId,endpointName)
   {}
   //More constructors
   public void MyMethod( )
   {
      Channel.MyMethod( );
   }
}
Note that the context protocol only supports strings for keys and values. Because the value of the key must be known to the service in advance, the client might as well hardcode the same key in the proxy itself. The service can then retrieve the instance ID using my ContextManager helper class (described in Appendix B). As with message headers, the service can also encapsulate the interaction with ContextManager in a dedicated context class.
Example 4-13 shows the general pattern for passing an instance ID over the context bindings. Note that the proxy hardcodes the key for the instance ID, and that the same ID is known to the service.
Example 4-13. Passing the instance ID over a context binding
//Client-side
class MyContractClient : ContextClientBase<IMyContract>,IMyContract
{
   public MyContractClient(string instanceId) : base("MyKey",instanceId)
   {}
   public MyContractClient(string instanceId,string endpointName) : base("MyKey",instanceId,endpointName)
   {}
   //More constructors
   public void MyMethod( )
   {
      Channel.MyMethod( );
   }
}
//Service-side
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyService : IMyContract
{
   public void MyMethod( )
   {
      string instanceId = ContextManager.GetContext("MyKey");
      GetState(instanceId);
      DoWork( );
      SaveState(instanceId);
   }
   void DoWork( )
   {...}
   //Get and set state from durable storage
   void GetState(string instanceId)
   {...}
   void SaveState(string instanceId)
   {...}
}
Example 4-14 shows the matching concrete calculator example.
Example 4-14. Calculator with instance ID over context binding
//Client-side
class MyCalculatorClient : ContextClientBase<ICalculator>,ICalculator
{
   public MyCalculatorClient(string instanceId) : base("CalculatorId",instanceId)
   {}
   public MyCalculatorClient(string instanceId,string endpointName) : base("CalculatorId",instanceId,endpointName)
   {}
   //More constructors
   public double Add(double number1,double number2)
   {
      return Channel.Add(number1,number2);
   }
   public void MemoryStore(double number)
   {
      Channel.MemoryStore(number);
   }
   //Rest of the implementation
}
//Service-side
class CalculatorContext
{
   public static string Id
   {
      get
      {
         return ContextManager.GetContext("CalculatorId") ?? String.Empty;
      }
   }
}
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyCalculator : ICalculator
{
   //Same as Example 4-12
}
The need to hardcode and know in advance the key used for the instance ID is a liability. The context bindings were designed with durable services in mind, so every context binding always contains an autogenerated instance ID in the form of a Guid (in string format), accessible via the reserved key of instanceId. The client and the service will see the same value for the instance ID. The value is initialized once the first call on the proxy returns, after the binding has had the chance to correlate it between the client and the service. Like any other parameter passed over a context binding, the value of the instance ID is immutable throughout the life of the proxy.
To streamline interacting with the standard instance ID, I extended ContextManager with ID management methods, properties, and proxy extension methods, as shown in Example 4-15.
Example 4-15. Standard instance ID management with ContextManager
public static class ContextManager
{
   public const string InstanceIdKey = "instanceId";

   public static Guid InstanceId
   {
      get
      {
         string id = GetContext(InstanceIdKey) ?? Guid.Empty.ToString( );
         return new Guid(id);
      }
   }
   public static Guid GetInstanceId(IClientChannel innerChannel)
   {
      try
      {
         string instanceId =
            innerChannel.GetProperty<IContextManager>( ).GetContext( )[InstanceIdKey];
         return new Guid(instanceId);
      }
      catch(KeyNotFoundException)
      {
         return Guid.Empty;
      }
   }
   public static void SetInstanceId(IClientChannel innerChannel,Guid instanceId)
   {
      SetContext(innerChannel,InstanceIdKey,instanceId.ToString( ));
   }
   public static void SaveInstanceId(Guid instanceId,string fileName)
   {
      using(Stream stream = new FileStream(fileName,FileMode.OpenOrCreate,FileAccess.Write))
      {
         IFormatter formatter = new BinaryFormatter( );
         formatter.Serialize(stream,instanceId);
      }
   }
   public static Guid LoadInstanceId(string fileName)
   {
      try
      {
         using(Stream stream = new FileStream(fileName,FileMode.Open,FileAccess.Read))
         {
            IFormatter formatter = new BinaryFormatter( );
            return (Guid)formatter.Deserialize(stream);
         }
      }
      catch
      {
         return Guid.Empty;
      }
   }
   //More members
}
ContextManager offers the GetInstanceId( ) and SetInstanceId( ) methods to enable the client to read an instance ID from and write it to the context. The service uses the InstanceId read-only property to obtain the ID. ContextManager adds type safety by treating the instance ID as a Guid and not as a string. It also adds error handling.

Finally, ContextManager provides the LoadInstanceId( ) and SaveInstanceId( ) methods to read the instance ID from and write it to a file. These methods are handy on the client side to store the ID between client application sessions against the service.
While the client can use ContextClientBase<T> (as in Example 4-13) to pass the standard ID, it is better to tighten it and provide built-in support for the standard instance ID, as shown in Example 4-16.
Example 4-16. Extending ContextClientBase<T> to support standard IDs
public abstract class ContextClientBase<T> : ClientBase<T> where T : class
{
   public Guid InstanceId
   {
      get
      {
         return ContextManager.GetInstanceId(InnerChannel);
      }
   }
   public ContextClientBase(Guid instanceId) : this(ContextManager.InstanceIdKey,instanceId.ToString( ))
   {}
   public ContextClientBase(Guid instanceId,string endpointName) : this(ContextManager.InstanceIdKey,instanceId.ToString( ),endpointName)
   {}
   //More constructors
}
Example 4-17 shows the calculator client and service using the standard ID.
Example 4-17. Calculator using standard ID
//Client-side
class MyCalculatorClient : ContextClientBase<ICalculator>,ICalculator
{
   public MyCalculatorClient( )
   {}
   public MyCalculatorClient(Guid instanceId) : base(instanceId)
   {}
   public MyCalculatorClient(Guid instanceId,string endpointName) : base(instanceId,endpointName)
   {}
   //Rest same as Example 4-14
}
//Service-side
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyCalculator : ICalculator
{
   static IInstanceStore<Guid,double> Memory =
          new FileInstanceStore<Guid,double>(Settings.Default.MemoryFileName);

   public double Add(double number1,double number2)
   {
      return number1 + number2;
   }
   public void MemoryStore(double number)
   {
      lock(typeof(MyCalculator))
      {
         Memory[ContextManager.InstanceId] = number;
      }
   }
   public void MemoryClear( )
   {
      lock(typeof(MyCalculator))
      {
         Memory.RemoveInstance(ContextManager.InstanceId);
      }
   }
   //Rest of the implementation
}
All the techniques shown so far for durable services require a nontrivial amount of work by the service—in particular, providing a durable state storage and explicitly managing the instance state against it in every operation. Given the repetitive nature of this work, WCF can automate it for you, and serialize and deserialize the service state on every operation from an indicated state store, using the standard instance ID.
When you let WCF manage your instance state, it follows these rules:
If the client does not provide an ID, WCF will create a new service instance by exercising its constructor. After the call, WCF will serialize the instance to the state store.
If the client provides an ID to the proxy and the store already contains state matching that ID, WCF will not call the instance constructor. Instead, the call will be serviced on a new instance deserialized out of the state store.
When the client provides a valid ID, for every operation WCF will deserialize an instance out of the store, call the operation, and serialize the new state modified by the operation back to the store.
If the client provides an ID not found in the state store, WCF will throw an exception.
To enable this automatic durable behavior, WCF provides the DurableService behavior attribute, defined as:
public sealed class DurableServiceAttribute : Attribute,IServiceBehavior,... {...}
You apply this attribute directly on the service class. Most importantly, the service class must be marked either as serializable or as a data contract with the DataMember attribute on all members requiring durable state management:
[Serializable][DurableService] class MyService : IMyContract { /* Serializable member variables only */ public void MyMethod( ) { //Do work } }
The instance can now manage its state in member variables, just as if it were a regular instance, trusting WCF to manage those members for it. If the service is not marked as serializable (or a data contract), the first call to it will fail once WCF tries to serialize it to the store. Any service relying on automatic durable state management must be configured as per-session, yet it will always behave as a per-call service (WCF uses context deactivation after every call). In addition, the service must use one of the context bindings with every endpoint to enable the standard instance ID, and the contract must allow or require a transport session, but cannot disallow it. These two constraints are verified at service load time.
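To make the endpoint constraint concrete, a host config satisfying the context-binding requirement might look like the following sketch; the service name, address, and contract name are illustrative placeholders, and the behavior name assumes the persistence-provider behavior shown later in this section:

```xml
<system.serviceModel>
   <services>
      <service name = "MyService" behaviorConfiguration = "DurableService">
         <endpoint
            address  = "http://localhost:8000/MyService"
            binding  = "wsHttpContextBinding"
            contract = "IMyContract"
         />
      </service>
   </services>
</system.serviceModel>
```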
A service can optionally use the DurableOperation behavior attribute to instruct WCF to purge its state from the store at the end of the workflow:
[AttributeUsage(AttributeTargets.Method)] public sealed class DurableOperationAttribute : Attribute,... { public bool CanCreateInstance {get;set;} public bool CompletesInstance {get;set;} }
Setting the CompletesInstance property to true instructs WCF to remove the instance ID from the store once the operation call returns. The default value of the CompletesInstance property is false. In case the client does not provide an instance ID, you can also prevent an operation from creating a new instance by setting the CanCreateInstance property to false. Example 4-18 demonstrates the use of the CompletesInstance property on the MemoryClear( ) operation of the calculator.
Example 4-18. Using CompletesInstance to remove the state
[Serializable] [DurableService] class MyCalculator : ICalculator { double Memory {get;set;} public double Add(double number1,double number2) { return number1 + number2; } public void MemoryStore(double number) { Memory = number; } [DurableOperation(CompletesInstance = true)] public void MemoryClear( ) { Memory = 0; } //Rest of the implementation }
The problem with relying on CompletesInstance is that the context ID is immutable. This means that if the client tries to make additional calls on the proxy after calling an operation for which CompletesInstance is set to true, all of those calls will fail, since the store will no longer contain the instance ID. The client must be aware, therefore, that it cannot continue to use the same proxy: if the client wants to make further calls against the service, it must do so on a new proxy that does not have an instance ID yet, and by doing so, the client will start a new workflow. One way of enforcing this is to simply close the client program after completing the workflow (or create a new proxy reference). Using the proxy definition of Example 4-17, Example 4-19 shows how to manage the calculator proxy after clearing the memory while seamlessly continuing to use the proxy.
Example 4-19. Resetting the proxy after completing a workflow
class CalculatorProgram
{
   MyCalculatorClient m_Proxy;

   public CalculatorProgram( )
   {
      Guid calculatorId =
         ContextManager.LoadInstanceId(Settings.Default.CalculatorIdFileName);
      m_Proxy = new MyCalculatorClient(calculatorId);
   }
   public void Add( )
   {
      m_Proxy.Add(2,3);
   }
   public void MemoryClear( )
   {
      m_Proxy.MemoryClear( );
      ResetDurableSession(ref m_Proxy);
   }
   public void Close( )
   {
      ContextManager.SaveInstanceId(m_Proxy.InstanceId,
                                    Settings.Default.CalculatorIdFileName);
      m_Proxy.Close( );
   }
   void ResetDurableSession(ref MyCalculatorClient proxy)
   {
      ContextManager.SaveInstanceId(Guid.Empty,Settings.Default.CalculatorIdFileName);

      Binding binding = proxy.Endpoint.Binding;
      EndpointAddress address = proxy.Endpoint.Address;
      proxy.Close( );
      proxy = new MyCalculatorClient(binding,address);
   }
}
Example 4-19 uses my ContextManager helper class to load an instance ID and save it to a file. The constructor of the client program creates a new proxy using the ID found in the file. As shown in Example 4-15, if the file does not contain an instance ID, LoadInstanceId( ) returns Guid.Empty. My ContextClientBase<T> is designed to expect an empty GUID for the context ID: if an empty GUID is provided, ContextClientBase<T> constructs itself without an instance ID, thus ensuring a new workflow. After clearing the memory of the calculator, the client calls the ResetDurableSession( ) helper method. ResetDurableSession( ) first saves an empty GUID to the file, and then duplicates the existing proxy. It copies the old proxy's address and binding, closes the old proxy, and sets the proxy reference to a new proxy constructed using the same address and binding as the old one and with an implicit empty GUID for the instance ID.
WCF offers a simple helper class for durable services called DurableOperationContext:
public static class DurableOperationContext { public static void AbortInstance( ); public static void CompleteInstance( ); public static Guid InstanceId {get;} }
The CompleteInstance( ) method lets the service programmatically (instead of declaratively via the DurableOperation attribute) complete the instance and remove the state from the store once the call returns. AbortInstance( ), on the other hand, cancels any changes made to the store during the call, as if the operation was never called. The InstanceId property is similar to ContextManager.InstanceId.
While the DurableService attribute instructs WCF when to serialize and deserialize the instance, it does not say anything about where to do so, or, for that matter, provide any information about the state storage. WCF actually uses a bridge pattern in the form of a provider model, which lets you specify the state store separately from the attribute. The attribute is thus decoupled from the store, allowing you to rely on the automatic durable behavior for any compatible storage.
If a service is configured with the DurableService attribute, you must configure its host with a persistence provider factory. The factory derives from the abstract class PersistenceProviderFactory, and it creates a subclass of the abstract class PersistenceProvider:
public abstract class PersistenceProviderFactory : CommunicationObject { protected PersistenceProviderFactory( ); public abstract PersistenceProvider CreateProvider(Guid id); } public abstract class PersistenceProvider : CommunicationObject { protected PersistenceProvider(Guid id); public Guid Id {get;} public abstract object Create(object instance,TimeSpan timeout); public abstract void Delete(object instance,TimeSpan timeout); public abstract object Load(TimeSpan timeout); public abstract object Update(object instance,TimeSpan timeout); //Additional members }
The most common way of specifying the persistence provider factory is to include it in the host config file as a service behavior, and to reference that behavior in the service definition:
<behaviors>
   <serviceBehaviors>
      <behavior name = "DurableService">
         <persistenceProvider
            type = "...type...,...assembly ..."
            <!-- Provider-specific parameters -->
         />
      </behavior>
   </serviceBehaviors>
</behaviors>
Once the host is configured with the persistence provider factory, WCF uses the created PersistenceProvider for every call to serialize and deserialize the instance. If no persistence provider factory is specified, WCF aborts creating the service host.
A nice way to demonstrate how to write a simple custom persistence provider is my FilePersistenceProviderFactory, defined as:
public class FilePersistenceProviderFactory : PersistenceProviderFactory
{
   public FilePersistenceProviderFactory( );
   public FilePersistenceProviderFactory(string fileName);
   public FilePersistenceProviderFactory(NameValueCollection parameters);
}
public class FilePersistenceProvider : PersistenceProvider
{
   public FilePersistenceProvider(Guid id,string fileName);
}
FilePersistenceProvider wraps my FileInstanceStore<ID,T> class. The constructor of FilePersistenceProviderFactory requires you to specify the desired filename. If no filename is specified, FilePersistenceProviderFactory defaults the filename to Instances.bin.
The key for using a custom persistence factory in a config file is to define a constructor that takes a NameValueCollection of parameters. These parameters are simple text-formatted pairs of the keys and values specified in the provider factory behavior section in the config file. Virtually any free-formed keys and values will work. For example, here's how to specify the filename:
<behaviors>
   <serviceBehaviors>
      <behavior name = "Durable">
         <persistenceProvider
            type = "FilePersistenceProviderFactory,ServiceModelEx"
            fileName = "MyService.bin"
         />
      </behavior>
   </serviceBehaviors>
</behaviors>
The constructor can then use the parameters collection to access these parameters:
string fileName = parameters["fileName"];
WCF ships with a persistence provider, which stores the instance state in a dedicated SQL Server table. After a default installation, the installation scripts for the database are found under C:\Windows\Microsoft.NET\Framework\v3.5\SQL\EN. Note that with the WCF-provided SQL persistence provider you can only use SQL Server 2005 or SQL Server 2008 for state storage. The SQL provider comes in the form of SqlPersistenceProviderFactory and SqlPersistenceProvider, found in the System.WorkflowServices assembly under the System.ServiceModel.Persistence namespace.
All you need to do is specify the SQL provider factory and the connection string name:
<connectionStrings>
   <add name = "DurableServices"
      connectionString = "..."
      providerName = "System.Data.SqlClient"
   />
</connectionStrings>
<behaviors>
   <serviceBehaviors>
      <behavior name = "Durable">
         <persistenceProvider
            type = "System.ServiceModel.Persistence.SqlPersistenceProviderFactory,
                    System.WorkflowServices,Version=3.5.0.0,Culture=neutral,
                    PublicKeyToken=31bf3856ad364e35"
            connectionStringName = "DurableServices"
         />
      </behavior>
   </serviceBehaviors>
</behaviors>
You can also instruct WCF to serialize the instances as text (instead of the default binary serialization), perhaps for diagnostics or analysis purposes:
<persistenceProvider
   type = "System.ServiceModel.Persistence.SqlPersistenceProviderFactory,
           System.WorkflowServices,Version=3.5.0.0,Culture=neutral,
           PublicKeyToken=31bf3856ad364e35"
   connectionStringName = "DurableServices"
   serializeAsText = "true"
/>
Anthony Veale' <veale at fafnir.dyndns.org> wrote:
>
> Hello,
>
> I'm a newbie to OOP and Python at the same time and I'm having a bit
> of a design problem. I can see at least one way out of my problem
> that isn't really OO, ...

We'll see....

> but since I'm writing this program to learn
> some OO programming, that's a bit of a cheat. However, it might
> turn out to be the most appropriate solution. I just don't know.
>
> Let me be concrete here. I'm writing a program to edit the Opera 5
> for Linux bookmarks file. I designed some classes for Urls and
> Folders and one for the bookmark file as a whole. The BookmarkFile
> class contains instances of Folders and Urls.

I would say you need to be more concrete yet. The level of detail that you've supplied is not congruent with the level of detail that concerns your problem of whether to subclass or augment Folder. I'll have to make some assumptions.

I assume what you're doing is reading in an Opera bookmark file, building an internal data structure of Folders and Urls, manipulating the Folders and Urls in your program, and then outputting a new file. I assume that the bookmarks are organized into a hierarchy, much like a filesystem. That is, Folders can contain Urls and other Folders.

You say that BookmarkFile contains instances of Urls and Folders--it seems to me that BookmarkFile could or should be a subclass of Folder. It seems that it does everything a Folder does (i.e. contains Urls and other Folders, maybe has a name and description), but also has additional input/output facilities. In fact, if I were doing this, I don't think I would even have a separate BookmarkFile class; I'd just use a Folder for it.

> Then when I started to design the editing program, I found that
> there are some features that I wanted to add to Folders, that are
> really only applicable to Folders when editing the bookmark file.

Well, what's confusing here is that you said you're writing a program to edit bookmarks.
When, then, are you not editing the bookmark file? It would be helpful to know exactly what these operations are.

I'll assume that you have "editable" and "non-editable" Folders. What you have to ask yourself is this: Is editability a temporal property or a spatial property? In other words, can a single folder be both editable and non-editable depending on where and when it's used? Or are certain folders permanently non-editable while others are permanently editable?

In the former (temporal) case, you almost certainly want to just add features to the class. In the latter (spatial) case, you might want to subclass Folder. (Or, better, extract a common parent class.)

> So I'd like to subclass Folders and make sure that the BookmarkFile
> class creates Folders using the new subclass.
>
> The questions:
> Should I try to redesign the BookmarkFile class to use a
> parameter to control the actual class of Folder instance? (Can
> you actually pass a class as a parameter?)

This question makes me think it's a spatial property. Yes, classes are regular Python objects. You can do something like this:

def add_folder (self, folder_class, name):
    self.folder_list.append (folder_class(name))

And call it like this:

bookmarkfile.add_folder (EditableFolder, "Anthony's Bookmarks")

But, even if you couldn't, you could still use if ... elif ... else logic to choose the class.

> Or:
> Should I try to redesign the BookmarkFile class to be easier
> to subclass and use subclassing to control the class of Folder
> instances?

This question makes me think it's a temporal property. It seems that editability of folders is a property not of the folders themselves, but of the BookmarkFile. It also seems like you're accessing your folders through the BookmarkFile class.

> Or:
> Should I try to isolate these "editing" functions into something
> that gets mixed with the Folder class?

That sounds terrible. I'd run away screaming.
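A self-contained, runnable version of the "class as a parameter" idea above (a sketch only: the Folder, EditableFolder, and BookmarkFile names follow the thread, but the list-based internals and the rename method are assumptions made for illustration):

```python
class Folder:
    """Minimal stand-in for the Folder class discussed in the thread."""
    def __init__(self, name):
        self.name = name
        self.children = []

class EditableFolder(Folder):
    """Hypothetical subclass carrying the editing-only features."""
    def rename(self, new_name):
        self.name = new_name

class BookmarkFile:
    def __init__(self):
        self.folder_list = []

    def add_folder(self, folder_class, name):
        # The class object itself is the parameter; calling it makes an instance.
        folder = folder_class(name)
        self.folder_list.append(folder)
        return folder

bf = BookmarkFile()
f = bf.add_folder(EditableFolder, "Anthony's Bookmarks")
print(type(f).__name__)  # EditableFolder
```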
> Or:
> Is there some other approach that I simply haven't thought of?

One point of confusion I've often experienced in containment relationships (one object contains another) is whether an operation that involves both container and containee (or, if you will, parent and child) should be part of the parent class or the child class? Based on my experience, these functions should always be part of the parent. So, if these "editing" functions of which you speak are of that sort (like, "move_to_another_folder"), then you should think about whether these functions properly belong to BookmarkFile.

> The cheat would be to just add the features to the Folder class
> itself without subclassing. It would work, but doesn't really seem to
> be the appropriate OO action to take. ...

Given my best guess of exactly what your problem is, I'd say this is in fact the best thing to do, and it's not really counter to OO....

> If I were writing this program
> in order to get something functional, I'd probably just do it. But
> I'm trying to learn the "right way" here.

This could be your problem, too. Sometimes the Right Way (TM) isn't the right way.

CARL
react-bootstrap-date-time-picker is built on react-bootstrap and moment, so make sure that you have react-bootstrap and moment installed in your project. Let's see some code.
Install
npm install react-bootstrap-date-time-picker --save
Import
import DateTimePicker from 'react-bootstrap-date-time-picker';
Main Features
- Can be used as simple date picker
- Can be used as simple time picker
- Can be used as simple date-time picker
- Calendar placement
- Restricting date-time range
Properties
Here are some important properties of DateTimePicker
- defaultValue - string - default input value
- value - string - input value
- onChange - callback function on change of input value
- onClear - callback function on clear of input value
- onBlur - callback function on blur of input
- onFocus - callback function on focus of input
- autoFocus - bool - setting focus automatically
- disabled - bool - disabling input value
- showClearButton - bool - showing clear button
- calendarPlacement - string - calendar placement, ex: "top", "bottom"
- dateFormat - string- input format, ex: "MM/DD/YYYY HH:mm:ss"
- showTodayButton - bool - showing the Today button in the calendar
- todayButtonLabel - string - Today button text
- from - Date - lowest possible date for selection
- to - Date - highest possible date for selection
- calendarOnly - bool - If you want to display only the calendar
- timeOnly - bool - If you want to display only the time
Limit Range
The default range is 20 years (from 10 years before the current date to 10 years after it). You can give your own range by passing from and to values. Observe the code below:
<DateTimePicker
   from={new Date("2017-03-15T14:28:06+05:30")}
   to={new Date("2017-03-30T14:28:06+05:30")}
   onChange={this.handleChange}
   value={this.state.date} />
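For reference, the documented ±10-year default window can be reproduced in plain JavaScript. This sketch only mirrors the behaviour described above; it is not code from the library, and the function name is made up:

```javascript
// Compute the documented default range: 10 years back and
// 10 years forward from a given date (defaults to "now").
function defaultRange(now = new Date()) {
  const from = new Date(now);
  from.setFullYear(now.getFullYear() - 10);
  const to = new Date(now);
  to.setFullYear(now.getFullYear() + 10);
  return { from, to };
}

const { from, to } = defaultRange(new Date("2017-03-15T14:28:06Z"));
console.log(from.getFullYear(), to.getFullYear()); // 2007 2027
```

Passing explicit from/to props, as in the example above, simply replaces this computed window.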
Date Format
This library uses momentjs for formatting the date string, so just follow the same format specifications. Here are some useful formats
- DD/MM/YYYY - It will show just the date
- DD/MM/YYYY HH:mm:ss - It will show date and time in 24-hour format
- DD/MM/YYYY hh:mm:ss A - It will show date and time in 12-hour format
<DateTimePicker dateFormat="DD/MM/YYYY hh:mm:ss A" />
Display Calendar only
Use the calendarOnly property to display only the calendar, hiding the time
<DateTimePicker calendarOnly={true} />
Display Time Only
Use the timeOnly property to display only the time
<DateTimePicker timeOnly={true} />
Show or hide clear button
Use the showClearButton property to display or hide the clear button. Here you should also pass dateFormat, from and to
<DateTimePicker showClearButton={false} dateFormat="HH:mm:ss" from={fromValue} to={toValue} />
Add a no_file_caps boot option when file capabilities are compiled into
the kernel (CONFIG_SECURITY_FILE_CAPABILITIES=y).

Signed-off-by: Serge Hallyn <serue@us.ibm.com>
---
 include/linux/capability.h |    4 ++++
 kernel/capability.c        |   11 +++++++++++
 security/commoncap.c       |    9 +++++++++
 3 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/include/linux/capability.h b/include/linux/capability.h
index 9d1fe30..c96c455 100644
--- a/include/linux/capability.h
+++ b/include/linux/capability.h
@@ -359,6 +359,10 @@ typedef struct kernel_cap_struct {

 #ifdef __KERNEL__

+#ifdef CONFIG_SECURITY_FILE_CAPABILITIES
+extern int file_caps_enabled;
+#endif
+
 /*
  * Internal kernel functions only
  */
diff --git a/security/commoncap.c b/security/commoncap.c
index e4c4b3f..e33f632 100644
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -27,6 +27,10 @@
 #include <linux/prctl.h>
 #include <linux/securebits.h>

+#ifndef CONFIG_SECURITY_FILE_CAPABILITIES
+static const int file_caps_enabled;
+#endif
+
 int cap_netlink_send(struct sock *sk, struct sk_buff *skb)
 {
 	NETLINK_CB(skb).eff_cap = current->cap_effective;
@@ -279,6 +283
--
1.5.4.3
WAIS and other large documents services - BOF
Steve Hardcastle-Kille, chair
IETF San Diego, evening, March 18, 1992

Purpose: to discuss information services that seem to be becoming popular enough to become "standards."
Consider: WWW, WAIS, DS (X.500)
Relationships between: documents, objects, and directory entries
UDI: Need, Form, X.500
Need for whom (see Steve H-K slide)

John Curran (BBN)
WAIS: an implementation of Z39.50.
Architecture from users point of view:
- Servers: source for a collection of documents, indexed in some way.
- User: can send queries to servers.
All documents in a server indexed by all words in each document. Returns bibliographic and other info, including a handle for retrieving. Provides searching and retrieval all using Z39.50.
- Server can serve more than one source. Servers use native file system for documents. Don't need to duplicate files.
- All "things" are considered documents, regardless of format or content
- Can query a server to find out which sources it provides. TMC also has a source of sources. Source descriptions might be better off somewhere else, such as X.500.
Differences between Z39.50 and WAIS: Z39.50 is very general, like about form of data, indices, specific form of queries. WAIS essentially uses Z39.50 as a transport. Brewster would actually say that WAIS is the protocol - extensions to Z39.50 - want to merge them.
There are 2 indexing models - public and private (need CM to use it).
Has relevance feedback. Can attach a particularly relevant document to a future query, using all words in document as part of query.
Can add new routines to index on new types of objects. Currently view everything as text documents.

Wengyik Yeong (PSI): Representing new kinds of objects in X.500
Have presently added RFCs (documents), have 2 document series (RFCs and FYIs). Now want to move on to archives (OSI-DS 22 - describes archives in X.500). Model is that each archive is a file. Not always true. Sometimes each source is a separate file.
Experience:
* Need more sophisticated approach
* Need to custom objects - least common denominator not the best (eg language, size of binary, machine, etc. - not things that one will find)
* More documentation info would be helpful.
* Flat organization not very good.
* Need more sophisticated experiments - used only two.

Tim Berners-Lee (World Wide Web - CERN)
Hypertext-like model: simple uniform interface. All are subsets of hypertext. The problem is searching in the hypertext model. Use WAIS or something else for searching - comes back with a hypertext document.
Architecture: client server. Client machine which knows lots of protocols for going out over the network (FTP, Prospero, home-brew (HTTP), etc.)
Addressing scheme: this is a reference. Also need common formats.
Servers: Gateways to other worlds such as WAIS, VMS help files. To other kinds of servers.
HTTP: Runs on TCP, send query, get response. Want to extend to sending authentication, perhaps profile of client so can know what the client can display.
HTML: markup language for sending back hypertext, also very simple
User interfaces: for non-mouse users tag things with numbers that they can type. Have problem of multiple indices. Too fast run through. More support for interfaces than for setting up servers.
How does it fit into everything else?
X.500: need to be able to refer to anything - needs universal document identifiers (currently use address, but wrong - might move)
Could use DNS, but no further work on it
Resolvability
Lasting value
Cover current situation
Relevance
openness
uniqueness
readability
structure: 3-parts: eg. protocol, host, port
consensus
Could get to information (objects as above) from X.500.
WAIS vs.
WWW vs Gopher
WWW data model: document, text, or hypertext, open addressing (can always add more components)
Gopher: file or menu, open addressing, very simple server, large deployment, indexes
WAIS: relevance feedback restricted to a single server, source file contains organization, indexing, each source is a closed world.
Gopher, WWW, Prospero: pointers can go back and forth and all over the place.
Question or comment: concern about being able to jump or charge - people might like to peer over the edge before jumping, either because may be hard to get back and to understand cost of jumping.
Code is available to "collaborators" - anyone who uses it or writes code. timbl@info.cern.ch
SLAC, Fermi Lab, etc, really for high energy physicists.

Steve Hardcastle-Kille (Directory issues)
OSI-DS 25 Directories in the real world
Global naming: benefits
* labelling
* express relationships in names
* Listing services in the directory. In the broadest sense bringing things together. Might use for yellow pages, multiple providers for similar things. Might use it for localizing activity. Listings in one place might lead to listing in others.
* Browsing through X.500 to an external listing service, such as WWW or WAIS.
* Hierarchy - rigid, but can overlay multiple hierarchies.
* Pointers - alias (forward pointer across the hierarchy) and "see also"
* Use to model groups as objects with components.
Can parts of the hierarchy (DSA's) really be something else besides X.500. Might be WWW or WAIS, etc.

Paul Barker (?), UCL project: (just starting up, trying to push the forefront)
3 foci (did I miss something here - I have only 2)
* gray literature - unpublished, research documents. Not systematically available. Store this stuff in the directory. Question of how to organize, where to hang them - off individuals, docs for dept, docs for institution, etc.
* (funded by British Library) Want to take MARC records of library and model them in X.500.
One issue is that LOTS of attributes. (Issue - there is no one standard for MARC records.)
* Librarians are especially interested in looking for strings, queries.
Question of whether "The Directory" can contain orders of magnitude more objects and bigger objects than heretofore.

Cliff Neuman (Prospero)
How relates to others (non-X.500)
Goal: mechanism for organizing information, follows filesystem model rather than hypertext as in W3. Causes multiple queries, therefore have to be fast.
Directory service with references to other directories or files. Does not deal with retrieval (FTP, Andrew, NFS, currently adding WAIS, will add HTTP). Prospero views a query as a directory, and response is a file.
Prospero and X.500: can use X.500 to translate soft names to things to put into Prospero query.
Real problem is a single global naming scheme. Generally organized by owner, authority, not necessarily organized by topics. Real problem is what the topics should be and what should be in them. Believes in multiple name spaces. People can have own, but typically will start with either a copy of or a link to another one. Need shortcuts, so user doesn't have to construct all the detail of a namespace. Prospero allows you to glue together parts of other directories, called filters. There are canned ones, but users can build their own.
Closure: (namespace, object) this is how to pass names. Namespaces really have addresses that are global, and not used by the user. On the other hand each user can have his/her own name for any particular namespace. info-prospero@isi.edu

Larry Masinter, Xerox, System 33
* Document handle: uninterpreted, max 32 byte id that every doc has. Truly only a content identifier. (A substring of this is used to find the document, but hidden from users.)
* file location: protocol, host, path, offset, format, timeout
* description
* document: a thing that has a handle.
A lot of the work was in conversion of formats. Also time on access control - per document ACLs.
Made them part of the description. Multiple protocols was a problem because not all machines had the same protocols. Done by a gateway. Normalizing attribute-value space would cause there to be none - LOTS of different kinds of documents. Some are lit, and library docs, but others might be quotes, job applications, references, financial reports, etc. Some properties actually require computation.

Tim back again
W3 document = Prospero directory = menu
All based on an address
W3 has an all-inclusive model, but only 2 global namespaces (DNS and X.500, but DNS is no longer being extended, so the only one is X.500).

Peter Deutsch: equivalence. Question of two udi's or pointers to one document. Also question of exact duplicates with separate udi's. Larry Masinter believes it is ok to have a timestamp in it.
W3C is pleased to receive a submission in the field of 2D vector graphics. There is a growing interest in representing graphics on the Web in a way which is scalable and structured, rather than as raster images. The Web Schematics submission is an interesting proposal in this direction. It builds on the experience gained with the drawing tools associated with different types of document production systems, and with relevant ISO 2D graphics standards, and proposes a small set of graphical primitives that are hierarchically structured.
There is also growing interest in a vector graphics format expressed in XML. This would permit customisation with style sheets, allow simple schematics to be written by hand, and (with XML namespaces) permit the intermixing of text and graphics in a single document. Experiments have already been done with the W3C testbed browser and authoring tool, Amaya, which have demonstrated the feasibility of such an approach. Because the Web Schematics submission is an application of SGML, it would be easy to re-express it as an XML application. If enough interest is shown for a new markup language representing 2D vector graphics, W3C plans to put together a briefing package proposing a new Working Group, in the Graphics Activity, on this topic.
Should a new Working Group be created, the Web Schematics submission would be placed on its agenda as one input towards the creation of an XML vector graphics format. The group would be expected to ensure that this format treated hypertext links as first class objects - links within, into, and out of the graphic should be supported - and further to develop the integration with style sheets.
I made some simple mistake (conversion?) in my program. I will be grateful if anyone has an idea what's wrong with it ;) The problem is not with the algorithm itself so I am not explaining it - the problem is: why does the condition placed in the code between // HERE and // HERE seem never to be true? (The function does not work correctly.) Try this piece of code with 0.7 and 4 on entry.
#include <cstdlib>   // for system()
#include <iostream>

int s(int k) // factorial for integers
{
    if (k==0 || k==1) {return 1;}
    else {return (s(k-1))*k;}
}

double newtonsymbol(double r, int k)
{
    if (k<0) return 0;
    int s(int k); // that's factorial above
    static double t=r;
    double h;
    h=r+k-1.0;
    static double numerator=r;
    double denominator;
    std::cout<<h-t<<" "<<t<<std::endl; /* that's optional; it shows the mistake;
                                          the condition would be fulfilled if h-t==0 */
    //HERE
    if (t==h)
    //HERE
    {
        denominator=s(k);
        std::cout<<"ok"<<std::endl;
        return (numerator/denominator);
    }
    if (h<-10) {return 3;} // that is added to stop the function so you can see what's going on
    {
        std::cout<<"not ok"<<std::endl;
        numerator=r*newtonsymbol(r-1,k);
    }
}

int main ()
{
    double p=0.7;
    int l=4;
    std::cout<<newtonsymbol(p,l);
    system("pause");
    return 0;
}
Introduction
PDB
PGS
Introduction

PyPact provides access to PACT from Python. This is provided as a package with two modules. The package is pact and the modules are pdb and pgs. The pdb module provides access to Score, PML and PDBLib. The pgs module provides access to Score, PML, PDBLib and PGS. Only one module should be used at a time. No access is provided to Scheme, PPC or Panacea at this time.
TODO
If you are using a PACT distribution:
(The user needs to know how to build the shared objects.)
If you see the error message:
ImportError: No module named _pdb
then you need to build the shared objects.
END OF TODO
If you are using your own private version of PACT built from a PACT distribution, e.g. in ~/pact, then you can do the following in your Python code:
Example

import os
import sys

site_packages = os.path.expanduser( "~/pact/python" )
if os.path.isdir( site_packages ) :
    sys.path.append( site_packages )
else :
    print "No PACT python support on this platform."
    sys.exit( -1 )

import pact.pdb as pdb
Putting the modules in a package helps keep the namespace clean and avoids confusion with the other pdb module - the Python Debugger.
To import the library:

import pact.pdb as pdb

or

import pact.pgs as pgs

To import the PDB library AND use the Python debugger:

import pact.pdb
import pdb
Recall that the PACT tools fit in the following hierarchy:
ULTRA SX
PANACEA
PGS SCHEME PPC
PDB
PML
SCORE
Score and PML are not provided as stand-alone modules since PDB is the first level that Python users are usually interested in using. Much of Score functionality such as memory allocation, hash tables and association lists is accessible from Python. However the module also depends on PDBLib's functionality to store arbitrary data in Score's hash tables and association lists. In the C library this is implemented by associating a type with each haelem or pcons. In PyPact the type must be defined in a PDB file. PyPact uses a virtual internal file to store the type information. It is accessed as the module variable vif. It is also the default file argument for many methods.
Many structures in PACT are represented as classes in PyPact. Most structures have a function in the C API that returns a pointer to a new structure and several support functions that receive the structure pointer as one of the arguments (usually the first). This maps directly to a constructor and methods.
The hash table functions follow the typical pattern.
C Bindings:
- hasharr *SC_hash_alloc()
- int SC_hasharr_install(hasharr *self, char *type, void *key, void *obj)
- void SC_free_hasharr(hasharr *self)

With typical usage:

int iok, ival1;
char one[] = "one";
hasharr *ht;

ht = SC_hash_alloc();
ival1 = 1;
iok = SC_hasharr_install(ht, "int", "one", &ival1);
if (iok == 0)
   {errproc("Error inserting one\n");}
SC_free_hasharr(ht);

In PyPact SC_hash_alloc is replaced by the hasharr class constructor and SC_hasharr_install becomes a method of the instance. SC_free_hasharr is called by Python's garbage collector when there are no more references to the instance.

import pact.pdb as pdb
ht = pdb.hasharr()
ht.install("one", 1)

PACT errors are trapped by raising Python exceptions.
Where natural, types also use standard protocols. For example, both hash tables and association lists use the mapping protocol:

ht = pdb.hasharr()
ht["one"] = 1
PDB
This section focuses on how to use the PDB module.
The pdbdata ObjectThe pdbdata object is the only object in PyPact that does not map directly to a structure in PACT. It is used to fully describe a piece of memory and acts as a middle-man between the C and Python type systems. It consists of a pointer to memory and sufficient information to describe the memory. This includes the type and dimensions. This also includes a pointer to a PDBfile that is used to look up the type. With this information Python is able to access the memory as the user expects including accessing members of a derived type as attributes of an object. Python provides int and float objects. The Python implementation uses a C long for int and and C double for float. The pdbdata constructor allows the default types to be overridden. pdbdata(1, 'int') Since Python does not support pointers directly, a sequence of Python objects will match several C declarations. [ 1, 2, 3, 4, 5] can be both int[5] and int *. In one case, it is clear that there are 5 integers (int[5]). In the other, we only know that it points to an int. By default, a pdbdata object will be explicit about lengths and use int[5]. However, if it's necessary to match a specific type, then int * can be specified. The pdbdata type requires that pointers be allocated by Pact. This allows it to find out how much memory is allocated which allows a pdbdata to be converted back into Python objects. Nested sequences are represented as multiply dimensioned types. ([[1,2,3],[4,5,6]] is long[2][3]. If the array is not regular, then it is treated as an array of pointers. [[1,2,3],[4,5]] is long *[2]. It is also possible to convert Python class instances into pdbdata object. See the register_class method of PDBfile object below. None is treated as NULL and results in a pointer type. No type at all defaults to int [None] is int *[1]
Constructor

pdbdata(data, type, file)

data -- data of type type to be encapsulated.

type -- string name of the data type of the pdbdata.

file -- define the types to this PDBfile. (This argument is optional and the virtual internal file is used by default.)
Module Methods
These module methods take a pdbdata object and return the individual fields.
In addition to using the constructor, a pdbdata can be returned by any method that returns user data; for example, pdbfile.read.
- getdefstr(obj) - returns a defstr object.
- gettype(obj) - returns a string.
- getfile(obj) - returns a PDBfile object.
- getdata(obj) - returns a CObject object.
The form in which data is returned is controlled by the setform method.

setform(array, struct, scalar)

With no arguments, the current values of setform are returned as a tuple. array, struct and scalar should be one of the following values, and reflect how reads and writes handle arrays, structs, and scalar data, respectively.
Not all constants are valid for all forms.
- AS_NONE - Keep existing value.
- AS_PDBDATA - return as a pdbdata
- AS_OBJECT - return as object
- AS_TUPLE - return as a tuple
- AS_LIST - return as a list
- AS_DICT - return as a dictionary
- AS_ARRAY - return as a numpy array
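To make these form constants concrete, here is a hypothetical pure-Python sketch of how an array read might be converted under each form. The numeric values are an assumption (chosen so that AS_TUPLE is 3, matching the setform example below), and the conversion logic is illustrative, not PyPact's implementation:

```python
# Assumed numbering, starting from AS_NONE = 0.
(AS_NONE, AS_PDBDATA, AS_OBJECT, AS_TUPLE,
 AS_LIST, AS_DICT, AS_ARRAY) = range(7)

def apply_form(raw, form):
    """Convert raw array data (a Python list) into the requested
    form, roughly what setform's 'array' setting selects."""
    if form == AS_LIST:
        return list(raw)
    if form == AS_TUPLE:
        return tuple(raw)
    if form == AS_DICT:
        return {i: v for i, v in enumerate(raw)}
    raise ValueError("form not modelled in this sketch: %r" % form)

print(apply_form([4.0, 5.0], AS_TUPLE))  # (4.0, 5.0)
```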
unpack(data, array, struct, scalar)

Unpack a pdbdata object into more Python-friendly types.
If the pdbdata represents a structure, then the members can be accessed as attributes of the object.
If the pdbdata represents an array, then the sequence protocol is supported.
Examples

>>> import pact.pdb as pdb
>>> d = pdb.pdbdata(4.0, 'double')
>>> d
data = 4.0000000000000000e+00
>>> r = pdb.unpack(d)
>>> type(r)
<type 'float'>
>>> r
4.0
>>> d = pdb.pdbdata((4.0, 5.0), 'double[2]')
>>> d
data(0) = 4.0000000000000000e+00
data(1) = 5.0000000000000000e+00
>>> r = pdb.unpack(d)
>>> type(r)
<type 'list'>
>>> r
[4.0, 5.0]
>>> pdb.setform(array=pdb.AS_TUPLE)
(3, 3, 2)
>>> r = pdb.unpack(d)
>>> type(r)
<type 'tuple'>
>>> r
(4.0, 5.0)
>>> d = pdb.pdbdata([None, [4., 5.]], 'double **')
>>> d
data(0) = (nil)
data(1)(0) = 4.0000000000000000e+00
data(1)(1) = 5.0000000000000000e+00
>>> r = pdb.unpack(d)
>>> r
[None, [4.0, 5.0]]
>>> print r[0]
None
>>> print r[1]
[4.0, 5.0]
File Object

There are a few module variables that are used to access files. files is a dictionary of opened PDBfile instances indexed by file name. It is used to store references to files to avoid garbage collection until the close method is explicitly called. vif is the virtual internal file. It is used as the default file for many other methods.
PDBfile(name[, mode])

name is the name of the file to open/create. mode is the file mode: r read, w write, a append. The default is r. open is an alias for PDBfile.
Attributes
- object
- name
- type
- symtab
- chart
- host_chart
- attrtab
- previous_file
- date
- mode
- default_offset
- virtual_internal
- system_version
- major_order
Methods

The ind argument is used to index arrays. If ind is a scalar, it is the number of items. If it is a sequence, then each member applies to a dimension: a scalar member is the number of items; a member of length 1 is interpreted as (upper,); of length 2, as (lower, upper); of length 3, as (lower, upper, stride).
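A hypothetical pure-Python helper makes these rules concrete (PyPact performs this normalization internally; the zero lower bound and unit stride defaults here are assumptions for illustration):

```python
def normalize_ind(member, default_lower=0, default_stride=1):
    """Expand one dimension spec from an 'ind' argument into a
    full (lower, upper, stride) triple, per the rules above."""
    if not isinstance(member, (list, tuple)):
        # a bare scalar is a count of items
        return (default_lower, default_lower + member - 1, default_stride)
    if len(member) == 1:
        return (default_lower, member[0], default_stride)    # (upper,)
    if len(member) == 2:
        return (member[0], member[1], default_stride)        # (lower, upper)
    if len(member) == 3:
        return tuple(member)                                 # (lower, upper, stride)
    raise ValueError("dimension spec must have 1 to 3 entries")

# ind=[2, 2] from the write example below describes a 2x2 array:
print([normalize_ind(m) for m in [2, 2]])  # [(0, 1, 1), (0, 1, 1)]
```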
The following examples use f90 array notation. When writing data, PyPact will try to determine the type of the data if outtype is not specified: Python reals map to doubles, and strings map to char *.
- flush()
- write(name, var[, outtype, ind])
- name is the name of the variable to write.
- var is the variable object to write.
- outtype optional, is the type of variable.
- ind optional, indexing information.
- write_raw(name, var, type[, ind])
- name is the name of the variable to write.
- var is the variable object to write. It must support the buffer interface.
- type is the type of variable.
- ind optional, indexing information.
- read(name[, intype, ind])
- name is the name of the variable to read.
- intype optional, type of data desired.
- ind optional, indexing information.
- defent(name, type)
- name name of variable.
- type type of data in file.
- defstr(name, members)
- name name of new type
- members optional, sequence of types. If members are defined, a new type is created. Otherwise name is looked up and returned.
- cd(dirname)
- dirname Name of directory.
- mkdir(dirname)
- dirname Name of directory.
- ln(var, link)
- var
- link
- ls([path, type])
- path optional
- type optional
- pwd()
- register_class(cls, type[, ctor])
- cls The Python Class object
- type The name of the pdblib defstr.
- ctor This function is used during read. It accepts a dictionary of structure members and returns an object of class cls.
Examples

Open a file:

>>> import pact.pdb as pdb
>>> fp = pdb.open("xxxx", "w")
>>> type(fp)
<type 'PDBfile'>
>>> pdb.files
{'xxxx': <PDBfile object at 0xb3f7bb40>}
>>> fp.close()
>>> pdb.files
{}
>>> ctrl-D

Write four doubles to a file as a 2-d double array and as a 1-d float array:

%python
>>> import pact.pdb as pdb
>>> fp = pdb.open("xxxx", "w")
>>> ref = [2.0, 3.0, 4.0, 5.0]
>>> fp.write("d2", ref, ind=[2,2])
>>> fp.write("d3", ref, "float")
>>> fp.close()
>>> ctrl-D
%pdbview xxxx
PDBView 2.0 - 11.22.04
-> ls
d2
d3
-> d2
/d2(0,0) = 2.0000000000000000e+00
/d2(0,1) = 3.0000000000000000e+00
/d2(1,0) = 4.0000000000000000e+00
/d2(1,1) = 5.0000000000000000e+00
-> d3
/d3(0) = 2.0000000e+00
/d3(1) = 3.0000000e+00
/d3(2) = 4.0000000e+00
/d3(3) = 5.0000000e+00
-> desc d2
Name: d2
Type: double
Dimensions: (0:1, 0:1)
Length: 4
Address: 108
-> desc d3
Name: d3
Type: float
Dimensions: (0:3)
Length: 4
Address: 140
-> quit

Write a class instance:

%cat user.py
import pact.pdb as pdb

class User:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
    def __repr__(self):
        return 'User(%(a)d, %(b)d, %(c)f)' % self.__dict__

def makeUser(dict):
    return User(dict['a'], dict['b'], dict['c'])

fp = pdb.open("user.pdb", "w")
fp.defstr("user", ("int a", "int b", "float c"))
fp.register_class(User, "user", makeUser)
v1 = User(1,2,3)
fp.write("var1", v1)
v2 = fp.read("var1")
fp.close()
print "v1 =", v1
print "v2 =", v2
%python user.py
v1 = User(1, 2, 3.000000)
v2 = User(1, 2, 3.000000)
%pdbview user.pdb
-> var1
/var1.a = 1
/var1.b = 2
/var1.c = 3.0000000e+00
Defstr Object

A defstr type is used to define and create structures.

defstr(name, members[, file])
The returned instance is also a constructor for instances of the defstr.
- name name for defstr.
- members sequence of type names.
- file PDBfile instance. optional defaults to vif
Attributes
- dp
- type
- size_bits
- size
- alignment
- n_indirects
- is_indirect
- convert
- onescmp
- unsgned
- order_flag
Methods

A defstr supports the mapping protocol plus some methods usually associated with mappings.
- has_key Not Implemented
- items Not Implemented
- keys
- values Not Implemented
- get Not Implemented
Examples

>>> import pact.pdb as pdb
>>> d = pdb.defstr('struct', ('int i', 'float j'))
>>> type(d)
>>> print d
Type: struct
Alignment: 4
Members: {int i; float j;}
Size in bytes: 8
>>> a = d((3,4))
>>> a
data.i = 3
data.j = 4.0000000e+00
>>> a.i
3
>>> a.j
4.0
>>> a.i = 5
>>> a.j = 6
>>> a
data.i = 5
data.j = 6.0000000e+00
-----------------
>>> import pact.pdb as pdb
>>> d = pdb.defstr('struct', ('int i', 'float j'))
>>> a = d((3,4))
>>> fp = pdb.open('yyyy', 'w')
>>> fp.defstr('struct', d)
Type: struct
Alignment: 4
Members: {int i; float j;}
Size in bytes: 8
>>> fp.write('aaa', a)
>>> fp.close()
>>> ctrl-D
% pdbview yyyy
PDBView 2.0 - 11.22.04
-> ls
aaa
-> aaa
/aaa(1).i = 3
/aaa(1).j = 4.0000000e+00
-> desc aaa
Name: aaa
Type: struct
Dimensions: (1:1)
Length: 1
Address: 108
-> struct struct
Type: struct
Alignment: 4
Members: {int i; float j;}
Size in bytes: 8
Memdes Object
Attributes
- desc
- member
- cast_memb
- cast_offs
- is_indirect
- type
- base_type
- name
- number
Methods

None
Examples
SCORE

hasharr and assoc both seem very similar at the Python level. Both implement the mapping protocol. The chief difference is in the data structures they build in memory. This allows the user to build the type of data structure required by PACT. For example, many graphics routines take or return an association list to describe plotting options. The application can treat the association list in a Pythonic manner by treating it as a dictionary.
Memory Allocation

PyPact provides a way to allocate and check memory using PACT's memory allocation routines. The void * pointer is contained in a CObject. There is no PyPact-specific type/class for memory. All methods are module methods, not class methods. The CObject is a convenient way to pass C pointers around and is generally only useful to methods that know what to do with it.
Attributes

None
Methods
- zero_space(flag)
- alloc(nitems, bytepitem, name)
- realloc(p, nitems, bytepitem)
- sfree(p)
- mem_print(p)
- mem_trace()
- reg_mem(p, length, name)
- dereg_mem(p)
- mem_lookup(p)
- mem_monitor(old, lev, id)
- mem_chk(type)
- is_score_ptr(p)
- arrlen(p)
- mark(p, n)
- ref_count(p)
- set_count(p, n)
- permanent(p)
- arrtype(p, type)
Examples

>>> p = pdb.alloc(2, 8, "array1")
>>> type(p)
<type 'PyCObject'>
>>> pdb.arrlen(p)
16
Hash Table
Attributes

None
Methods
- install(key, obj, type)
- def_lookup(key)
- clear()
- has_key(key)
- items - not implemented
- keys()
- update(dict)
- values - not implemented
- get - not implemented
Examples

>>> ht = pdb.hasharr()
>>> ht["one"] = 1
>>> ht.keys()
('one',)
>>> ht["one"]
1
>>> pdb.vif.chart.keys()
('defstr', 'syment', 'symindir', 'symblock', 'memdes', 'dimdes', 'hasharr', 'haelem', 'PM_mapping', 'PM_mesh_topology', 'PM_set', 'PG_image', 'pcons', 'SC_array', 'Directory', 'function', 'REAL', 'double', 'float', 'u_long_long', 'long_long', 'u_long', 'long', 'u_integer', 'integer', 'u_int', 'int', 'u_short', 'short', 'u_char', 'char', '*')
>>> pdb.vif.symtab.keys()
('/', '/&ptrs/')
Association List
Attributes

None
Methods
- clear - not implemented
- has_key(key)
- items()
- keys()
- update(dict)
- values - not implemented
- get - not implemented
Examples
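As an illustration of the mapping protocol described above, here is a hypothetical pure-Python analogue of an association list: an ordered list of (key, value) pairs viewed as a dictionary. This class is not part of PyPact, and the plotting-option names used are made up; it only shows the behaviour an application relies on when it treats an assoc in a Pythonic manner:

```python
class AssocSketch:
    """Dictionary-style view over an ordered list of (key, value)
    pairs, similar in spirit to PACT association lists."""

    def __init__(self, pairs=None):
        self._pairs = list(pairs or [])

    def __getitem__(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)

    def __setitem__(self, key, value):
        for i, (k, _) in enumerate(self._pairs):
            if k == key:
                self._pairs[i] = (key, value)   # replace in place
                return
        self._pairs.append((key, value))        # or append new pair

    def has_key(self, key):
        return any(k == key for k, _ in self._pairs)

    def keys(self):
        return tuple(k for k, _ in self._pairs)

    def items(self):
        return list(self._pairs)

    def update(self, d):
        for k, v in d.items():
            self[k] = v

a = AssocSketch()
a["LINE-COLOR"] = "red"
a.update({"LINE-WIDTH": 2})
print(a.keys())  # ('LINE-COLOR', 'LINE-WIDTH')
```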
PML

TODO
Mapping Object
Attributes
Methods
Examples
Set Object
Attributes
Methods
Examples
Field Object
Attributes
Methods
Examples
Mesh Topology Object
Attributes
Methods
Examples
PGS
API

PyPact has some user-callable routines that allow developers to define their own types to PyPact. This allows PyPact to return an instance of the correct class when reading from a file.
typedef int (*PP_pack_func) (void *vr, PyObject *obj, long nitems, int tc)

typedef PyObject *(*PP_unpack_func) (void *vr, long nitems)

typedef PP_descr *(*PP_get_descr)(PP_file *fileinfo, PyObject *obj)

PP_descr *PP_make_descr(
    PP_types typecode,
    char *type,
    long bpi
)

PP_type_map *PP_make_type_entry(
    PP_types typecode,
    int sequence,
    PP_descr *descr,
    PyTypeObject *ob_type,
    PP_pack_func pack,
    PP_unpack_func unpack,
    PP_get_descr get_descr
)

void PP_register_type(PP_file *fileinfo, PP_type_entry *entry)

void PP_register_object(PP_file *fileinfo, PP_type_entry *entry)
Developer Notes

This section focuses on areas that developers using PyPact and developers of PyPact might be interested in.
Generating Source

Much of the source code for PyPact is generated using the modulator tool from Basis. This tool generates the boilerplate that the Python API requires to connect functions and data structures together into an extension module, producing new-style classes. The input consists of an IDL file (interface definition file). It also reads any previously generated code to preserve changes made to certain sections of the generated source. These sections are contained between DO-NOT-DELETE splicer.begin and DO-NOT-DELETE splicer.end comments.
Each generated file starts with the comment This is generated code.
Installation

PyPact will be installed by dsys if shared libraries are defined. PACT's autoconf/automake system will also build the modules. Finally, a setup.py script is provided to build the module.
The default method of building the extension will load the PACT libraries into the extension. This allows things to work as expected when importing PyPact from the Python executable. If, instead, Python is embedded into an application which already has the PACT libraries loaded, then importing PyPact will result in two copies of PACT being in memory. This is not a Good Thing. In this case, the application developers will have to rebuild PyPact without loading the PACT libraries. One way to accomplish this is by editing the setup.py script to remove the libraries argument from the Extension constructor.
User Defined Classes

TODO
For questions and comments, please contact the PACT Development Team.
Last Updated: 03/03/2007
On 12.10.16 09:31, Nathaniel Smith wrote:
But amortized O(1) deletes from the front of bytearray are totally different, and more like amortized O(1) appends to list: there are important use cases[1] that simply cannot be implemented without some feature like this, and putting the implementation inside bytearray is straightforward, deterministic, and more efficient than hacking together something on top. Python should just guarantee it, IMO.
-n

[1] My use case is parsing HTTP out of a receive buffer. If deleting the first k bytes of an N-byte buffer is O(N), then not only does parsing become O(N^2) in the worst case, but it's the sort of O(N^2) that random untrusted network clients can trigger at will to DoS your server.
Deleting from the buffer can be avoided if you pass the starting index together with the buffer. For example:
def read_line(buf: bytes, start: int) -> (bytes, int):
    try:
        end = buf.index(b'\r\n', start)
    except ValueError:
        return b'', start
    return buf[start:end], end+2
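A short self-contained sketch (reusing read_line from the post) shows how a parse loop advances a start index instead of deleting consumed bytes from the front of the buffer, so each step costs time proportional to the line consumed rather than to the whole buffer. The parse_all driver here is a hypothetical illustration, not code from the thread:

```python
def read_line(buf: bytes, start: int) -> (bytes, int):
    # helper from the post: next CRLF-terminated line plus new start
    try:
        end = buf.index(b'\r\n', start)
    except ValueError:
        return b'', start
    return buf[start:end], end + 2

def parse_all(buf: bytes):
    """Consume every complete line without mutating the buffer."""
    lines, start = [], 0
    while True:
        line, new_start = read_line(buf, start)
        if new_start == start:        # no complete line left
            break
        lines.append(line)
        start = new_start
    return lines, start

lines, start = parse_all(b'GET / HTTP/1.1\r\nHost: x\r\npartial')
print(lines)  # [b'GET / HTTP/1.1', b'Host: x']
```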
How to: Modernized AngularJS 1.5+ with ES6, Webpack, Mocha, SASS, and Components
There are many reasons why you might want to keep working with AngularJS 1.x — I will simply assume you have your reasons.
Angular ≠ AngularJS. This site and all of its contents are referring to AngularJS (version 1.x), if you are looking for the latest Angular, please visit angular.io — angularjs.org
For new projects, I would recommend using React because this is where the momentum is in front-end development.
Or at least, this person thinks it is, and I agree with him
I made a GitHub repo you can fork/clone to start your own project
jsdoc_output            // where docs are generated
node_modules            // where your vendor stuff goes
.gitignore
mocha-webpack.opts      // specify a different webpack config for testing
package.json
README.md
webpack.config.base.js
webpack.config.js       // extends base config
webpack.config.test.js  // extends base config
public
|   index-bundle.js     // webpack generated bundle
|   index.html
|   index.js            // webpack
|
\---superAwesomeComponent
        componentStylez.sass
        componentTemplate.html
        fancyJsModule.js
        theComponent.js
        theComponent.spec.js
        theComponentController.js
Generated using tree /a /f on Windows
Let’s check index.html:
<body ng-
  </super-awesome-component>
  <super-awesome-component
  </super-awesome-component>
  <p>
    A variable on the controller above the components: {{IndexCtrl.fancyValue}}
  </p>
</body>
<tail>
  <script src="index-bundle.js"></script>
</tail>
No action has been done yet
You can see here that our two buttons are the two super-awesome-component elements. These are Angular 1.5 components.
Angular 1.5 components
Angular 1.5 components are just directives with better default values: they are always elements, there is a default “controller as $ctrl”, and they have isolate scopes. Most of what I've learned about components, I learned here.
The components have two bindings, some-input and some-output.
These components are useful because they allow us to encapsulate a combination of view and controller functionality. Let’s look at the component file:
import template from './componentTemplate.html'
import componentStylez from './componentStylez.sass'
import {ComponentController} from './theComponentController.js'

const bindings = {
  someInput: '<',
  someOutput: '&'
}

export const theComponent = {
  controller: ComponentController,
  template,
  bindings
}
Notice how each element of the component can be re-used. The controller can be specific to this component, or it could be a controller that is used elsewhere.
Furthermore, this file contains references to everything you need to know about the component. The component is totally self-contained, you don’t need to worry about how it is being used in the larger application in order to make it.
The controller makes use of normal ES6 features — I won’t go into how it works, but take note of how the class structure is used, and the lack of $scope. The result is a framework-agnostic controller, minus the component lifecycle event ($onInit).
import fancyFunction from './fancyJsModule.js'

/**
 * Provides handlers for theComponent
 */
class ComponentController {
  /**
   * Announces that input bindings aren't defined
   * @return {undefined} undefined
   */
  constructor () {
    console.log('input bindings arent defined!', this.someInput)
  }

  /**
   * Calls someOutput with the value of someInput put in fancyFunction
   * @return {undefined} undefined
   */
  doSuperThings () {
    console.log('doing super things')
    this.someOutput({value: fancyFunction(this.someInput, 3)})
  }

  /**
   * Announces that input bindings are defined
   * @return {undefined} undefined
   */
  $onInit () {
    console.log('input bindings are defined!', this.someInput)
  }
}

export { ComponentController }
StandardJS formatting
The obvious difference is the lack of semicolons. I personally believe this provides cleaner looking code and the StandardJS linter/formatter neatly solves the issues around semicolon usage, which will prevent you from encountering weird issues there.
Webpack (which can be confusing)
Notice how all we have to do is import index-bundle.js in index.html. This is because we are using Webpack, which bundles all of our assets into a single file. This includes our templates, JavaScript, CSS, and anything you can imagine needing in there.
Webpack is a finicky beast, and a beast it is. It’s complicated enough that people put it on their resumes. It moves a lot of complexity from various parts of your application into your webpack.config.js file.
Evidence of this complexity can be found in the fact that we have cause for 3 webpack.config*.js files. One provides a base, the second accommodates our testing setup, and the third is for splitting code into vendor chunks (which we don’t want to do in our test setup due to strange interactions with the CommonsChunkPlugin).
var path = require('path')
var webpack = require('webpack')

module.exports = {
  entry: {
    'index': path.join(__dirname, '/public/index.js')
  },
  output: {
    filename: '[name]-bundle.js',
    path: path.join(__dirname, '/public/'),
    devtoolLineToLine: true,
    pathinfo: true,
    sourceMapFilename: '[name].js.map',
    publicPath: path.join(__dirname, '/src/main/webapp/')
  },
  module: {
    loaders: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
      { test: /\.css$/, loader: 'style-loader!css-loader' },
      { test: /\.sass$/, loaders: ['style-loader', 'css-loader', 'sass-loader'] },
      { test: /\.html$/, loader: 'raw-loader' },
      // inline base64 URLs for <=8k images, direct URLs for the rest
      { test: /\.(png|jpg)$/, loader: 'url-loader?limit=8192' },
      // helps to load bootstrap's css.
      { test: /\.woff(\?v=\d+\.\d+\.\d+)?$/, loader: 'url?limit=10000&minetype=application/font-woff' },
      { test: /\.woff2$/, loader: 'url?limit=10000&minetype=application/font-woff' },
      { test: /\.ttf(\?v=\d+\.\d+\.\d+)?$/, loader: 'url?limit=10000&minetype=application/octet-stream' },
      { test: /\.eot(\?v=\d+\.\d+\.\d+)?$/, loader: 'file' },
      { test: /\.svg(\?v=\d+\.\d+\.\d+)?$/, loader: 'url?limit=10000&minetype=image/svg+xml' }
    ]
  },
  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ],
  devServer: {
    publicPath: '/',
    contentBase: path.join(__dirname, '/public'),
    compress: true
  },
  devtool: 'eval'
}
I’m not going to explain everything here, because that’s what the webpack docs are for (this link is for Webpack 1 even though we’re using Webpack 2. The Webpack 2 docs are thorough only in their incompleteness, but do see the migrations documentation).
To give an overview, you must specify:
- Where your application starts
- Where the bundle goes
- How you’re going to magically import things
- What plugins you’re using
- Your webpack-dev-server setup
- How your source maps are set up.
What? Plugins? Source maps? Why do I need another server?
Plugins
Here, we’re just using the HotModuleReplacement (HMR) plugin. It allows our browser to automatically reload when a file is changed. This magically removes one step of the normal iteration of write, save, test.
There are tons of other plugins out there — (here's one that stands out but I haven’t gotten around to trying!)
Here’s a list of popular Webpack plugins (why does Webpack do so many things!)
Source maps
Source maps are products of ES6 and bundling. I haven’t figured out how to get them perfect yet — there is an unfortunate speed/quality tradeoff that occurs with sourcemaps, as the perfect ones can be rather slow to create. Our ES6 conversion is achieved through a babel loader.
If we look back at theComponent.js, it contains most of our Webpack usage:
import template from './componentTemplate.html'
import componentStylez from './componentStylez.sass'
import {ComponentController} from './theComponentController.js'

const bindings = {
  someInput: '<',
  someOutput: '&'
}

export const theComponent = {
  controller: ComponentController,
  template,
  bindings
}
Note how we are import’ing html, SASS, and ES6 here. This is accomplished through our loaders. Which loader is used is based on the file name.
Webpack-dev-server
Webpack-dev-server is an amazing thing, regardless of whether or not you have a real back-end. It supports HMR and is a static file server, which makes your development fast. In addition, using webpack-dev-server will force you to de-couple your front-end and back-end.
Being able to do front-end development without needing a “real” server is amazing for a lot of reasons. It will force you to create practical mock data, know exactly what functionality belongs to the back-end vs. the front-end, give you HMR, and make your front-end hostable on just about any server, with a clear contract between the front-end and the back-end.
In this setup, webpack-dev-server, along with everything else needed for front-end development, is run by a single npm run dev command, as specified in package.json:
{ "name": "modern-angularjs-starter", "version": "0.0.1", "description": "Base project", "main": "index.js", "scripts": { "dev": "concurrently --kill-others \"webpack-dev-server --host 0.0.0.0\" \"npm run docs\"", "docs_gen": "jsdoc -r -d jsdoc_output/ public/", "docs_watch": "watch \"npm run docs_gen\" public", "docs_serve": "echo Docs are being served on port 8082! && live-server -q --port=8082 --no-browser jsdoc_output/", "docs": "concurrently --kill-others \"npm run docs_serve\" \"npm run docs_watch\"", "postinstall": "bower install", "webpack": "webpack", "test": "mocha-webpack public/**/*.spec.js" }, "devDependencies": { /* hidden for space */ } "dependencies": { /* hidden for space */ } }
Notice the use of concurrently.
This allows us to run 2 blocking commands in parallel.
Notice there are also testing and documentation commands. The documentation commands generate JSDoc pages and then host them on a small server, which auto-refreshes (similar to HMR) the browser when there is a change. This way, you can watch your docs update as you write them if you save often.
It is not demonstrated in this project, however, specifying types in JSDoc is a good way to specify data-contracts between front-end/back-end. Alternatively, you could just use typescript (there are loaders for that).
Unit Testing (because it’s worth the effort)
Testing with ES6 + AngularJS + Webpack is tricky to get right. Each of these causes complications. For unit testing, I ended up settling on very small units, testing my AngularJS controllers as functions in Node. Karma is quite popular, but in my opinion the tests aren’t really unit tests. Nonetheless, it would be useful to have both.
Thus, we have mocha-webpack. This allows us to use imports in our tests, without specifying an entrypoint for each one.
The hardest part about testing here is mocking out ES6 imports. There are a few different ways to do that, but the only one that doesn’t require modifying the file being tested is inject-loader.
This is particularly useful for writing tests where mocking things inside your module-under-test is sometimes necessary before execution — inject-loader.
/* eslint-disable */
import chai from 'chai'
import sinon from 'sinon'

const theControllerInjector = require('inject-loader!./theComponentController.js')

let {expect, should, assert} = chai

describe('superAwesomeComponent', function() {
  let stub
  let theComponentController
  let controller

  beforeEach(function setupComponent () {
    stub = sinon.stub().returns(1)
    theComponentController = theControllerInjector({
      // The module is really simple, so it's not really necessary to mock it
      // In a real app, it could be much more complex (ie, something that makes API calls)
      './fancyJsModule.js': stub
    }).ComponentController
    controller = new theComponentController()
    controller.someOutput = sinon.stub()
    controller.someInput = 1
  })

  describe('doSuperThings', function() {
    it('calls fancyFunction', function() {
      controller.doSuperThings()
      assert(stub.calledOnce)
    })
  })
})
To use inject-loader, we use the old require + webpack loader syntax because there isn’t a wildcard filename check we can do for the import (we don’t want all js files to get passed into the inject loader all the time). The return of this require gives us a function that we can call with an object stubbing out various imports:
theComponentController = theControllerInjector({
  './fancyJsModule.js': stub
}).ComponentController
Here, we stub out fancyJsModule from our controller’s imports. This allows us to return a mock value, subverting all the logic that module might do, so we can isolate any problems that occur in the test.
We use Chai as our assertion library, Sinon.js for mocking/spying, and Mocha for running the tests.
This test doesn’t attempt to be a good example of what to test, it’s simply to show how testing can be set up with ES6+Webpack+Mocha+Angular.
The goal of this is to force the developer into focusing on writing AngularJS handlers as actual functions. There is a strong tendency for these handlers to be executed purely for side-effects, and creating these tests will highlight that fact.
Soo…
This architecture provides a way of modernizing AngularJS front-end without making a framework jump. One of the biggest benefits of this approach is that it abstracts away a lot of the AngularJS-specific code.
One of the trickiest elements of this approach is deciding what to use AngularJS modules for vs. what to use ES6 modules for. I try to use ES6 as much as possible. This should make it easier to port an application using this architecture to another framework.
AngularJS still has a fair amount of life to it, but there is no doubt that its prime time has passed. ES6/7, however, are still on the rise.
Long live AngularJS!
By the way, check out my previous rants on JS
This post was originally published by the author here. This version has been edited for clarity and may appear different from the original post.
------------------------------------------------------------ revno: 101127 committer: Stefan Monnier <address@hidden> branch nick: trunk timestamp: Wed 2010-08-18 14:10:30 +0200 message: Reindent smie.el modified: lisp/emacs-lisp/smie.el
=== modified file 'lisp/emacs-lisp/smie.el' --- a/lisp/emacs-lisp/smie.el 2010-08-18 12:03:57 +0000 +++ b/lisp/emacs-lisp/smie.el 2010-08-18 12:10:30 +0000 @@ -158,9 +158,9 @@ (if (not (member (car shr) nts)) (pushnew (car shr) last-ops) (pushnew (car shr) last-nts) - (when (consp (cdr shr)) - (assert (not (member (cadr shr) nts))) - (pushnew (cadr shr) last-ops))))) + (when (consp (cdr shr)) + (assert (not (member (cadr shr) nts))) + (pushnew (cadr shr) last-ops))))) (push (cons nt first-ops) first-ops-table) (push (cons nt last-ops) last-ops-table) (push (cons nt first-nts) first-nts-table) @@ -282,7 +282,7 @@ ;; distinguish associative operators (which will have ;; left = right). (unless (caar cst) - (setcar (car cst) i) + (setcar (car cst) i) (incf i)) (setq csts (delq cst csts)))) (unless progress @@ -386,8 +386,8 @@ (cond ((null toklevels) (when (zerop (length token)) - (condition-case err - (progn (goto-char pos) (funcall next-sexp 1) nil) + (condition-case err + (progn (goto-char pos) (funcall next-sexp 1) nil) (scan-error (throw 'return (list t (caddr err) (buffer-substring-no-properties @@ -417,10 +417,10 @@ (let ((lastlevels levels)) (if (and levels (= (funcall op-back toklevels) (funcall op-forw (car levels)))) - (setq levels (cdr levels))) + (setq levels (cdr levels))) ;; We may have found a match for the previously pending ;; operator. Is this the end? - (cond + (cond ;; Keep looking as long as we haven't matched the ;; topmost operator. (levels @@ -462,11 +462,11 @@ (t POS TOKEN): same thing but for an open-paren or the beginning of buffer. (nil POS TOKEN): we skipped over a paren-like pair. nil: we skipped over an identifier, matched parentheses, ..." 
- (smie-next-sexp - (indirect-function smie-backward-token-function) - (indirect-function 'backward-sexp) - (indirect-function 'smie-op-left) - (indirect-function 'smie-op-right) + (smie-next-sexp + (indirect-function smie-backward-token-function) + (indirect-function 'backward-sexp) + (indirect-function 'smie-op-left) + (indirect-function 'smie-op-right) halfsexp)) (defun smie-forward-sexp (&optional halfsexp) @@ -480,11 +480,11 @@ (t POS TOKEN): same thing but for an open-paren or the beginning of buffer. (nil POS TOKEN): we skipped over a paren-like pair. nil: we skipped over an identifier, matched parentheses, ..." - (smie-next-sexp - (indirect-function smie-forward-token-function) - (indirect-function 'forward-sexp) - (indirect-function 'smie-op-right) - (indirect-function 'smie-op-left) + (smie-next-sexp + (indirect-function smie-forward-token-function) + (indirect-function 'forward-sexp) + (indirect-function 'smie-op-right) + (indirect-function 'smie-op-left) halfsexp)) ;;; Miscellanous commands using the precedence parser. @@ -501,14 +501,14 @@ (forward-sexp-function nil)) (while (/= n 0) (setq n (- n (if forw 1 -1))) - (let ((pos (point)) + (let ((pos (point)) (res (if forw (smie-forward-sexp 'halfsexp) (smie-backward-sexp 'halfsexp)))) (if (and (car res) (= pos (point)) (not (if forw (eobp) (bobp)))) - (signal 'scan-error - (list "Containing expression ends prematurely" - (cadr res) (cadr res))) + (signal 'scan-error + (list "Containing expression ends prematurely" + (cadr res) (cadr res))) nil))))) (defvar smie-closer-alist nil @@ -764,13 +764,13 @@ ;; Obey the `fixindent' special comment. (and (smie-bolp) (save-excursion - (comment-normalize-vars) - (re-search-forward (concat comment-start-skip - "fixindent" - comment-end-skip) - ;; 1+ to account for the \n comment termination. 
- (1+ (line-end-position)) t)) - (current-column))) + (comment-normalize-vars) + (re-search-forward (concat comment-start-skip + "fixindent" + comment-end-skip) + ;; 1+ to account for the \n comment termination. + (1+ (line-end-position)) t)) + (current-column))) (defun smie-indent-bob () ;; Start the file at column 0. @@ -802,26 +802,26 @@ (save-excursion (goto-char pos) ;; Different cases: - ;; - ;; We're only ever here for virtual-indent, which is why ;; we can use (current-column) as answer for `point'. (let* ((tokinfo (or (assoc (cons :before token) smie-indent-rules) - ;; By default use point unless we're hanging. + ;; By default use point unless we're hanging. `((:before . ,token) (:hanging nil) point))) ;; (after (prog1 (point) (goto-char pos))) - (offset (smie-indent-offset-rule tokinfo))) + (offset (smie-indent-offset-rule tokinfo))) (smie-indent-column offset))))) ;; FIXME: This still looks too much like black magic!! @@ -896,17 +896,17 @@ ;; affect the indentation of the "end". (current-column) (goto-char (cadr parent)) - ;; Don't use (smie-indent-virtual :not-hanging) here, because we - ;; want to jump back over a sequence of same-level ops such as - ;; a -> b -> c - ;; -> d - ;; So as to align with the earliest appropriate place. + ;; Don't use (smie-indent-virtual :not-hanging) here, because we + ;; want to jump back over a sequence of same-level ops such as + ;; a -> b -> c + ;; -> d + ;; So as to align with the earliest appropriate place. (smie-indent-virtual))) (tokinfo (if (and (= (point) pos) (smie-bolp) (or (eq offset 'point) (and (consp offset) (memq 'point offset)))) - ;; Since we started at BOL, we're not computing a virtual + ;; Since we started at BOL, we're not computing a virtual ;; indentation, and we're still at the starting point, so ;; we can't use `current-column' which would cause ;; indentation to depend on itself. 
@@ -934,12 +934,12 @@ (comment-string-strip comment-continue t t)))) (and (< 0 (length continue)) (looking-at (regexp-quote continue)) (nth 4 (syntax-ppss)) - (let ((ppss (syntax-ppss))) - (save-excursion - (forward-line -1) - (if (<= (point) (nth 8 ppss)) - (progn (goto-char (1+ (nth 8 ppss))) (current-column)) - (skip-chars-forward " \t") + (let ((ppss (syntax-ppss))) + (save-excursion + (forward-line -1) + (if (<= (point) (nth 8 ppss)) + (progn (goto-char (1+ (nth 8 ppss))) (current-column)) + (skip-chars-forward " \t") (if (looking-at (regexp-quote continue)) (current-column)))))))) @@ -1024,8 +1024,8 @@ (defvar smie-indent-functions '(smie-indent-fixindent smie-indent-bob smie-indent-close smie-indent-comment - smie-indent-comment-continue smie-indent-keyword smie-indent-after-keyword - smie-indent-exps) + smie-indent-comment-continue smie-indent-keyword smie-indent-after-keyword + smie-indent-exps) "Functions to compute the indentation. Each function is called with no argument, shouldn't move point, and should return either nil if it has no opinion, or an integer representing the column
NOTE: this document is NOT a W3C draft, it is intended for discussion only.
Copyright © 2005.
Dan Brickley and Brian McBride have contributed to the WordNet conversion described in this note through their work in the WordNet Task Force and additional comments and suggestions. @@TODO more ACKs
WordNet [Fellbaum, 1998] is a heavily-used lexical resource in natural-language processing and information retrieval. More recently, it has also been adopted in the Semantic Web research community for use in annotation, reasoning, and as background knowledge in ontology mapping tools [@@REFS]. Princeton hosts the conversion of the most recent version of WordNet RDF/OWL at the following URI:
Note that this URI points at the newest version, see WordNet versions for more information. WordNet RDF/OWL is maintained by... [@@TODO ... statement of permanence and updates of this version. The TF should look for an organization that is willing to make a commitment for maintaining WordNet for a longer period of time, say one or two years. This commitment entails making available a version of WordNet RDF/OWL for each new Princeton WordNet version and providing a suitable server to host WordNet RDF/OWL and processing functionality to return CBDs in response to HTTP GETs on WordNet URIs.]
This document is composed of three parts. The first part (Section one) is the Primer. Those who are not familiar with WordNet should read Introduction to the WordNet datamodel and possibly Introduction to the WordNet RDF/OWL schema before reading the Primer. The second part consists of Sections three through eight which give more background information for those who are not familiar with WordNet and describe advanced options. It also provides more background to the decisions taken during conversion. The third part (the Appendices) contains detailed information on the RDF/OWL representation and versioning strategy.

As an example, consider a NounSynset with the synsetId "107909067", where the first noun in the synset has the lexical form "bank". The pattern for instances of Synset is synsetId + lexical form of the first WordSense + lexical group symbol (n=noun, v=verb, a=adjective, s=adjective satellite and r=adverb). The first WordSense in the synset has the lexical form "bank". The pattern for URIs of WordSenses is the lexical form of its Word + the WordSense's lexical group + the sense number. Example:
For the URIs for Words we use the lexical form + the prefix "word-". For example:
Synset AdjectiveSynset AdjectiveSatelliteSynset AdverbSynset NounSynset VerbSynset WordSense AdjectiveWordS.
The participleOf property has the URI:
Here follow some typical queries that can be posed against WordNet RDF/OWL once it is loaded into a triple store such as Sesame or SWI Prolog's Semantic Web library [@@REFS] [SWI Prolog, 2006]. The examples are given in the SPARQL query language [SPARQL, 2005]. Which query language is available to a user depends on the chosen triple store.

When downloading and loading WordNet in RDF/OWL into a triple store, this version-specific base URI should be used when querying. The query examples below use version 2.0 as an example. See WordNet versions for more information.
The following queries for all Synsets that contain a Word with the lexical form "bank":
PREFIX wn: <>
SELECT ?aSynset
WHERE { ?aSynset wn:containsWordSense ?aWordSense .
        ?aWordSense wn:word ?aWord .
        ?aWordSense wn:lexicalForm "bank"@en }

Notice the addition of the language tag using "@en". This is necessary in all queries for strings. Queries without the correct language tag do not return results.
The following queries for all antonyms of a specific WordSense ("bank"):
PREFIX wn: <>
SELECT ?aWordSense
WHERE { wn:bank-noun-1 wn:antonymOf ?aWordSense }
The following queries for all Synsets that have a hypernym that is similar to some other Synset:
PREFIX wn: <>
SELECT ?aSynset
WHERE { ?aSynset wn:hyponymOf ?bSynset .
        ?bSynset wn:similarTo ?cSynset }
synsetContains:
There are two ways to query RDF/OWL WordNet. The first option is to download the appropriate WordNet version (see WordNet Basic and WordNet Full and Choosing the appropriate WordNet version) and load it into local processing software such as Sesame [@@REF]:

PREFIX wn: <>
SELECT ?theWordSense
WHERE { ?theWordSense wn:word ?theWord .
        ?theWordSense wn:lexicalForm "bank"@en }

The triples returned for the WordSense URI include:

wn20:bank-noun-1 rdf:type wn20:NounWordSense
wn20:bank-noun-1 wn20:inSynset wn20:107909067-depository_financial_institution-n
wn20:bank-noun-1 wn20:word wn20:bank
wn20:bank-noun-1 wn20:derivationallyRelated wn20:bank-v-3
wn20:bank-noun-1 wn20:derivationallyRelated wn20:bank-v-5
wn20:bank-noun-1 wn20:derivationallyRelated wn20:bank-v-6
wn20:bank-noun-1 rdfs:label "bank"@en
wn20:bank-noun-1 wn20:tagCount "883"@en

Because this WordNet version does not have blank nodes and reified triples, the Concise Bounded Description of the URI is the same as the result of the following SPARQL query:
SELECT ?p ?x WHERE { <> ?p ?x }
Local IDs of Synset instances are composed of the synset ID, the lexical form of the first word sense in the synset, and the lexical group symbol. Thus human readers know the lexical group of the word senses in the synset and have an idea about the kinds of words in the synset. Example:
For WordSenses the word + its lexical group + the sense number is used. Example:
For the URI for Words we use the lexical form, which is unique within English, plus the prefix "word-". For example:
The prefix is required to prevent clashes between the property and class names of the schema and the words. For example, the URIs for the class "Word" and the property "antonym" would be the same as the URIs for the words "Word" and "antonym". Another option would be to put the schema in a different namespace than the data, but that results in additional management for users and the maintainers of the WordNet RDF/OWL version. The prefix approach avoids this drawback.
Some words contain slashes, which have been converted into underscores when generating URIs. This is done to prevent the slashes from being interpreted as the character used to separate hierarchical components in URIs [IETF, 2005]. For example, the URI for the word "read/write_memory" becomes:
This conversion uses "slash URIs" instead of "hash URIs". See Introducing URIs for Synsets, WordSenses, Words for more information. [@@REFS]
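The URI patterns above are mechanical, so they are easy to sketch in code. The following Python sketch is illustrative only: the base URI and function names are our own invention (substitute the real version-specific base URI given in the download section), and only the slash-to-underscore escaping explicitly described above is applied.

```python
# Illustrative sketch of the URI patterns described above.
# BASE is a placeholder -- not the real WordNet base URI.
BASE = "http://example.org/wn20/instances/"

def escape(lexical_form):
    # Slashes would otherwise be read as URI path separators [IETF, 2005].
    return lexical_form.replace("/", "_")

def synset_uri(synset_id, first_lexical_form, group_symbol):
    # synset ID + lexical form of the first WordSense + lexical group symbol
    return BASE + synset_id + "-" + escape(first_lexical_form) + "-" + group_symbol

def wordsense_uri(lexical_form, group, sense_number):
    # lexical form + lexical group + sense number
    return BASE + escape(lexical_form) + "-" + group + "-" + str(sense_number)

def word_uri(lexical_form):
    # "word-" prefix avoids clashes with schema terms such as "antonym"
    return BASE + "word-" + escape(lexical_form)

print(synset_uri("107909067", "depository_financial_institution", "n"))
print(wordsense_uri("bank", "noun", 1))
print(word_uri("read/write_memory"))
```

Running this prints the three URI shapes used throughout the examples: a Synset, a WordSense, and a Word (with the slash escaped).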
Below the files to download are listed for version 2.0 of WordNet RDF/OWL.
Files for other versions can be found at @@URL.
The files below use the version-specific base URI
WordNet 2.0 Full consists of the following three files plus any of the files that contain relations that are listed below.
WordNet 2.0 Basic consists of the following three files plus any of the files that contain relations except those that contain relations between WordSenses
Files that contain relations between Synsets: hyponymy, entailment, similarity, member meronymy, substance meronymy, part meronymy, classification, cause, verb grouping, attribute
Files that contain relations between WordSenses: derivational relatedness, antonymy, see also, participle, pertains to
Files that contain other relations: gloss and frame

For example, a request for a WordNet URI is redirected with a 303 response code. At this point the request should return a graph that is deemed an appropriate response to this request, for which we have chosen the Concise Bounded Description. A straightforward way to implement the response is to have a server-side script that collects all the relevant statements from the WordNet RDF/OWL source files. To relieve Princeton of implementing the server-side scripting, Princeton responds to the HTTP GET with another 303 redirect to the following URI, which is maintained by @@INSTITUTE:
... @@TODO insert base URI of INSTITUTE... /wn20/bank-noun-1/

There the server-side scripting is performed and the resulting RDF graph is returned to the original requesting server.
Example: hyp(100003226,100003009). [organism, living_thing].
Example: cls(100004824,105681603,t). [cell, biology]
Maps to:
Inverse property: @@TODO
Superproperty: classifiedBy
@@TODO: this seems a symmetric relation, but an inverse is specified in the schema
@@TODO: is this a subproperty of rdfs:seeAlso?
Maps to:
Inverse property: @@TODO
The fr operator specifies a generic sentence frame for one or all words in a synset. The operator is defined only for verbs.
Maps to: wn:frame(VerbWordSense, xsd:string). wn:lexicalLabel rdfs:subPropertyOf rdfs:label. For WordSense the contents of rdfs:label are chosen by copying the contents of wn:lexicalLabel.
Caveat: the Prolog source does not contain the Frame definitions. [@@TODO explanation on how to convert Frames part]
and
rdfs:subPropertyOf.
This is only possible if WordNet is a strict specialization of SKOS. In the
second meaning, a set of rules is specified that converts WordNet into instances
of the SKOS schema. This is a more flexible approach and allows for more complex
mappings (mappings other than strict specialization).
A first choice concerns what WordNet class(es) to map to
skos:Concept.
@@TODO. Should all WN classes be regarded as skos:Concepts? Granularity difference. Also a difficult choice regarding what to map to skos:prefLabel/skos:altLabel. WordSenses have equal status in WN, no one preferred over the other. If you choose not to make all classes of WN subclasses of skos:Concept then you lose information. So it seems not possible to define WN as a strict specialization of SKOS.
This conversion builds on three previous WordNet conversions, namely by:
In this document we have not tried to come up with a completely new conversion. Rather, we have studied these existing conversions and filled in some of the gaps. Here are some of the typical differences w.r.t the existing conversions:
rdfs:subClassOf. This is an attractive interpretation, but we argue that not all hyponyms can be interpreted in that way. An attempt to provide a consistent semantic translation of hyponymy has been done [Gangemi, 2003], but in this work we do not attempt a semantic translation of the intended meaning of WordNet relations, while we aim at a logically valid translation of the WordNet data model into RDF/OWL.
Words and
WordSenses URIs. The conversion by the University of Neuchatel represents
Words as URIs, but not word senses.
per denotes (a) a relation between an adjective and a noun or adjective, or (b) a relation between an adverb and an adjective. We convert per into adjectivePertainsTo and adverbPertainsTo.
The conversion of Neuchatel is close to the one in this document. The Neuchatel conversion omits relations "derivation" and "classification". It does not provide sub-relations and inverses for all relationships. Both conversions differ from the other two in that they provide OWL extensions in which property characteristics such as symmetry, inverseness and value restrictions are defined.
The motivation for representing
Words
separately is that words are language-specific. The word
"chat" in english has a different meaning than the same
lexical form in French. The French word does have a
lexico-semantic similarity to the English word "cat".
For future integration of WordNet with
other multilingual resources it is essential that one can
refer to two different words with the same lexical form,
or two words with a different lexical form but similar
meanings.
Future integration of WordNet with e.g. WordNets in other
languages is possible because each Word has its own URI
in the WordNet namespace. The WordNet word "cat" can
be linked to the French word "chat" contained in a French
WordNet by linking the URIs from the separate namespaces
to each other in an appropriate manner, e.g. with a
property that represents a lexico-semantic relationship
between words.
Besides introducing WordSenses and Words as separate entities, we also introduce URIs for them (i.e. they are not labels or blank nodes). First we discuss the motivation for representing them as instances of a class with URIs instead of as labels or blank nodes. Then we discuss how the URIs for instances of WordSense, Word and Synset are generated during conversion.
In some previous conversions WordSenses or Words did not have a URI. A possible motivation for this choice is that the source does not provide unique identifiers for them and tasks such as sense disambiguation do not require them. Not having URIs for WordSenses makes it impossible to refer to WordSenses directly and to use them e.g. for annotation. We have chosen to introduce WordSenses as separate entities with URIs to enable such applications. A similar argument holds for Words: not having URIs for them makes it impossible to refer to them directly. Having URIs also makes it possible to integrate Princeton WordNet in RDF/OWL with other WordNets in RDF/OWL. For example, it enables stating relationships between e.g. the WordNet Word "chat" and "chat" in a French WordNet. Although integration is not part of the activities of this TF, it does aim to make it possible in the future. Therefore Words are given their own URI.
Three kinds of class instances need a URI: instances of the classes Synset, WordSense and Word. Instead of generating any unique ID we have tried to use IDs derived from IDs in the source and also tried to make them human-readable.
For the local ID of Synsets we have chosen the synset identifier provided in the source. For human readability we add two redundant elements: the word of the first WordSense in the synset and the lexical group symbol. Example:
Note that because the synset ID is now incorporated in
the Synset URI, an application can only retrieve the ID by
parsing the URI. To circumvent this awkward parsing, we
introduce a property
wn:synsetId for Synset to
store the ID in.
There are two straightforward options for the local ID that make a WordSense unique within WordNet. Firstly, the combination of synset ID + sense number. Secondly, the first word in the synset + lexical group + sense number. We chose the second option as it is more readable. Example:
For the URIs for Words we use the lexical form + the prefix "word-". For example:
The prefix is necessary to prevent URI clashes with entities in the schema (e.g. the URI for property "antonym" or the class "Word" would clash with those for the lexical forms). Another solution would be to place the schema in a separate namespace, but it may become unclear which schema namespace belongs to which data namespace. Having one namespace per WordNet RDF/OWL version does not have this drawback. A disadvantage of hash URIs is that when an HTTP GET is done (e.g. for the first
A natural extension of this work would be to integrate the OWL model with LMF (Lexical Markup Framework) under development by the ISO TC37/SC4/WG4.
[Brickley, 1999] D. Brickley. Message to RDF Interest Group: "WordNet in RDF/XML: 50,000+ RDF class vocabulary". See also.
[OWL Overview, 2004] Deborah L. McGuinness, Frank van Harmelen (eds.). OWL Web Ontology Language Overview, W3C Recommendation 10 February 2004;
[IETF, 2005] The Internet Engineering Task Force. Uniform Resource Identifier (URI): Generic Syntax
$Revision: 1.3 $ of $Date: 2006/04/12 14:01:28 $
ISSUE-182: Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element.
Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element.
- State:
- CLOSED
- Product:
- TTML Simple Delivery Profile for Closed Captions (US)
- Raised by:
- Monica Martin
- Opened on:
- 2012-09-18
- Description:
- Issue: Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element.
Benefit: Allows SDP-US to use TTML 1.0, SDP-US profile URI, and other profiles. TTML 1.0 already defines the mandatory processing semantics for the intersection of required elements of the profile(s) that apply (Section 5.2).[1]
Proposal to add the following:[2]
1. Language in Section 1 that indicates use of other profiles is not precluded; retain URI requirement for this profile.
2. The profile element to the list of elements in R0007, Section 5.2.2.
3. Language in Section 5.4.2 that articulates how multiple profile elements may exist.
-------
[1]
TTML 1.0 Section 5.2: “If more than one ttp:profile element appears in a TTML document instance, then all specified profiles apply simultaneously.”
[2] Proposed changes (in bold)
Section 1
This constrained profile enumerates a set of required TTML features, some of which may be constrained in behavior, and the capabilities required of a Presentation Processor in TTML 1.0. The semantics defined in TTML 1.0 apply unless otherwise constrained in this profile.
Claims of conformance MUST use this URI and implement the required features and constraints of use and processing outlined in this profile.
Name Designator
simple-delivery
Conformance to this profile does not preclude the:
• Use of other features defined in TTML 1.0. Such behavior is not defined here.
• Use of other profiles that may implement the features in this profile.
Section 5.4.2
Add to Note (1). NOTE: See also Conformance. TTML 1.0 allows zero or more profiles (ttp:profile in the head element) to be specified and used simultaneously. A player may reject documents it does not understand.
Add to Note (2). NOTE: When the use attribute is used on the ttp:profile element, the use attribute could indicate the geographical region for which the profile is used. For example, specific styling capabilities could be used in a particular geographical region. See also Other Constraints.
Requirement R0007
Add ttp:profile element to the list.
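To make the proposal concrete, here is an illustrative sketch (not taken from the issue itself) of a document head carrying two ttp:profile elements. The profile designator URIs below are placeholders, not normative designators; per the TTML 1.0 Section 5.2 text quoted in the description, both profiles would apply simultaneously.

```xml
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:
  <head>
    <!-- Placeholder designators: both profiles apply simultaneously -->
    <ttp:profile
    <ttp:profile
  </head>
  <body><!-- ... --></body>
</tt>
```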
- Related Actions Items:
ACTION-117 on Monica Martin to Add other features related to ttp:profile to Issue-182. - due 2012-10-25, closed ACTION-122 on Glenn Adams to Implement agreed original + additional proposal for Issue-182. - due 2012-11-01, closed
- Related emails:
- RE: TTML Agenda for 15/11/12 (from mdolan@newtbt.com on 2012-11-15)
- TTML Agenda for 15/11/12 (from Sean.Hayes@microsoft.com on 2012-11-15)
- RE: Issue 1 for R0007 Raised with Issue-182 (...WAS: RE: Update to Issue-182 Proposal and Completion of Action-117) (from momartin@microsoft.com on 2012-11-15)
- Issue 1 for R0007 Raised with Issue-182 (...WAS: RE: Update to Issue-182 Proposal and Completion of Action-117) (from momartin@microsoft.com on 2012-11-15)
- RE: Update to Issue-182 Proposal and Completion of Action-117 (from momartin@microsoft.com on 2012-11-13)
- Re: Update to Issue-182 Proposal and Completion of Action-117 (from glenn@skynav.com on 2012-11-13)
- RE: Update to Issue-182 Proposal and Completion of Action-117 (from momartin@microsoft.com on 2012-11-12)
- Re: Update to Issue-182 Proposal and Completion of Action-117 (from glenn@skynav.com on 2012-11-11)
- TTML Agenda for 1/11/12 (from Sean.Hayes@microsoft.com on 2012-11-01)
- RE: Update to Issue-182 Proposal and Completion of Action-117 (from momartin@microsoft.com on 2012-10-25)
- Update to Issue-182 Proposal (from momartin@microsoft.com on 2012-10-18)
- RE: TTML Agenda for 20/9/12 (from Sean.Hayes@microsoft.com on 2012-09-20)
- ISSUE-182: Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element. [Simple Delivery Profile for Closed Captions] (from sysbot+tracker@w3.org on 2012-09-18)
Related notes:
Related to Issue #183: Martin, 26 Sep 2012, 16:47:56
Leave open until we have a solution to Issue-183. Have consistent solution.
Issue-183:
TTWG Oct 11:
TTWG Oct 18:
Add other features, extensions etc. associated with ttp:profile.
TTWG Oct 18:
Updated proposal:
See request for update Oct 18 minutes.
Updated proposal sent Oct 25:
TTWG Oct 25:
Resolved TTWG Oct 25 implement the original and the updated proposals.
Original proposal is in Issue-182.
Updated additional proposal sent Oct 25:
TTWG Oct 25:
Implement original proposal in Adams, 11 Nov 2012, 23:32:55
TTWG Nov 15:
Agreed to add preamble text before R0007.
If a reference to an element type is used in this specification and the name of the element type is not namespace qualified, then the default TT Namespace applies. The semantics on use of namespaces (for example, use of ttp:profile) defined in TTML 1.0 apply.
Remainder of Issue-182 is complete.
Editor needs to add elements to R0007 for profile: extension, extensions, feature, features.
Then add preamble text.
Completed edit requests:
TTWG Nov 29 approved editor updates: Martin, 2 Dec 2012, 23:12:23
Display change log
I was recently writing a component where I had some input fields that a user could use to filter down a data set. Given this filtering could be an intensive operation I wanted to "debounce" the input control so that the filter wasn't run on every key stroke. If you're not familiar with what "debounce" means, the gist is that I don't want to react to changes in the input field until a certain amount of time has passed since the user changed the value. You typically debounce after something like 250 milliseconds, so the delay isn't annoying, but enough to be meaningful.
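Before the Angular specifics, the core idea is worth a tiny sketch. The snippet below is an illustrative, framework-free TypeScript toy (the `Timer` interface and all names are mine, not from Angular or RxJS): each new call cancels the previously scheduled callback, so the wrapped function only fires after `ms` of silence.

```typescript
// Timer is abstracted so the logic is deterministic in tests; in a
// browser you would back it with window.setTimeout / window.clearTimeout.
interface Timer {
  set(fn: () => void, ms: number): number;
  clear(id: number): void;
}

// Returns a wrapper that only invokes `fn` once `ms` elapses with no
// further calls; every call cancels the pending one and restarts the wait.
function debounce<T>(fn: (value: T) => void,
                     ms: number,
                     timer: Timer): (value: T) => void {
  let pending: number | null = null;
  return (value: T) => {
    if (pending !== null) {
      timer.clear(pending); // a new keystroke arrived: restart the quiet period
    }
    pending = timer.set(() => fn(value), ms);
  };
}
```

With real timers, typing "b", "ba", "ban" in quick succession produces a single callback with "ban", firing only after the quiet period elapses.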
Now in Angular 1 there was a really nice way to debounce any input by using the
ng-model-options (docs) directive like so:
<input type="text" name="userName" ng-
This would automatically ensure that the
user.name property wasn't updated until the input's value hadn't changed for a second. Let's talk about how we can do this in Angular 2, and something to watch out for...
Debounce in Angular 2
To debounce an input field with Angular 2 the approach I'll take here is to use the
FormControl class provided by the brand new
@angular/forms package. The
FormControl class is one of the basic building blocks for forms, and replaces the
Control class available in the Angular releases prior to RC2. The reason we use this class is that it can provide changes to the
<input>'s value as an observable. We'll walk through a specific example in a minute, but briefly, in my component I can define a
FormControl property and subscribe to its
valueChanges observable (code) like so:
filter = new FormControl();

constructor() {
  // Subscribe to changes for the value of the control
  //
  this.filter.valueChanges.subscribe((filter: number) => {
    // Do something with the updated value
  });
}
and then in my template I can associate an input field with that property using the
[formControl] directive:
<input type="number" [formControl]="filter" />
Now there are easier ways to bind the input fields to properties on your component, so why bother with this? Well it's so we can leverage all the rxjs operators available, and in this case they include
debounceTime and
distinctUntilChanged. Let's look at the real example to see how they're used.
The example
As with most of my articles I wrote a simple example to help me experiment with this idea. This example has a list of words that I wanted to filter down using a text box that the user can type into. I want to debounce the input and only fire when the value has actually changed, so that's where the rxjs operators come in. Here's the desired behavior, and in this demo notice that as I type the control never loses focus...it just updates after the debounce timeout completes (500ms in this case):
Note, I think the timeouts were lost a little when making the animated gif, but those last changes didn't happen until the 500ms time elapsed
Ok, so what really happened?
Now what really happened is I tried to implement this idea, and was presented with the following behavior. Notice that as I type there is code showing the filter is updated, but the view won't react until some other event happens (in this case I click/tab outside of the input so it loses focus):
Here's what I had for my
valueChanges subscription (here the
FormControl is called loremFilter):
this.loremFilter.valueChanges
  .debounceTime(500)
  .distinctUntilChanged()
  .subscribe((filter: string) => {
    console.log("New lorem-filter:", filter);
    this.processLoremFilter(filter);
  });
...so this seemed really straight-forward, but just didn't work as I expected. The reason has to do with how Angular knows that it should check the component for changes. In particular, I had a "real-world" component that I wanted to use the
OnPush change detection strategy on, and when that's used Angular updates to the view only when a few specific things happen. If you haven't seen it yet, go check out Pascal Precht's great article and talk on how it all works.
In my case when the subscription called the
processLoremFilter() method the internal data of my component was indeed updated, but because this happened in the
Subscribe handler, and I was using the
OnPush strategy, Angular didn't see this as a trigger for running change detection. When I clicked, or tabbed, or pressed a key then change detection would run, and the list in the view would update.
There are two ways to resolve this:
- Don't use the
OnPushchange detection strategy
- Explicitly tell Angular to check for changes in my
Subscribehandler
Let's explore that second option a bit more shall we?
The final demo
Once I found out what was happening things made a lot more sense, and to help show how to work around this I created this final demo:
This version of the example provides a checkbox to let me switch whether or not I want to manually trigger change detection within my
Subscribe handler. This works because Angular provides something called the
ChangeDetectorRef (docs) that can be injected into a component. An instance of
ChangeDetectorRef can be used to manipulate how changes are handled in the component's tree. In our case I use it to force Angular to check for changes when it normally wouldn't.
Here's the AppComponent used for this demo, and recall this came from a more complex scenario so this is a bit contrived:
import { Component, ChangeDetectionStrategy, ChangeDetectorRef } from '@angular/core';
import { REACTIVE_FORM_DIRECTIVES, FormControl } from '@angular/forms';

// We're using a couple operators from rxjs, so we need to import them
//
import "rxjs/add/operator/distinctUntilChanged";
import "rxjs/add/operator/debounceTime";

@Component({
  selector: 'my-app',
  //
  // KEY IDEA
  // Using the OnPush strategy will cause issues if you're not careful
  // and still learning what will trigger Angular to check for changes
  //
  changeDetection: ChangeDetectionStrategy.OnPush,
  //
  // Just load the template so it's not muddy-ing up the component
  //
  templateUrl: "/app/app.template.html",
  //
  // Include the form directives from the new forms release
  //
  directives: [REACTIVE_FORM_DIRECTIVES]
})
export class AppComponent {
  markForCheck = false;

  lorem = "Bacon ipsum dolor amet beef ribs sirloin short loin tenderloin turkey brisket shankle jowl pig leberkas. Tongue doner porchetta, cupim pork belly frankfurter cow chuck corned beef tenderloin flank alcatra jerky turducken meatloaf. Frankfurter beef ribs ham hock, pancetta cupim bresaola meatball ball tip tongue t-bone sausage ground round tenderloin strip steak. T-bone swine ball tip, sirloin landjaeger boudin turkey drumstick shankle meatball biltong filet mignon tail short ribs. Shank beef boudin filet mignon";

  filteredLorem: string[];
  loremFilter = new FormControl();

  constructor(
    // Inject an instance of Angular's change detector ref
    //
    changeDetectorRef: ChangeDetectorRef
  ) {
    this.filteredLorem = this.lorem.split(" ");

    // Subscribe to changes in the input using the valueChanges observable
    // so we can use some nice rxjs operators
    //
    this.loremFilter.valueChanges
      .debounceTime(500)
      .distinctUntilChanged()
      .subscribe((filter: string) => {
        console.log("New lorem-filter:", filter);

        // Do the actual work to filter out words that don't match
        // the current filter value
        //
        this.processLoremFilter(filter);

        // Just leverage the boolean bound to the checkbox in the template
        //
        if (this.markForCheck) {
          // Here we use our change detector instance to tell Angular that this
          // component should be checked for changes. Without this the updates
          // just made above won't be reflected in the UI when using 'OnPush'
          //
          changeDetectorRef.markForCheck();
        }
      });
  }

  /**
   * Updates the filteredLorem array to only those words that contain the
   * provided filter. If the filter is empty, then the original lorem string
   * is used.
   */
  private processLoremFilter(filter: string): void {
    let split = this.lorem.split(" ");
    if (filter === "") {
      this.filteredLorem = split;
    } else {
      this.filteredLorem = split.filter((word) => {
        return word.indexOf(filter) > -1;
      });
    }
  }
}
Just to see this in action here's a final GIF where I show the difference in behaviors. Again note when the checkbox isn't checked, the list isn't updated until some other event outside of the observable happens:
Wrapping up
In this article I walked through a small "gotcha" I ran into while using observables to debounce
<input> elements in my Angular 2 templates. I showed how using the
OnPush change detection strategy can cause strange behavior if you're still learning how to use it properly. Finally, I showed how you can work around this behavior by using the
ChangeDetectorRef class to manually mark a component to be checked for changes. All of the code for this example, and any others I've published, are available in my github repo:
Thanks for reading, and please leave any feedback or questions in the comments.
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Class Methods2:47 with Kenneth Love
Let's add some function to our classes (class functions are called methods) so we can do fancier things!
Attributes are great. But lots of times we want our classes to have conditional actions or to give us back something that's been calculated. So, we'll write functions in our classes. Functions that belong to classes, though, are called methods. They're the same piece of Python; they just belong to a class, so we give them a new name.

Let's go back to our monster.py, and back to our Monster class again, and let's give it a battle cry function. This function will shout whatever the creature says, so we're gonna need to add another attribute as well. Before I finish writing that, let's add a sound, and we'll say that the default sound is a roar. All right, so def battlecry, and it has to take an argument called self, and we're going to return self.sound.upper.

So what is this self argument to our method? Except in some special cases, every method that you create on a class takes, at the very least, the self argument. Self always represents the instance that you're calling the method on. You don't ever have to pass it in yourself, but you do have to write it. It doesn't have to be called self; that's just kind of the general consensus that everyone uses. Handily though, inside of our method we can use the self variable to get information from the current instance, so let's go back to the console and try again.

So we'll go back here. Let's make this a little bigger so we can see it. All right. We do python, and from monster import Monster, and let's do Monster.battlecry. And we got a TypeError: battlecry is missing a required positional argument. The reason that we got that error is because we tried to call this on the class and not on an instance of the class. So, let's make a new instance. We'll do jubjub again, set to a Monster. And let's do jubjub.battlecry, and then we get ROAR in all caps. Now, I'm not sure that a Jubjub bird would roar, but at least our method worked, so let's change the sound of our monster. Instead of saying jubjub bird, let's say that the sound is equal to tweet. And now let's call jubjub.battlecry again, and now we get TWEET, in all caps. That's a much more appropriate sound for a giant killer bird.

That's a pretty simple method. Let's look at a more complicated but more useful example in our next video.
#include <stdint.h>
#include <stdio.h>

typedef enum {State1, State2, State3, Last_State} MainState_t;
typedef enum {SubState1, SubState2, SubState3, SubState4, SubState5, SubState6, Last_SubState} SubState_t;

void demo(MainState_t State, SubState_t SubState);

/* Functions called from nested switch statement.
   First digit is main state, second digit is substate */
void fn11(void);
void fn16(void);
void fn24(void);
void fn32(void);
void fn33(void);
void fn35(void);

void main(void)
{
    MainState_t main_state;
    SubState_t sub_state;

    for (main_state = State1; main_state < Last_State; main_state++)
    {
        for (sub_state = SubState1; sub_state < Last_SubState; sub_state++)
        {
            demo(main_state, sub_state);
        }
    }
}

void demo(MainState_t State, SubState_t SubState)
{
    switch (State)
    {
        case State1:
            switch (SubState)
            {
                case SubState1:
                    fn11();
                    break;
                case SubState6:
                    fn16();
                    break;
                default:
                    break;
            }
            break;

        case State2:
            switch (SubState)
            {
                case SubState4:
                    fn24();
                    break;
                default:
                    break;
            }
            break;

        case State3:
            switch (SubState)
            {
                case SubState2:
                    fn32();
                    break;
                case SubState3:
                    fn33();
                    break;
                case SubState5:
                    fn35();
                    break;
                default:
                    break;
            }
            break;

        default:
            break;
    }
}

void fn11(void) { puts("State 1, substate 1"); }
void fn16(void) { puts("State 1, substate 6"); }
void fn24(void) { puts("State 2, substate 4"); }
void fn32(void) { puts("State 3, substate 2"); }
void fn33(void) { puts("State 3, substate 3"); }
void fn35(void) { puts("State 3, substate 5"); }
The key points are that we have nested switch statements and the substate is sparse; that is, the number of substates for main state 1 is different from the number of substates for main state 2, and so on. If you've ever been in the situation of having to write a nested state machine like this, you'll rapidly find that the code becomes very unwieldy. In particular, functions many hundreds of lines long with break statements all over the place are the norm. The result can be a maintenance nightmare. Of course if you end up going to three levels, then the problem compounds. Anyway, before looking at a pointer to function implementation, here's the output from the above code:
State 1, substate 1
State 1, substate 6
State 2, substate 4
State 3, substate 2
State 3, substate 3
State 3, substate 5
In addition, using IAR’s AVR compiler, the code size with full size optimization is 574 bytes and the execution time is 2159 cycles, with the bulk of the execution time taken up by the puts() call.
Let’s now turn this into a pointer to function implementation. The function demo becomes this:
void demo(MainState_t State, SubState_t SubState)
{
    static void (* const pf[Last_State][Last_SubState])(void) =
    {
        {fn11,    fnDummy, fnDummy, fnDummy, fnDummy, fn16},
        {fnDummy, fnDummy, fnDummy, fn24,    fnDummy, fnDummy},
        {fnDummy, fn32,    fn33,    fnDummy, fn35,    fnDummy}
    };

    if ((State < Last_State) && (SubState < Last_SubState))
    {
        (*pf[State][SubState])();
    }
}
Note that the empty portions of the array are populated with a call to fnDummy(), which as its name suggests is a dummy function that does nothing. You can of course put a NULL pointer in the array instead, and then extract the pointer, check to see if it's non-NULL and call the function. However, in my experience it's always faster to just call a dummy function.
So how does this stack up to the nested switch statements? Well as written, the code size has increased to 628 bytes and cycles to 2846. This is a significant increase in overhead. However the code is a lot more compact, and in my opinion dramatically more maintainable. Furthermore, if you can guarantee by design that the parameters passed to demo() are within the array bounds (as is the case with this example), then you can arguably dispense with the bounds checking code. In which case the code size becomes 618 bytes and the execution time 2684 cycles. It’s your call as to whether the tradeoff is worth it.
I’m glad to know that you are back.
Nice to see you back & that the whole embeddedgurus site is seemingly springing back to life
Regarding the topic of function pointer jump tables, I would like to point out the importance of using a lot of defensive programming. Because if you have a single bug in them, the whole program will go haywire. To use the last enum member of the states as array size for the jump table, as shown in Nigel’s example, is very good practice. As is the boundary checking of whether the enum is in range.
Be aware though, that C enums use an inconsistent type system. An enumeration constant State1 is guaranteed to have type signed int, which is not necessarily of the same type as an enumerated type MainState_t. If you think that sounds too stupid to be true, see C11 6.7.2.2. This is a flaw in the C language and a potential source of bugs in embedded systems, where compilers often implement enumerated types as equivalent to uint8_t. (A MISRA-C static analyser tool would find such bugs, while manual code review is less likely to do so.)
For the sake of readability, it may also be wise to hide away the obscure function pointer syntax behind typedefs. Here’s an almost identical example for a state machine, that I wrote not long ago:
Though I’m not so sure if a multiple dimension array is the best way to implement such a state machine. The most likely scenario in an object-oriented design, is that the sub states are internal for each state. For example, in a “SPI communication state”, the caller couldn’t care less about whether the SPI driver is idle, sending or receiving, as long as it is doing its job. In that case the sub states should exist as static “private” variables inside the state, and get handled internally.
Thanks for the welcome back. Defensive programming is important. The original article I wrote on pointers to functions has a fair amount of description on defensive techniques when using function pointers, and also suggests using typedefs to hide the declaration complexity.
I am interested in the difference in signedness of enum constants and enum types! I knew the constants were signed int, and I have lacked the ability to decide the signedness of the type. I assumed (wrongly, apparently) that the type was also signed, which bugged me when doing boundary checks: it feels like a shame to check if a variable is both under MAX *and* positive, when the need for positiveness would have been removed by making the type unsigned. I have even considered creating a separate typedef for the type associated with the enum, although this removes the type checking advantage.
Talking about typedef, don’t you think that `void (*) (void)` is common enough to be a known idiom? It’s used only for the array declaration, I don’t really see the need of typedefing it away.
Regarding the need to typedef away void(*)(void), C speaks for itself. Compare these lines:
func_t function (func_t p);
and
void(*function (void(*p)(void)))(void);
Are you really certain you don’t want to use typedef after all?
Sure, that’s a good example. It depends on the usage. If the only usage you have is that described in the article (one array of void (*) (void)), I don’t see it as confusing enough to typedef it.
Nice to have you back!
This situation (although I’ve only used it for one-dimensional arrays) is one where C99 helps a lot, with named initializers. It’s much harder to associate a function with the wrong index, and the indexes that are not mentioned are initialized to NULL (which would favor checking for non-NULL before dereferencing, which I always do anyway):
static void (* const pf[Last_State][Last_SubState])(void) =
{
[State1] = {
[SubState1] = fn11,
[SubState6] = fn16
},
[State2] = {
[SubState4] = fn24
},
[State3] = {
[SubState2] = fn32,
[SubState3] = fn33,
[SubState5] = fn35
}
};
Maybe you could try this and check size and execution time?
(let’s see if code formatting in comments has improved).
I’d forgotten about this Gauthier. It’s a nice suggestion. Code size = 624, cycles = 2744, plus it has the advantage of checking for NULL.
Mr. Gauthier,
I guess this feature is called designator or designated initializer (somewhere in section 6.7.8 from ISO/IEC 9899:1999).
Regards,
This is a good technique that everyone should understand – thanks for the post.
Greater code size due to the empty functioning pointers makes sense but any thoughts on why execution time goes up? Indexing into the array is a pretty simple operation.
I strongly suspect that IAR’s optimizer is working out that the array is so sparse that some sort of nested if is more efficient. I suspect that with a much larger state machine the PtoF would compare much more favourably.
I suspect that the execution time is slower because of all the extra calls to fnDummy(). If you compared to NULL instead, then it would probably be faster by avoiding the overhead of a function call for each dummy state, as there are more transitions to dummy states than useful transitions. However, in a real system, I'm not sure it would be realistic to have such a high proportion of transitions to 'dummy' states.
The bigger advantage of this is that since it is a table lookup, the time per execution of demo() is likely to be close to constant, which is much less likely to be the case for nested switch statements.
I’m sorry to be such a party pooper, but the example code is not very good. I understand that the purpose here is just to illustrate the use of function pointers, and state machines are definitely the “killer app” for it. (I’ve even devoted the Section 3.7.1 of my book “Practical UML Statecharts in C/C++, 2nd Ed.” to discussing the role of pointers to functions in state machine implementations.)
But the state machine examples with or without pointers to functions are rather bad and I would *not* recommend to use any such code in real projects.
So, what exactly I dislike about the example (the last one with pointers to functions)?
First, the code seems to be inspired by the venerable state-table technique, but is confusing because the second dimension is used by “sub-states” (instead of events). The use of sub-states would suggest some sort of state hierarchy, but this is certainly not a hierarchical state machine. So what is it?
Second, presumably there are some state transitions in this state machine, but none of the functions illustrates how to achieve a transition (with a global variables “state” and “substate” ?).
Third, the technique in the original post requires enumerating the “states” (and “substates”!). Better techniques don’t require enumerating states (only events). For example, every state can be mapped to a function (state-handler function). Then a single pointer-to-function “state variable” can always point to the currently active state. A state transition in this case simply changes this pointer-to-function. This avoids the need to enumerate states.
Anyway, while I definitely agree that state machines are the best application for pointers to functions, the actual implementation matters.
Actually Miro, the point of the post was to answer a question I’d received about how to replace nested switch statements with a sparse 2-D array of function pointers. Having said that, as Lundin alluded to in an earlier comment, I agree this isn’t necessarily the best way to implement a state machine, and in particular a hierarchical state machine.
I would even take out the second dimension and build a totally separated state machine (I suppose that is the kind of things Lundin meant with his object-oriented comment).
Interesting point about storing a pointer to function instead of a table index! I have used the table-based state machine (very much like Nigel’s example) in the past, and sanity-checked the index value as well as non-NULLness of the function pointer at that index. I am not very sure why, but it seemed a good idea to check if the state variable (static linkage, not a “real” global) was within boundaries. You cannot easily do that with storing only the function pointer, can you?
Again, I’m not sure why I worry the function pointer would get an erroneous value, but if it does then it’s hard to detect. You could argue that if it could, then why couldn’t the content of the function pointer array, or the state variable within its boundaries…
It is often useful to do something special on the entry of a state. How do you detect that, do you compare the previous function pointer value to the current one, and if it’s different you got an entry? Or do you create an additional entry state for all states?
I’ve used this technique a few times. However, I’ve found that it becomes even more useful when expanded upon.
I create a structure:
{
{ state1, fptr1 },
{ state2, fptr2 },
{ ……. , ….. },
{ NULL, errFctnPtr }
};
Then the code will search the table for a match on state.
The code then contains a loop indexing through this table comparing the state against the state entries in the table.
If there’s a match, then run the function. If it gets to the NULL entry, then execute the errFctn.
This makes the code very simple and modular. It makes it easy to read. Any new states can be added
to the table very easily. This works especially well for handling communications,
where the state is the received command.
Sorry if off topic, and the article is an interesting exercise, but until somebody learns me better it seems to me that discussions about state machines need go no further than the Quantum Platform components. Correct, document-able, maintainable, understandable, portable, affordable, recognizable, orthogonal, and fun.
I have mixed feelings on this.
Having written many, many state machine based pieces of code (especially communication protocol implementations and drivers) over the years, my preference is to always prefer obvious over compact / efficient.
Sometimes Obvious means you use nested switch statements, simply because a reading of the code makes the execution conditions, input conditions, etc…. kinda obvious. When adding / modifying / maintaining code, obvious things tend to be understood faster – even if less conceptually elegant.
One of the other benefits of switch statements is that the legal values and bounds are checked; you can use default clauses, so sanity checking of values, parameters, etc is easy. (And in a state machine, EVERY switch statement should have a default clause that allows recovery in the event of insanity.)
All embedded systems – but especially those that run for long periods of time – should handle crazy values that are illegal – single event upsets are real and will cause (very) infrequency changes of variables in RAM – by perhaps only a single bit. So state variables should be bounds checked and a recovery process should be coded in every case.
Use of function pointers makes the bounds + sanity checking process far more difficult. The table of static function pointers in most embedded system would be placed into ROM / Flash, and thus is not affected. So what should be worried about is the index into that table – which should always include checking and sanity recovery. Even then, the extremely rare runaway due to single event upsets is still possible.
My experience of this kind of thing is that in many (most?) cases, the code can be refactored in some way that removes, reduces, or simplifies the nested switches. And that approach, whilst keeping things obvious, and avoiding function pointers is probably the better way to go. It may require more thought – but like most highly skilled jobs, that’s what we are here for. (also known as: if it were easy, everyone would be doing it.)
Some notes:
1. Function pointers are with no doubt a very useful mechanism.
2. Replacing switches with function pointer tables (the concepts is called lookup table…) may have some advantages in some situations but it can bring a lot of disadvantage as well (code size, readability, robustness).
3. the examples above are not state machines at all – nevertheless the source code suggests it (“state, substate”) Whats missing is a mechanism which ties states and events together to transitions. The above examples only call certain predefined functions depended on two parameters. Thats all.
The usual way to encode state transitions into a 2-dimensional table which maps every possible input state with every possible output state to an action results in huge ROM memory footprints if there are more than some trivial states. The memory complexity is O(n^2), which is not a good idea for microcontrollers with limited flash ROM sizes.
In this ASP.NET tutorial you will learn how to export div data to Excel. Sometimes we get a requirement to export div data to Excel, and this tutorial was written for that purpose. You can export data that lies within tables, paragraphs, etc. to Excel. In this tutorial I have put a table inside the div and then exported that div. Now let's have a look at how to do so.
Export div data to excel in asp.net using Export_div_data.aspx
<form id="form1" runat="server">
    <div id="divExport" runat="server">
        <table>
            <!-- ... table rows with the data to be exported ... -->
        </table>
    </div>
    <asp:Button ID="Button1" runat="server" Text="Export" OnClick="Button1_Click" />
</form>
In the .aspx page we simply have a div. I give the div an id and also use its runat="server" attribute so that the div can be accessed from the server side; inside the div we have a table whose data will be exported to Excel.
Export_div_data.aspx.cs
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Net;
using System.IO; // Do not forget to import this namespace

public partial class Export_div_data : System.Web.UI.Page
{
    protected void Button1_Click(object sender, EventArgs e)
    {
        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.xls");
        Response.Charset = "";
        Response.ContentType = "application/vnd.xls";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        divExport.RenderControl(htmlWrite);
        Response.Write(stringWrite.ToString());
        Response.End();
    }
}

This code is quite simple: I add a header for the .xls file, and then, using the StringWriter and HtmlTextWriter classes, I export the div data into Excel.
Happy Coding!!!
6 comments:
divExport contains images and css applied from css files, in this case what to do?
Where are you passing the div ID to the export function? i.e. how does the function know which div contents have to be exported to excel?
My dear, by using this line of code:
divExport.RenderControl(htmlWrite);
the function will come to know that divExport's contents will be exported to Excel.
Thanks!
I have a question though, How would you implement this in MVC3 using Razor(.cshtml)?
In your example, since ExpToExcel.aspx.cs and ExpToExcel.aspx are connected, we can just directly call the divID in the .cs file without any problems.
How would we do this using .cshtml files? do we pass the divID from the View(.cshtml file) to the controller(.cs file)? If so, can you give an example of how its done?
Thanks in advance.
hi!! i liked your tutorial is excellent and saved me alot of time but im having a problem im trying to export to .xlsx i tryied application/vnd.ms-excel but is not working any ideas? thanks
Hi isaac! Just for you a quick reference that can solve your problem is
I will write a post that discusses this topic as well, but if it is a bit urgent the above reference can solve your problem.
Red Hat Bugzilla – Bug 18039
g++ breaks glibc
Last modified: 2008-05-01 11:37:59 EDT
Compiling
extern "C" {
void exit (int);
};
#include <stdlib.h>
with g++ yields:
In file included from foo.cpp:6:
/usr/include/stdlib.h:578: declaration of `void exit (int) throw ()'
throws different exceptions
foo.cpp:2: than previous declaration `void exit (int)'
This breaks autoconf scripts which includes stdlib.h in a AC_TRY_RUN when
the
language is set to c++.
Seems to be the same as described at
...tells a lot about what kind of testing RedHat does before launching a new
product. This is VERY frustrating.
This is not a bug. void exit(int) is a conflicting redeclaration because the exit() prototype in stdlib.h carries an exception specification (throw ()).
As bero mentioned, this really is not a bug, and it is good that current g++ is stricter about user bugs than it used to be. Write correct C++ code and you should get rid of this warning. If current GNU autoconf still generates this code it should be fixed; I will check it out.
> > I don't think there's anything "special" about my ZClass. It's derived from
> > Catalog Aware and ObjectManager. I believe I reproduced it without the
> > Catalog Aware and got the same results.
> >
> > I'm using Andy Dustman's version of the MySQLDA and TinyTable v0.8.2. They
> > both work just fine once I get the objects in the right place. Maybe the
> > problem just happens to be with these two products, but I have no clue.
> >
> > You're welcome to fetch the product at
> > if you want to give it a look. It's a one day throw-together port of the
> > issue tracking system used by the PHP project with modifications for my own needs.

OK, I downloaded it and I think I found your problem (not sure how to remediate this, though). If you look in the source of the management screen <somefolder>/<IssueTrackerInstance>/manage_main, the dropdown list for adding Products looks like this:

<FORM ACTION="<somefolder>/<IssueTrackerInstance>/" METHOD="GET">
<SELECT NAME=":method" ONCHANGE="location.href=''+this.options[this.selectedIndex].value">
<OPTION value="manage_workspace" DISABLED>Available Objects
<OPTION value="manage_addProduct/OFSP/documentAdd">DTML Document
<OPTION value="manage_addProduct/OFSP/methodAdd">DTML Method
<OPTION value="manage_addProduct/MailHost/addMailHost_form">Mail Host
<OPTION value="manage_addTinyTableForm">TinyTable
<OPTION value="manage_addProduct/OFSP/manage_addUserFolder">User Folder
<OPTION value="manage_addZMySQLConnectionForm">Z MySQL Database Connection
</SELECT>
<INPUT TYPE="SUBMIT" VALUE=" Add ">
</FORM>

As you see, most of the items start with manage_addProduct/.... Not so with TinyTable and Z MySQL Database Connection: they call the manage_addTinyTableForm and manage_addZMySQLConnectionForm forms, and do not switch the namespace to manage_addProduct. Why this is a problem, I can't tell, but this _is_ the problem. I'm not quite sure about the solution.
Probably it's best to make a custom manage_main form that does the right incantations for adding products and then map this to your Contents View in the ZClass definition. As a side note I'd like to remark that all products should comply with the same manage_addProduct interface, because the current situation leads to nasty problems.

Rik
Re: [Zope] Problem with adding items to ZClass instance
Rik Hoekstra Tue, 20 Jun 2000 01:59:46 -0700
By Alvin Alexander. Last updated: June 3 2016. As you’ll see from the example, you open and read the file in one line of code, and everything else is boilerplate:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JavaImageIOTest
{
    public JavaImageIOTest()
    {
        try
        {
            // the line that reads the image file
            BufferedImage image = ImageIO.read(new File("/Users/al/some-picture.jpg"));

            // work with the image here ...
        }
        catch (IOException e)
        {
            // log the exception
            // re-throw if desired
        }
    }

    public static void main(String[] args)
    {
        new JavaImageIOTest();
    }
}
As you can see from this sample Java code, the ImageIO class's read method can throw an IOException, so you need to deal with that. I've dealt with it using a try/catch block, but you can also just throw the exception.
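Once the BufferedImage is loaded you can query it for information. Here's a small round-trip sketch of my own that writes an in-memory image to a temporary file and reads it back with ImageIO, so it runs without needing a picture on disk:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Main {
    public static void main(String[] args) throws IOException {
        // create a small image in memory so the example is self-contained
        BufferedImage original = new BufferedImage(40, 30, BufferedImage.TYPE_INT_RGB);

        File tmp = File.createTempFile("imageio-demo", ".png");
        tmp.deleteOnExit();
        ImageIO.write(original, "png", tmp);

        // the same one-line read shown in the article
        BufferedImage image = ImageIO.read(tmp);
        System.out.println("dimensions: " + image.getWidth() + "x" + image.getHeight());
    }
}
```

In real code you would replace the temporary file with your own image path, exactly as in the example above.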
Thanks
Those days are over. Not only was that approach very kludgey, it wasn't all that efficient either. Now with .NET we can create a CLR function to do all the heavy lifting.
First you will need to create a Database Project. For this post I will be documenting how I did it using Visual Studio 2010. First thing you will do is open up your IDE and navigate to "New Project --> Database --> SQL Server". Take special note on how you name this project. I would name it something like {dbName}CLR as it will be the project where you will put all of your CLR Functions for a specific database.
Next you will then "Add a New Item" and choose the type "Class". This class file can be named anything, but I would keep it sort of generic as it will most likely be the same project that you add all of your functions to.
Next cut and paste the following code below into your class file:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [SqlFunction(Name = "fnToList", FillRowMethodName = "FillRow", TableDefinition = "ID NVARCHAR(255)")]
    public static IEnumerable SqlArray(SqlString str, SqlChars delimiter)
    {
        if (delimiter.Length == 0)
            return new string[1] { str.Value };

        return str.Value.Split(delimiter[0]);
    }

    public static void FillRow(object row, out SqlString str)
    {
        str = new SqlString((string)row);
    }
};
Before you can deploy the class, you will need to enable clr on your SQL Server. To do this you can execute the following commands:
sp_configure 'clr enabled', 1;
reconfigure with override;
After you compile the Class, which will hopefully be error-free, you can then Deploy the project. When you deploy the project it will push the Assembly to the database that you specified during the Project creation. Visual Studio will also create the user defined function for you.
If you did everything correctly you should see in SSMS in your Table-valued Functions a new function.
To test this, you can simply run the following:
and it should produce the following output:
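For example, a call like the following (my own illustrative query; the input string, delimiter, and the dbo schema are assumptions, though fnToList and the ID column come from the SqlFunction attribute above) splits the string into one row per value:

```sql
SELECT ID FROM dbo.fnToList(N'red,green,blue', ',');
-- ID
-- -----
-- red
-- green
-- blue
```

Because the function is table-valued, it can also be joined against other tables, which is the usual reason for wanting a split function in T-SQL.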
So I'm wondering how to round a double to the nearest eighth in C (not C++, C#, or Java; I tried searching for the answer before posting here, and those are the only languages I found such a tutorial for). Does anyone have an idea on how to do this?
Thanks in advance,
Peter
As you stated, you want your number rounded up to the nearest 1/8th.
#include <math.h>
#include <stdio.h>

double roundToEight(double value)
{
    return ceil(value * 8) / 8;
}

int main()
{
    printf("%f\n", roundToEight(12.42));  // 12.500
    printf("%f\n", roundToEight(12.51));  // 12.625
    printf("%f\n", roundToEight(12.50));  // 12.500
    printf("%f\n", roundToEight(-0.24));  // -0.125
    printf("%f\n", roundToEight(0.3668)); // 0.375
    return 0;
}
If you want negative numbers to be rounded down instead, you can put an if statement there and use floor() instead of ceil() on the negative branch.
Compare Iso Osi Model And Stack Computer Science Essay
First of all, our focus in this assignment is the application layer; we briefly cover the application layer's functions and standards. The Open Systems Interconnection (OSI) reference model is a layered representation created as a guideline for network protocol design. The OSI model has seven logical layers, each of which has unique functionality and to which specific services and protocols are assigned. Information is passed from one layer to the next, from the Application layer down to the Physical layer and vice versa. The Application layer provides the interface between the applications we use to communicate and the underlying network over which our messages are transmitted, and it is used to exchange data between programs running on the source and destination hosts. The figure below shows the model:
[Figure: the Application layer, comparing the ISO OSI model and the TCP/IP stack]
Of the many application layer protocols, our focus will be the POP protocol, an Internet electronic mail standard that specifies how an Internet-connected computer can function as a mail-handling agent. Messages first arrive at a user's electronic mailbox, which lives on the service provider's computer; from this central storage point, you can access your mail from different computers. In either case, a POP-compatible electronic mail program running on your workstation or PC establishes a connection with the POP server and detects that new mail has arrived. You can then download the mail to the workstation or computer and reply to it. In computing terms, the Post Office Protocol (POP), used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection, is an application-layer Internet standard protocol. The two most prevalent Internet standard protocols for e-mail retrieval are POP and IMAP (Internet Message Access Protocol), and all modern e-mail clients and servers support both. With version 3 (POP3) being the current standard, the POP protocol has been developed through several versions. Most webmail service providers also provide IMAP and POP3 service.
DESCRIPTION OF "POP" PROTOCOL
POP (Post Office Protocol)
In the world of snail mail, the sender writes a letter and drops it into a mail box.
When this happens, the letter has just entered a sorting and transportation system where it stays for some amount of time. Eventually the letter ends up with a mail carrier who delivers it into the mail box of the receiver. The receiver opens the mail and, ah, lets it lay on the front hall table until someone throws it away. To find the mail later, the receiver must dig through the trash. If the receiver needs the mail while away from home and the trash, well, too bad. The Post Office Protocol (POP) for delivering email works somewhat similarly to this. A person sends an email from a computer, similar to dropping it into a mail box. An SMTP server routes it across the Internet to an email server, analogous to sorting and transporting. The receiver contacts the email server using POP and downloads the email to a local computer, like the mail carrier delivering a letter. Once on the computer it can be moved from the inbox into a different email folder, so much for the hall table. The mail stays on the computer until the user deletes it.
Besides that, POP supports simple download-and-delete access to remote mailboxes, although most POP clients have an option to leave mail on the server after download. E-mail clients using POP can connect, retrieve all messages, store them on the user's PC as new messages, delete them from the server, and then disconnect. The messages available to the client are fixed when a POP session opens the maildrop, and are identified by a message number local to that session or, optionally, by a unique identifier assigned to the message by the POP server. This unique identifier is permanent and unique to the maildrop, and allows a client to access the same message in different POP sessions. Mail is retrieved and marked for deletion by message number. When the client exits the session, the mail marked for deletion is removed from the maildrop.
The Purpose of POP, the Post Office Protocol
If someone sends a message to us, it usually cannot be delivered directly to our computer. The message has to be stored somewhere, in a place where we can pick it up easily. Our Internet service provider's mail server operates 24 hours a day; it receives the message for us and keeps it until we download it. For example, suppose our email address is [email protected] As our ISP's mail server receives email from the Internet, it looks at each message, and if it finds one addressed to [email protected], that message is filed into a folder reserved for our mail. This is where the message is kept until either we retrieve it or one of our ISP's administrators finds our account has been filled with spam and decides to delete all the mail in it. POP, the Post Office Protocol, is what allows us to retrieve mail from our ISP.
What the Post Office Protocol Allows You to Do
We can do these things with this protocol: retrieve mail from our mailbox on the server, and delete mail from it.
Of these, the second probably sounds the most dangerous. Deleting something is always frightening. Remember, though, that you retrieve your mail before you delete it and thus have a copy. When your mailbox is full, nobody will be able to send you any email until you have cleaned it up. If you leave all your mail on the server, it will pile up there and eventually lead to a full mailbox.
The Post Office Protocol (POP) used to retrieve mail from a remote server is a very simple protocol. It defines the basic functionality in a straightforward manner, is easy to implement and, of course, easy to understand. Let's find out what happens behind the scenes when your email program fetches mail in a POP account. First, it needs to connect to the server.
Usually the POP server listens on port 110 for incoming connections. Upon connection from a POP client (your email program) it will hopefully respond with +OK pop.philo.org ready or something similar. The +OK indicates that everything is fine.
After we have successfully logged in to our POP account at the server, we may first want to know if there is new mail at all and then possibly how much.
The STAT command returns the number of messages in the mailbox and their total size in octets.
Now, after finding out whether we have new mail, comes the real thing. The messages are retrieved one by one with their message number as an argument to the RETR command.
The server responds with +OK and the message as it is, in multiple lines. The message is terminated by a period on a line by itself. For example:
RETR 1
+OK 2552 octets
Blah! <POP server sends message here>
.
If we try to get a message that does not exist, we get -ERR no such message.
Now we can delete the message using the DELE command. (We can, of course, also delete the message without having retrieved it if it is one of those days).
It is good to know that the server will not purge the message immediately; it is merely marked for deletion. Actual deletion only happens when we end the session normally. So no mail will ever be lost if, for example, the connection suddenly dies.
The server's response to the DELE command is +OK message deleted:
DELE 1
+OK message 1 deleted
If it is indeed one of those days and we have marked a message for deletion that we do not want to be deleted, it is possible to undelete all messages by resetting the deletion marks. The RSET command returns the mailbox to the state it was in before we logged in.
The server responds with +OK and possibly the number of messages:
RSET
+OK 18 messages
After we have retrieved and deleted all the messages it is time to say goodbye using the QUIT command. This will purge the messages marked for deletion and close the connection. The server responds with +OK and a farewell message:
QUIT
+OK bye, bye
It is possible that the server was unable to delete a message. Then it will respond with an error like -ERR message 2 not deleted.
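The whole dialogue above (STAT, RETR, DELE, RSET, QUIT) is easy to model. Below is a minimal sketch in Python rather than a real network client; the Maildrop class and its method names are my own invention, but the responses and the mark-for-deletion semantics follow the transcript above: DELE only marks a message, RSET unmarks everything, and only QUIT actually purges.

```python
class Maildrop:
    """Simulated POP3 maildrop: DELE only marks; RSET unmarks; QUIT purges."""

    def __init__(self, messages):
        self.messages = list(messages)   # message bodies
        self.deleted = set()             # message numbers marked for deletion

    def stat(self):
        # "+OK <count> <total size in octets>", counting only unmarked mail
        kept = [m for i, m in enumerate(self.messages, 1) if i not in self.deleted]
        return "+OK %d %d" % (len(kept), sum(len(m) for m in kept))

    def retr(self, n):
        if n < 1 or n > len(self.messages) or n in self.deleted:
            return "-ERR no such message"
        body = self.messages[n - 1]
        return "+OK %d octets\r\n%s\r\n." % (len(body), body)

    def dele(self, n):
        if n < 1 or n > len(self.messages) or n in self.deleted:
            return "-ERR message %d not deleted" % n
        self.deleted.add(n)
        return "+OK message %d deleted" % n

    def rset(self):
        # undo all deletion marks, back to the state at login
        self.deleted.clear()
        return "+OK %d messages" % len(self.messages)

    def quit(self):
        # only now are marked messages actually removed
        self.messages = [m for i, m in enumerate(self.messages, 1)
                         if i not in self.deleted]
        self.deleted.clear()
        return "+OK bye, bye"
```

A real client would send these commands over a TCP connection to port 110 instead of calling methods, but the state handling is the same.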
HISTORY OF "POP" PROTOCOL
POP (POP1) was specified in RFC 918 (1984); POP2 followed a year later. The original specification of POP3 is RFC 1081 (1988). Its current specification is RFC 1939, updated with an extension mechanism in RFC 2449 and an authentication mechanism in RFC 1734. POP2 was assigned well-known port 109. Initially, the original POP3 specification supported only an unencrypted USER/PASS login mechanism. POP3 currently supports several authentication methods to provide varying levels of protection against illegitimate access to a user's e-mail; most are provided by the POP3 extension mechanisms. APOP, for example, is a challenge/response protocol which uses the MD5 hash function in an attempt to avoid replay attacks and disclosure of the shared secret. An informal proposal was once outlined for a "POP4" specification, complete with a working server implementation. This "POP4" proposal added basic folder management, multipart message support, and message flag management, allowing for a light protocol supporting some popular IMAP features that POP3 lacks. However, no progress has been observed on this "POP4" proposal since 2003.
EXAMPLE OF A CLIENT-SERVER COMMUNICATION
A clear example would be e-mail. E-mail, the most popular network service, has revolutionized how people communicate through its simplicity and speed. To run on a computer or other end device, however, e-mail requires several applications and services. Two example application-layer protocols are the Post Office Protocol (POP) and the Simple Mail Transfer Protocol (SMTP), shown in the figure. When people compose e-mail messages, they typically use an application called a Mail User Agent (MUA), or e-mail client. The MUA allows messages to be sent and places received messages into the client's mailbox, which are two distinct processes. To receive e-mail messages from an e-mail server, the e-mail client can use POP.
The e-mail server operates two separate processes:
Mail Transfer Agent (MTA)
Mail Delivery Agent (MDA)
The Mail Transfer Agent (MTA) process is used to forward e-mail. As shown in the figure, the MTA receives messages from the MUA or from another MTA on another e-mail server. Based on the message header, it determines how a message has to be forwarded to reach its destination. If the mail is addressed to a user whose mailbox is on the local server, the mail is passed to the MDA. If the mail is for a user not on the local server, the MTA routes the e-mail to the MTA on the appropriate server. In the figure, we see that the Mail Delivery Agent (MDA) accepts a piece of e-mail from a Mail Transfer Agent (MTA) and performs the actual delivery. The MDA receives all the inbound mail from the MTA and places it into the appropriate users' mailboxes. The MDA can also resolve final delivery issues, such as virus scanning, spam filtering, and return-receipt handling.
As mentioned earlier, e-mail can use the protocols POP and SMTP. POP and POP3 (Post Office Protocol, version 3) are inbound mail-delivery protocols and are typical client/server protocols. They deliver e-mail from the e-mail server to the client (MUA). The MDA listens for a client to connect to the server. Once a connection is established, the server can deliver the e-mail to the client.
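The local-versus-remote decision the MTA makes can be sketched as a tiny function (Python; the function and parameter names are mine, not from the text): mail addressed to a mailbox on the local server is handed to the MDA, while anything else is forwarded to the MTA responsible for the recipient's domain.

```python
def route(recipient, local_domain):
    """Decide what an MTA does with a message, per the MTA/MDA split above:
    mail for a local mailbox goes to the MDA for final delivery; anything
    else is relayed to the MTA on the appropriate remote server."""
    user, _, domain = recipient.partition("@")
    if domain == local_domain:
        return ("MDA", user)      # deliver into the local user's mailbox
    return ("MTA", domain)        # forward to the remote server's MTA
```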
Table 3.2: POP2 Commands

Command    Syntax               Function
Hello      HELO user password   Identify user account
Folder     FOLD mail-folder     Select mail folder
Read       READ [n]             Read mail, optionally starting with message n
Retrieve   RETR                 Retrieve message
Save       ACKS                 Acknowledge and save
Delete     ACKD                 Acknowledge and delete
Failed     NACK                 Negative acknowledgement
Quit       QUIT                 End the POP2 session
The commands for POP3 are completely different from the commands used for POP2. Table 3.3 shows the set of POP3 commands defined in RFC 1725.
Table 3.3: POP3 Commands

Command          Function
USER username    The user's account name
PASS password    The user's password
STAT             Display the number of unread messages/bytes
RETR n           Retrieve message number n
DELE n           Delete message number n
LAST             Display the number of the last message accessed
LIST [n]         Display the size of message n or of all messages
RSET             Undelete all messages; reset message number to 1
TOP n l          Print the headers and l lines of message n
NOOP             Do nothing
QUIT             End the POP3 session.
Example - receiving e-mail using POP3 protocol
/**
* POP3Session - Class for checking e-mail via POP3 protocol.
*/
import java.io.*;
import java.net.*;
import java.util.*;
public class POP3Session extends Object
{
/** 15 sec. socket read timeout */
public static final int SOCKET_READ_TIMEOUT = 15*1000;
protected Socket pop3Socket;
protected BufferedReader in;
protected PrintWriter out;
private String host;
private int port;
private String userName;
private String password;
/**
* Creates new POP3 session by given POP3 host, username and password.
* Assumes POP3 port is 110 (default for POP3 service).
*/
public POP3Session(String host, String userName, String password)
{
this(host, 110, userName, password);
}
/**
* Creates new POP3 session by given POP3 host and port, username and password.
*/
public POP3Session(String host, int port, String userName, String password)
{
this.host = host;
this.port = port;
this.userName = userName;
this.password = password;
}
/**
* Throws exception if given server response if negative. According to POP3
* protocol, positive responses start with a '+' and negative start with '-'.
*/
protected void checkForError(String response)
throws IOException
{
if (response.charAt(0) != '+')
throw new IOException(response);
}
/**
* @return the current number of messages using the POP3 STAT command.
*/
public int getMessageCount()
throws IOException
{
// Send STAT command
String response = doCommand("STAT");
// The format of the response is +OK msg_count size_in_bytes
// We take the substring from offset 4 (the start of the msg_count) and
// go up to the first space, then convert that string to a number.
try {
String countStr = response.substring(4, response.indexOf(' ', 4));
int count = Integer.parseInt(countStr);
return count;
} catch (Exception e) {
throw new IOException("Invalid response - " + response);
}
}
/**
* Get headers returns a list of message numbers along with some sizing
* information, and possibly other information depending on the server.
*/
public String[] getHeaders()
throws IOException
{
doCommand("LIST");
return getMultilineResponse();
}
/**
* Gets header returns the message number and message size for a particular
* message number. It may also contain other information.
*/
public String getHeader(String messageId)
throws IOException
{
String response = doCommand("LIST " + messageId);
return response;
}
/**
* Retrieves the entire text of a message using the POP3 RETR command.
*/
public String getMessage(String messageId)
throws IOException
{
doCommand("RETR " + messageId);
String[] messageLines = getMultilineResponse();
StringBuffer message = new StringBuffer();
for (int i=0; i<messageLines.length; i++) {
message.append(messageLines[i]);
message.append("\n");
}
return new String(message);
}
/**
* Retrieves the first <linecount> lines of a message using the POP3 TOP
* command. Note: this command may not be available on all servers. If
* it isn't available, you'll get an exception.
*/
public String[] getMessageHead(String messageId, int lineCount)
throws IOException
{
doCommand("TOP " + messageId + " " + lineCount);
return getMultilineResponse();
}
/**
* Deletes a particular message with DELE command.
*/
public void deleteMessage(String messageId)
throws IOException
{
doCommand("DELE " + messageId);
}
/**
* Initiates a graceful exit by sending QUIT command.
*/
public void quit()
throws IOException
{
doCommand("QUIT");
}
/**
* Connects to the POP3 server and logs on it
* with the USER and PASS commands.
*/
public void connectAndAuthenticate()
throws IOException
{
// Make the connection
pop3Socket = new Socket(host, port);
pop3Socket.setSoTimeout(SOCKET_READ_TIMEOUT);
in = new BufferedReader(new InputStreamReader(pop3Socket.getInputStream()));
out = new PrintWriter(new OutputStreamWriter(pop3Socket.getOutputStream()));
// Receive the welcome message
String response = in.readLine();
checkForError(response);
// Send a USER command to authenticate
doCommand("USER " + userName);
// Send a PASS command to finish authentication
doCommand("PASS " + password);
}
/**
* Closes down the connection to POP3 server (if open).
* Should be called if an exception is raised during the POP3 session.
*/
public void close()
{
try {
in.close();
out.close();
pop3Socket.close();
} catch (Exception ex) {
// Ignore the exception. Probably the socket is not open.
}
}
/**
* Sends a POP3 command and retrieves the response. If the response is
* negative (begins with '-'), throws an IOException with received response.
*/
protected String doCommand(String command)
throws IOException
{
out.println(command);
out.flush();
String response = in.readLine();
checkForError(response);
return response;
}
/**
* Retrieves a multi-line POP3 response. If a line contains "." by itself,
* it is the end of the response. If a line starts with a ".", it should
* really have two "."'s. We strip off the leading ".". If a line does not
* start with ".", there should be at least one line more.
*/
protected String[] getMultilineResponse()
throws IOException
{
ArrayList lines = new ArrayList();
while (true) {
String line = in.readLine();
if (line == null) {
// Server closed connection
throw new IOException("Server unexpectedly closed the connection.");
}
if (line.equals(".")) {
// No more lines in the server response
break;
}
if ((line.length() > 0) && (line.charAt(0) == '.')) {
// The line starts with a "." - strip it off.
line = line.substring(1);
}
// Add read line to the list of lines
lines.add(line);
}
String response[] = new String[lines.size()];
lines.toArray(response);
return response;
}
}
/**
* POP3Session example. Receives email using POP3 protocol.
*/
// POP3Session is in the same (default) package, so no import is needed.
import java.util.StringTokenizer;
public class POP3Client
{
public static void main(String[] args)
{
POP3Session pop3 = new POP3Session("pop.mycompany.com", "username", "password");
try {
System.out.println("Connecting to POP3 server...");
pop3.connectAndAuthenticate();
System.out.println("Connected to POP3 server.");
int messageCount = pop3.getMessageCount();
System.out.println("\nMessages waiting on POP3 server : " + messageCount);
String[] messages = pop3.getHeaders();
for (int i=0; i<messages.length; i++) {
StringTokenizer messageTokens = new StringTokenizer(messages[i]);
String messageId = messageTokens.nextToken();
String messageSize = messageTokens.nextToken();
String messageBody = pop3.getMessage(messageId);
System.out.println(
"\n-------------------- message " + messageId +
", size=" + messageSize + " --------------------");
System.out.print(messageBody);
System.out.println("-------------------- end of message " +
messageId + " --------------------");
}
} catch (Exception e) {
pop3.close();
System.out.println("Can not receive e-mail!");
e.printStackTrace();
}
}
}
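The termination and dot-stuffing rules that getMultilineResponse implements above are worth seeing in isolation. Here is the same logic as a short Python sketch (the function name is mine): a lone "." ends the response, and a leading "." on any other line is transparency padding that must be stripped.

```python
def decode_multiline(lines):
    """Decode the body of a POP3 multi-line response, as in the Java
    getMultilineResponse above: a line containing only "." terminates the
    response, and any other line beginning with "." has that byte-stuffed
    dot removed before the line is kept."""
    out = []
    for line in lines:
        if line == ".":
            return out               # end of the server response
        if line.startswith("."):
            line = line[1:]          # strip the transparency dot
        out.append(line)
    # the server closed the connection before sending the terminating dot
    raise ValueError("connection closed before terminating '.'")
```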
COMPARISON OF IMAP AND POP
Looking at IMAP and POP, the main difference, as far as we are concerned here, is the way in which IMAP or POP controls our e-mail inbox. When we use IMAP, we are accessing our inbox on the U of M's central mail server. IMAP does not actually move messages onto our computer; we can think of an e-mail program using IMAP as a window onto our messages on the server. Although the messages appear on our computer while we work with them, they remain on the central mail server. POP does the opposite. Instead of just showing us what is in our inbox on the U's mail server, it checks the server for new messages, downloads all the new messages in our inbox onto our computer, and then deletes them from the server. This means that every time we use POP to view our new messages, they are no longer on the central mail server. The figure below illustrates these concepts.
POP client-server diagram, office computer retrieves new mail, home computer then sees none
POP
POP is the oldest and most recognizable Internet email protocol. Its current widespread implementation is POP3. POP is a simple protocol to configure, operate and maintain. When POP was first designed, the cost of constantly staying online was very high. Because of this, POP was built around the offline mail delivery model: the end-user connects to an email server, downloads messages, disconnects from the server, then reads email while offline. In other words, POP was designed to collect mail for a single email client. POP messages are stored on the mail server until downloaded to the client; they are then stored on the client machine and deleted from the server. The client can contain multiple folders for organizing email, and filters can place mail into specific folders during the download process. The user can mark mail with flags such as read, unread and urgent. A change to the POP standard includes the option to leave email on the server after downloading it. This enables a user to download the same mail using multiple clients on more than one computer. However, there are no server-side file manipulation capabilities, such as marking mail as read or unread, and there are no facilities for creating server-side directories. Instead, leaving email on the server allows each client to download the same messages one time.
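The offline model, and the duplicate downloads that the leave-mail-on-server option causes across several clients, can be illustrated with a toy simulation (Python; all names are mine):

```python
def pop_download(server_messages, client_store, leave_on_server):
    """One toy POP client pass: download everything in the maildrop, then
    either delete it from the server (the classic POP model) or leave it
    there (the leave-mail-on-server option, which makes every other client
    download the same messages again)."""
    client_store.extend(server_messages)
    if not leave_on_server:
        del server_messages[:]    # classic POP: server copy is removed
    return client_store
```

With leave_on_server=True every client fetches the same messages again, which is exactly the bandwidth and disk-space cost described under the drawbacks below.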
POP Benefits
Local Storage - When not connected, the user can still access and read downloaded email.
Server Saving - POP frees server disk space because it downloads emails and attachments then deletes them from the server.
Legacy Systems - For people with older systems, POP may be the only choice.
IMAP is mostly available only on recent email clients, many of which cannot run on older machines.
POP Drawbacks
Single Computer and Client - Despite the "leave-mail-on-server" enhancements of newer POP servers and clients, POP is primarily designed for use with a single email client on a single computer. When implemented, the "leave-mail-on-server" feature forces the downloading of the same emails multiple times, eating bandwidth, server resources and client disk space on multiple machines.
Conclusion
POP and IMAP both offer viable email capabilities. POP is designed to be used by one client on one computer. IMAP is client- and computer-independent, with each client seeing the same information for every IMAP account. POP stores mail on the client computer; IMAP stores mail on the server and caches it on the computer. POP clients have facilities for organizing mail into client-side folders; IMAP folders can be on the server or client side. POP sends messages one way, from the server to the client; IMAP can copy and move messages back and forth between mailboxes on multiple accounts as well as between servers and clients. POP allows one user to connect to one mailbox. IMAP supports both private and public folders, and each public folder can have either unique or shared status flags for its messages.
OK, so I've been using Eclipse more and more. One thing I missed about it while using ASUnit is the Create Classes command that you can run from the Flash IDE. This recursively creates AllTests classes in each directory of your package, down to the class you are writing a test for, and creates a TestCase class for the specified class. Using FAME, you have to write all of these yourself. I did this once, and that was enough. Then I created templates to do it for me after this:
AllTests:
import com.asunit.framework.*;

class AllTests extends TestSuite {
	private var className:String = "AllTests";

	public function AllTests() {
		super();
		addTest(new ${package}.AllTests());
		// and/or add specific tests here
	}
}
TestCase:
import ${package}.*;
import com.asunit.framework.*;

class ${package}.${className} extends TestCase {
	private var className:String = "${package}.${className}";
	private var instance:${className};

	public function setUp():Void {
		instance = new ${className}();
	}

	public function tearDown():Void {
		delete instance;
	}

	public function testInstantiated():Void {
		assertTrue("${className} instantiated", instance instanceof ${className});
	}
}
Just go to Windows/Preferences/ActionScript 2/Templates. Click “New…”, add name and description and paste the code.
Now, in an .as file, type “TestCase” or “AllTests” and hit control-space. You’ll be prompted for the needed info to finish the class.
I'll add these to the default templates that ship with ASDT. 🙂 Sure, it shows a bias towards AsUnit (over the others), but AsUnit seems to be the most common.
Nice! They are my very first Eclipse templates, so feel free to tweak them if they aren't quite right.
wow Keith! I didn't know about templates in Eclipse, really useful stuff. Thanks for sharing.
Hey Keith,
are ${className} and ${package} standard template variables? All I can find is ${enclosing_package} and so on.
This means that I need to place my classes under the project root and not in e.g. source/classes/com/company/… or my package will look like source.classes.com.company…. when I use the template in a file.
classname and package are just variables I made up, just to tell you what to put there. You have to manually type it in, but you just do it once and it replaces it throughout the file.
Hey Keith,
I personally think that it is better to use the variables Eclipse provides. It can save you a lot of time.
Try ${enclosing_type} instead of className for instance. You can find all vars on the bottom left of the template edit screen.
thanks Christophe. just updated them.
The GPS Toolkit
The GPS Toolkit (GPSTk) is coded entirely in ANSI C++. It is platform-independent and has been built and tested on Linux, Solaris and Microsoft Windows. Everything needed to write standalone, console-based programs is included, along with several complete applications.
The design is highly object-oriented. Everything is contained in the namespace gpstk::. For example, reading and writing a RINEX observation file is as simple as this:
// open, read and re-write a RINEX file
using namespace gpstk;

// input file stream
RinexObsStream rin(inputfile);
// output file stream
RinexObsStream rout(outputfile, ios::out|ios::trunc);

DayTime nextTime;       // Date/time object
RinexObsHeader head;    // RINEX header object
RinexObsData data;      // RINEX data object

// read the RINEX header
rin >> head;
rout.header = rin.header;
rout << rout.header;

// loop over all data epochs
while (rin >> data) {
    nextTime = data.time;
    // change obs data
    rout << data;
}
The core capability of the library is built around RINEX file I/O. It also includes a complete date and time class to manipulate time tags in GPS and many other formats.
In addition to the RINEX I/O, GPSTk includes classes for handling geodetic coordinates (latitude and longitude) and GPS ephemeris computations. There also is a complete template-based Matrix and Vector package. And, of course, there are GPS positioning and navigation algorithms, including several tropospheric models.
Finally, several standalone programs are included in the distribution. Included are utilities to validate or modify RINEX files, a summary program, a utility to remove or modify observations, a phase discontinuity corrector and a program to compute standard errors and corrections, such as the total electron content (TEC) of the ionosphere along the signal path.
The GPS Toolkit is available for download as a tarball (see the on-line Resources section). To build the toolkit you need to use jam, a replacement for make, and Doxygen, a source code documentation generator. The entire build sequence looks like the following:
tar xvzf gpstk-1.0.tar.gz
cd gpstk
jam
doxygen
su
jam -sPREFIX=/usr install
This sequence builds and installs the GPSTk dynamic and shared libraries, as well as the header files, in the /usr tree. In addition, a doc subdirectory is created, containing HTML-based documentation of the GPSTk library.
Below are three example applications of the GPSTk created at ARL:UT. The second example actually is distributed as an application with the GPSTk.
Position solutions generated by the GPSTk provide improved precision and robustness compared to those generated by a GPS receiver. Figure 2 illustrates the benefits; each axis extends from –10 to 10 meters.
Plot A shows position computations and how they vary along the East and North directions. Such results are representative of solutions created with a consumer-grade GPS receiver. Plot B shows how the position estimate improves when atmospheric delays are accounted for. Direct processing not only improves precision, but it also increases robustness. Plot C shows the effect of a faulty satellite. The faulty satellite is detected and removed using the GPSTk in Plot D.
An important problem in GPS data processing involves discontinuities in the carrier phase. Before phase data can be used, cycle slips must be found and fixed. The GPSTk distribution includes an application called a discontinuity corrector that does just that. This feature is available in the library as well.
The GPSTk discontinuity corrector works by forming two useful linear combinations of the dual-frequency phase data, called the wide-lane phase bias and the geometry-free phase. An example of these for normal data is shown in Figure 3. The wide-lane bias (red) is noisy but has a constant average. The geometry-free phase does not depend on the receiver-satellite geometry, but it depends strongly on the ionospheric delay. In fact, it is proportional to that delay. Normally, the ionosphere is quiet and smooth, but at times it can be active and rough; then this quantity can vary wildly. The geometry-free phase and the wide-lane noise increase at both ends of the dataset, because the satellite is rising or setting there. Consequently, the signal must travel through more atmosphere.
Figure 4. Slip detected (blue circle) in the wide-lane data (green) where test quantity (dark blue) is larger than limit (magenta).
The discontinuity corrector works by first looking for slips in the wide-lane phase bias; Figure 4 illustrates a case in which it found one. When a wide-lane slip is found, the code turns to the geometry-free phase and looks for the slip there. To estimate the size of the slip, low-order polynomials are fit to the data on each side of the slip, extrapolated to the point where the slip occurred, and then differenced.
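The article does not give the formulas, but the two combinations are standard in GNSS processing: the geometry-free phase is simply L1 minus L2 (in length units), and the wide-lane phase bias is the Melbourne-Wübbena combination of wide-lane phase and narrow-lane pseudorange. The sketch below (Python; function names are mine) forms both and flags a discontinuity the way Figure 4 does, by comparing an epoch-to-epoch jump against a limit.

```python
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 carrier frequencies, Hz

def geometry_free(l1, l2):
    # geometry cancels; the result is proportional to the ionospheric delay
    return l1 - l2

def widelane_bias(l1, l2, p1, p2):
    # Melbourne-Wuebbena combination: wide-lane phase minus narrow-lane code
    # (phases l1, l2 and pseudoranges p1, p2 all in metres)
    wl = (F1 * l1 - F2 * l2) / (F1 - F2)
    nl = (F1 * p1 + F2 * p2) / (F1 + F2)
    return wl - nl

def find_slip(series, limit):
    # flag the first epoch where the bias jumps by more than `limit`
    for i in range(1, len(series)):
        if abs(series[i] - series[i - 1]) > limit:
            return i
    return None
```

Fitting low-order polynomials on each side of the flagged epoch and differencing them, as described above, would then estimate the slip size.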
Hopefully this is my LAST QUESTION!! D:< Anyway, I saw some code on lloydgoodall.com for an LWJGL FPCamera. It had SEVERAL errors, but I finally got to the (hopefully) last one: "Keyboard must be created before you can query key state"
My code:
import org.lwjgl.Sys;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.GL11;
import org.lwjgl.input.*;
import org.lwjgl.util.vector.Vector3f;
import org.lwjgl.input.Keyboard;

//First Person Camera Controller
public class FPCameraController {

    //3d vector to store the camera's position in
    private Vector3f position = null;
    //the rotation around the Y axis of the camera
    private float yaw = 0.0f;
    //the rotation around the X axis of the camera
    private float pitch = 0.0f;

    / to its current rotation (yaw)
    public void walkBackwards(float distance) {
        position.x += distance * (float)Math.sin(Math.toRadians(yaw));
        position.z -= distance * (float)Math.cos(Math.toRadians(yaw));
    }

    /)); }

    //translates and rotates the matrix so that it looks through the camera
    //this does basically what gluLookAt() does
    public void lookThrough() {
        //rotate the pitch around the X axis
        GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
        //rotate the yaw around the Y axis
        GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
        //translate to the position vector's location
        GL11.glTranslatef(position.x, position.y, position.z);
    }

    public static void main(String[] args) {
        FPCameraController camera = new FPCameraController(0, 0, 0);
        float dx = 0.0f;
        float dy = 0.0f;
        float dt = 0.0f;          //length of frame
        float lastTime = 0.0f;    //when the last frame was
        float time = 0.0f;
        float mouseSensitivity = 0.05f;
        float movementSpeed = 10.0f; //move 10 units per second

        //hide the mouse
        Mouse.setGrabbed(true);

        // keep looping till the display window is closed or the ESC key is down
        /* while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { */
        time = Sys.getTime();
        dt = (time - lastTime) / 1000.0f;
        lastTime = time;

        //distance in mouse movement from the last getDX() call.
        dx = Mouse.getDX();
        //distance in mouse movement from the last getDY() call.
        dy = Mouse.getDY();

        //control camera yaw from x movement of the mouse
        camera.yaw(dx * mouseSensitivity);
        //control camera pitch from y movement of the mouse
        camera.pitch(dy * mouseSensitivity);

        //when passing in the distance to move
        //we multiply movementSpeed by dt; this is a time scale,
        //so on a slow frame you move more than on a fast frame,
        //and on a slow computer you move just as fast as on a fast computer
        if (Keyboard.isKeyDown(Keyboard.KEY_W)) { //move forward
            camera.walkForward(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_S)) { //move backwards
            camera.walkBackwards(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_A)) { //strafe left
            camera.strafeLeft(movementSpeed * dt);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_D)) { //strafe right
            camera.strafeRight(movementSpeed * dt);
        }

        //set the modelview matrix back to the identity
        GL11.glLoadIdentity();
        //look through the camera before you draw anything
        camera.lookThrough();
        //you would draw your scene here.
        //draw the buffer to the screen
        Display.update();
    }
}
Uhh... please help : D
from SOAPpy import WSDL

url = 'yoururl'
# just use the path to the wsdl of your choice
wsdlObject = WSDL.Proxy(url + '?wsdl')

print 'Available methods:'
for method in wsdlObject.methods.keys():
    print method
    ci = wsdlObject.methods[method]
    # you can also use ci.inparams
    for param in ci.outparams:
        # list of the function and type
        # depending on the wsdl...
        print param.name.ljust(20), param.type
If you want SOAPpy to authenticate, you should simply put the url like:
''
etc. SOAPpy internally uses urllib. It should be able to handle that.
However SOAPpy cannot authenticate method calls. You can make it authentication aware by reading more about it here:
Also, and only manual testing revealed this: SOAPpy (version 0.12) does not handle SOAP attachments, so you can't use it as it is with Jasper. Had to test it to find out. ZSI does.
46 terms
Zach9208
Economics Last Exam
The addition of government to the circular-flow model illustrates that government:
-purchases resources in the resource market.
-provides services to businesses and households.
-purchases goods in the product market.
-does all of these.
The opportunity cost of borrowing funds to finance government deficits is:
greatest when the economy is doing well.
The marginal tax rate is:
the percentage of an increment of income that is paid in taxes.
One difference between sales and excise taxes is that:
sales taxes are calculated as a percentage of the price paid, while excise taxes are levied on a per-unit basis.
The Federal gasoline tax is assessed on a per-gallon basis and the proceeds are used for highway maintenance and improvements. This tax is consistent with the:
benefits-received principle of taxation.
In 2006, the top 1 percent of all taxpayers in the United States paid what percent of the Federal income tax?
39.1 percent
In which of the above market situations will the largest portion of an excise tax of a specified amount per unit of output be borne by producers?
3
Which of the following taxes is most likely to be shifted?
a general sales tax
The efficiency loss of a tax is:
the net value of sacrificed output caused by the tax.
(Consider This) Proponents of a value-added tax (VAT) claim that a VAT:
penalizes consumption and encourages savings and investment.
Where there is asymmetric information between buyers and sellers.
markets can produce inefficient outcomes.
Upon buying a car with airbags, Indy begins to drive recklessly. This is an example of the:
moral hazard problem.
An economic analysis of the relationship between proposed legislation affecting major employers in each state and the voting patterns of Senators and representatives in Congress on that legislation would fit within the subcategory of economics called:
public choice theory.
Answer the next question on the basis of this table showing the marginal benefit a particular public project will provide to each of the three members of a community. No vote trading is allowed.
If the tax cost of this proposed project is $600 per person, a majority vote will:
pass this project and resources will be overallocated to it.
The median-voter model implies that:
many people will be dissatisfied with the size of government in the economy.
Answer the next question(s) on the basis of the following table that shows the total costs and total benefits facing a city of five different potential baseball stadiums of increasing size. All figures are in millions of dollars.
Refer to the above table. Based on cost-benefit analysis, the city should:
build stadium D.
"Vote for my special local project and I will vote for yours." This political technique:
often accompanies pork-barrel politics.
When congressional representatives vote on an appropriations bill, they must vote yea or nay, taking the bad with the good. This statement best reflects the:
idea of limited and bundled choice.
(Consider This) Subsidies for mohair production illustrate:
why special-interest effects are often characterized by concentrated benefits and diffuse costs.
(Last Word) In their effort to provide disaster relief after Hurricane Katrina, the Federal Emergency Management Agency (FEMA) made payouts on as many as 900,000 claims with invalid Social Security numbers or false names and addresses. This example illustrates:
bureaucracy inefficiency.
Which of the following is least likely to violate the Sherman Act or the Clayton Act?
Competitive firms F and G independently charge lower prices to frequent customers than to occasional customers.
Tying agreements:
obligate a purchaser of product X to also buy product Y from the same seller.
The antitrust laws are based on the:
idea that competition leads to greater economic efficiency than does monopoly.
The "rule of reason" indicated that:
only contracts and combinations that unreasonably restrain trade violate the antitrust laws.
The Alcoa case:
supported the structuralist approach to antitrust.
Which one of the following is most likely to increase the Herfindahl index of a particular industry?
a horizontal merger
Critics of the regulation of natural monopolies contend that:
the industry may "capture" or control the regulatory commission.
Overall, economists believe that deregulation of industries formerly subjected to industrial regulation:
has produced large net benefits for consumers and society.
Defenders of social regulation point out that:
critics who stress the high administrative and compliance costs of social regulation underestimate the social benefits that the regulations produce.
(Consider This) The Consider This box "Of Catfish and Art (and Other Things in Common)" lists examples of recent antitrust cases involving:
price fixing.
The United States' most important trading partner quantitatively is:
Canada
Answer the next question(s) on the basis of the following production possibilities data for Gamma and Sigma. All data are in tons.
Gamma's production possibilities:
On the basis of the above information:
Gamma should export tea to Sigma and Sigma should export pots to Gamma.
Refer to the above diagrams. The solid lines are production possibilities curves; the dashed lines are trading possibilities curves. The data contained in the production possibilities curves are based on the assumption of:
constant costs.
Answer the next question(s) on the basis of the following production possibilities data for two countries, Alpha and Beta, which have populations of equal size.
Refer to the above data. The domestic opportunity cost of:
producing a ton of chips in Beta is 6 tons of fish.
Refer to the above diagram pertaining to two nations and a specific product. Lines FC and GD are:
import demand curves for two countries.
Country A limits other nation's exports to Country A to 1,000 tons of coal annually. This is an example of a(n):
import quota.
Suppose the United States eliminates high tariffs on German bicycles. As a result, we would expect:
employment to decrease in the U.S. bicycle industry.
Studies show that:
costs of trade barriers exceed their benefits, creating an efficiency loss for society.
As it relates to international trade, dumping:
is the practice of selling goods in a foreign market at less than cost.
(Consider This) According to Dallas Federal Reserve economist W. Michael Cox, taken to its extreme, the logic of "buying American" implies that:
people should only consume what they can produce themselves.
Government's role in the circular flow model shows how it can alter distribution of income, reallocate resources, and change the level of economic activity in the economy.
True
Local governments rely heavily of which of the following taxes?
Property tax
A tax that has its average rate decline as income increases is called a
regressive tax.
In general,
the more inelastic the demand, the more the sales tax is paid by the consumers.
If the elasticity of demand is 1 and the elasticity of supply is 0.45, the tax will be paid mostly by consumers.
False
Before the imposition of a tax, the demand curve equation was P = 10 - 0.2Q and the supply curve equation was P = 1 + 0.01Q. Quantity is in units and price is in dollars.
After a $3 per unit tax is imposed on buyers, what is the total amount (including the tax) that buyers will pay for the good?
$6
| https://quizlet.com/17304248/economics-last-exam-flash-cards/ | CC-MAIN-2017-22 | en | refinedweb |
[ ]
james strachan commented on AMQ-340:
------------------------------------
Its been a while - I've kinda forgotten :)
I think the idea was to allow different 'roots'. By default in JMS there is one global topic
namespace where > will receive every message. In WS-Notification you can have many 'root's.
e.g. its a bit like having a topic which is owned by a particular broker.
So I guess its more about having optional 'owners' of the topic namespace - so it could be
a global foo.bar or could be foo.bar within the 'cheese' domain which might be owned by a
particular broker
> allow topics in particular but also queues to have a 'namespace URI' like WS-Notification
> -----------------------------------------------------------------------------------------
>
> Key: AMQ-340
> URL:
> Project: ActiveMQ
> Type: New Feature
> Reporter: james strachan
> Fix For: 4.1
>
>
> This would allow a real clean mapping from WS-N topics and ActiveMQ at the protocol level.
We could use the namespace as a level of indirection to map to a broker, a cluster of brokers
or even a particular area of a network etc. The namespace could be a broker's name too.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/activemq-dev/200607.mbox/%3C2120126.1151924271507.JavaMail.jira@brutus%3E | CC-MAIN-2017-22 | en | refinedweb |
Get-NetDnsTransitionMonitoring
Syntax
Detailed Description
The Get-NetDnsTransitionMonitoring cmdlet retrieves operational statistics for DNS64, including the number of successful queries and the number of failed queries.
The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object.
The MSFT_NetDnsTransitionMonitoring object contains DNS64 monitoring information.
Examples
EXAMPLE 1
PS C:\> Get-NetDnsTransitionMonitoring
This example retrieves the DNS64 monitoring information.
Related topics | https://technet.microsoft.com/en-us/library/jj613674(v=wps.620).aspx | CC-MAIN-2017-22 | en | refinedweb |
Logging is one of the important aspects of many software systems. It can be used, for example, to debug problems, audit activities, or monitor the health and performance of a running system.
The present article describes an implementation of a lossless log4net appender that sends complete logging information to a centralized data service. In the first stage, structured data having an identical format from the various parts of a distributed system are saved without making any assumption as to how the data are going to be used. The second stage involves client software making use of the data by taking advantage of the intelligent query API (see here) and the real-time notification framework of the data service.
It also introduces one of the real-time notification APIs of the logging data service, with which monitoring clients or agents can be notified of data changes of interest and respond to the received events, according to the needs of the system, in real time.
log4net is one of the well-established logging frameworks that .NET applications can adopt. It has a quite flexible configuration system that one can use to inject loggers into selected parts of a system (here is a tutorial). It is also easy to extend. The present article takes advantage of the said extensibility to build a service-based relational logging data system.
The structured log items produced by log4net contain rich information about the execution environment in which each log item is produced. For example, log4net can use the .NET diagnostic facilities to extract a complete set of stack frames at the point where the logging is done. This information is important in various debugging and profiling scenarios.
The default appenders included in log4net, and most of the openly available custom ones, follow a traditional strategy of outputting only a particular view (or layout) of the said data, text-oriented most of the time, so that much important meta-information is projected out in one way or another before "seeing the light of day". The information lost in projection is not available to post-logging or real-time subscription-based analysis. In addition, a later change in projection strategy could render earlier data inconsistent with later data, in format and in information content. This could cause problems for automated analysis or processing tools.
Web applications nowadays are intrinsically multi-user, multi-thread, multi-process, and most likely distributed. They progress in parallel at the same time. The asynchronous programming pattern of the recent .NET framework makes the situation "worse", since an async method can contain sections of code that are executed on different, seemingly randomly selected threads of a pool, scheduled by the framework. An executing job consists of a logically connected sequence of operations, forming a line of execution. These logically sequential lines of execution will generally be referred to as "logical threads" in the following. A centralized log for this kind of system contains interleaving entries from various parallel logical threads. As the number of threads increases, they quickly become too complex to identify and disentangle by traditional means (e.g., relying on our eyes and brains) without accurate, most likely structured, querying tools.
The following figure is a snapshot of a file-based log for just one user ("user-a") in just one range-based parallel download session, created by a capable download client for a "file" of around 100 MB saved in a data service. The download is handled in the web application using the .NET async programming framework. Here the number in [...] is the managed thread id and the number following # is the sequence number of a data block. A logical thread can be identified as the set of records having consecutively increasing sequence numbers. As can be seen from the figure, they are mixed in a seemingly random way.
Figure: Interleaving log records where physical threads and logical threads are mixed.
For example, the sequence #1,#2,#3 ... of data blocks are handled by managed threads [24],[13],[8] ... despite the fact that all of them (including other sequences or logical threads) are processed in a loop of a async method.
The present solution has three main features: it logs losslessly, preserving the full structured meta-information of each event; it makes logical threads in a distributed, asynchronous system identifiable; and it supports filtered, real-time monitoring of the log stream.
Real-time monitoring of a system has many advantages, especially for events that are considered critical. Critical events come in many kinds. There are well-understood ones, expected at design time, that are handled by the system itself, and exceptional ones that are either unexpected, less understood, or too complicated to handle by the system itself at the time the system is built. The latter need to be either thrown away or, in a better-designed system, delegated to an extensible end-point, like a logging system, that may or may not have external handlers.
There are other real-time exception-monitoring solutions, like the ElmahR project, for visually monitoring uncaught exceptions of a system. But the present solution is more general: it can monitor not only uncaught exceptions but any log data streams of interest, based on user-provided filter expressions. In addition, it is more extensible due to the programmable API, which makes it possible for monitoring agents to act on received events automatically.
Depending on its usage, logging information could be saved to different data stores. Some of the logging information could be saved to data sets that belong to the main database of the system; other information, like debug output produced during development, could be saved to more temporary stores that can be discarded after the particular debug task is completed. In addition, log4net allows saving the same log items to multiple stores.
What is shown in the following contains only the data sets relevant to the present article. They should be attached to other parts of the data source using a foreign key, namely the AppID property, which points to an entity in a data set containing a collection of applications. For simplicity, the data service included in this article is extended from the simple ASP.NET membership data source described here and here. There is no particular reason for this choice other than that it saves project administration effort by not creating another independent demo data-service project.
The present implementation of the part of the data schema responsible for logging consists of three data sets, namely EventLogs, EventLocations and EventStackFrames. Their dependency on each other is shown schematically in the following:
Figure: New data sets attached to an existing relational data set.
Here an entity in the EventLogs data set represents the main log item. It has a one-to-one relationship with a corresponding entity in the EventLocations data set, which contains the location information of the place (line number) in the code (file name) where the corresponding logging happened. An entity in the EventLocations data set has a one-to-many relationship with a sub-set of entities in the EventStackFrames data set. The said sub-set of entities represents the method invocation path that leads to the position in code at which the log item is produced. This information is sometimes very important to have when debugging problems and/or tuning the performance of the system.
The following are the details of the data schema for each of the above-mentioned sets:
Figure: Data schema for the EventLogs data set.
Figure: Data schema for the EventLocations data set.
Figure: Data schema for the EventStackFrames data set.
The included logging data service is produced according to an extended data schema derived from the above definitions.
It is found that the native log4net log entity type LoggingEvent in log4net.Core namespace does not contain enough meta-information for us to describe the execution flow of a web application. It is extended to include more, which is encoded into the class
internal class LoggingEventWrap
{
public LoggingEvent Evt;
public string webUser;
public string pageUrl;
public string referUrl;
public string requestId;
}
Here Evt is the native log4net log entity and the rest is a minimum set of information that is relevant to a multi-user web application. They are obtained from a set of globally registered value providers for log4net, when it is possible:
string webUserName = GlobalContext.Properties["user"].ToString();
string pageUrl = GlobalContext.Properties["pageUrl"].ToString();
string referUrl = GlobalContext.Properties["referUrl"].ToString();
string requestId = GlobalContext.Properties["requestId"].ToString();
which is registered inside of the Global.asax.cs file of the ASP.NET application
protected void Application_Start()
{
    GlobalContext.Properties["user"] = new HttpContextUserNameProvider();
    ....
}
These providers are defined inside the HttpContextInfoProvider.cs file of the project. For example
public class HttpContextUserNameProvider
{
public override string ToString()
{
HttpContext c = HttpContext.Current;
if (c != null)
{
if (c.User != null && c.User.Identity.IsAuthenticated)
return c.User.Identity.Name;
else
return c.Request != null && c.Request.AnonymousID != null ?
c.Request.AnonymousID : "Request from Unknown Users";
}
else
{
if (Thread.CurrentPrincipal.Identity.IsAuthenticated)
return Thread.CurrentPrincipal.Identity.Name;
else
return "Request from Unknown Users";
}
}
}
The purpose of it is to get a unique id for the current user, authenticated or not so that log records from different users can be identified and separated for whatever purposes. In cases where HttpContext.Current is available, it is used to extract web-related information. For example when the user is authenticated, it returns the user name, otherwise if the web application is configured to enable anonymous identification of un-authenticated visitors, it returns the AnonymousID of the visitor to distinguish him/her from other visitors.
Note: AnonymousID is not automatically enabled for an ASP.NET application; one has to add the following node under <system.web> inside the Web.config file, namely
<anonymousIdentification
enabled="true"
cookieless="UseCookies"
cookieName=".ASPXANONYMOUS"
cookieTimeout="30"
cookiePath="/"
cookieRequireSSL="false"
cookieSlidingExpiration="true"
cookieProtection="All"
/>
HttpContext.Current is not always available in an ASP.NET application. For example, if a request is handled by Web API or SignalR channels, then HttpContext.Current is not available. In that case, the generic property Thread.CurrentPrincipal.Identity is used when the visitor is authenticated. However, it seems that there is no mechanism for identifying anonymous users when HttpContext.Current is not available.
The entity graph to be sent to the data service is built from an instance of the LoggingEventWrap class:
private static EventLog getEntity(LoggingEventWrap evtw)
{
EventLog log = new EventLog();
log.ID = Guid.NewGuid().ToString();
log.AppAgent = evtw.Evt.UserName;
log.AppDomain = evtw.Evt.Domain;
log.AppID = App != null ? App.ID : null;
log.EventLevel = evtw.Evt.Level.Name;
if (evtw.Evt.ExceptionObject != null)
{
log.ExceptionInfo = excpetionToString(evtw.Evt.ExceptionObject);
//it's important to turn this on for delay loaded properties
log.IsExceptionInfoLoaded = true;
}
log.LoggerName = evtw.Evt.LoggerName;
TracedLogMessage tmsg = null;
if (evtw.Evt.MessageObject is TracedLogMessage)
{
tmsg = evtw.Evt.MessageObject as TracedLogMessage;
log.Message_ = tmsg.Msg;
log.CallTrackID = tmsg.ID;
}
else if (evtw.Evt.MessageObject is string)
log.Message_ = evtw.Evt.MessageObject as string;
else
log.Message_ = evtw.Evt.RenderedMessage;
log.ThreadName_ = evtw.Evt.ThreadName;
log.ThreadPrincipal = evtw.Evt.Identity;
log.TimeStamp_ = evtw.Evt.TimeStamp.Ticks;
log.Username = evtw.webUser == null ? evtw.Evt.UserName : evtw.webUser;
log.PageUrl = evtw.pageUrl;
log.ReferringUrl = evtw.referUrl;
if (tmsg == null)
log.RequestID = evtw.requestId;
if (evtw.Evt.Level >= Level.Debug && evtw.Evt.LocationInformation != null &&
_recordStackFrames)
{
log.ChangedEventLocations = new EventLocation[]
{
getLocation(log.ID, evtw.Evt.LocationInformation)
};
}
return log;
}
private static EventLocation getLocation(string id, LocationInfo loc)
{
EventLocation eloc = new EventLocation();
eloc.EventID = id;
eloc.ClassName_ = loc.ClassName;
// 220 is the current FileName_ size; keep the tail of the path and count
// the "..." marker against the 220-character budget so the result fits.
eloc.FileName_ = loc.FileName != null && loc.FileName.Length > 220 ?
"..." + loc.FileName.Substring(loc.FileName.Length - (220 - 3)) :
loc.FileName;
eloc.MethodName_ = loc.MethodName;
eloc.LineNumber = loc.LineNumber;
if (loc.StackFrames != null && loc.StackFrames.Length > 0)
{
List<EventStackFrame> frames = new List<EventStackFrame>();
int frmId = 1;
foreach (var frm in loc.StackFrames)
{
if (_maxStackFramesUp >= 0 && frmId > _maxStackFramesUp)
break;
else if (_userStackFramesOnly && string.IsNullOrEmpty(frm.FileName))
continue;
EventStackFrame efrm = new EventStackFrame();
efrm.EventID = id;
efrm.ID = frmId++;
efrm.ClassName_ = frm.ClassName;
// 220 is the current FileName_ size; keep the tail of the path and count
// the "..." marker against the 220-character budget so the result fits.
efrm.FileName_ = frm.FileName != null && frm.FileName.Length > 220 ?
"..." + frm.FileName.Substring(frm.FileName.Length - (220 - 3)) :
frm.FileName;
string callinfo = frm.Method.Name + "(";
foreach (var p in frm.Method.Parameters)
callinfo += p + ", ";
callinfo = callinfo.TrimEnd(", ".ToCharArray()) + ")";
efrm.MethodInfo = callinfo;
frames.Add(efrm);
}
eloc.ChangedEventStackFrames = frames.ToArray();
}
return eloc;
}
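The file-name handling above keeps the tail of the path, which is the informative part, and marks the cut with "...". For the result to actually fit a fixed-width column, the ellipsis must count against the length budget. A sketch of that invariant (the 220-character width comes from the article's schema; the helper name is mine):

```python
def truncate_keep_tail(s, max_len, ellipsis="..."):
    """Shorten s to at most max_len characters, keeping the tail of the
    string and prefixing an ellipsis to mark the cut."""
    if s is None or len(s) <= max_len:
        return s
    # The ellipsis is part of the budget, so take max_len - len(ellipsis)
    # characters from the end of the string.
    return ellipsis + s[-(max_len - len(ellipsis)):]

# A path far longer than the 220-character column.
path = "C:/" + "x" * 300 + "/Logger.cs"
```

Keeping the tail rather than the head is the right choice here because the end of a path (file name, line context) is what a person debugging actually wants to see.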
As described in a few other related articles, in order to have a set of entities that depend on another entity updated together with the latter entity, the dependents must be put into the
"Changed" + <dependent entity name> + "s"
property of the said entity. In our case, because an EventLog entity has a set of dependent entities (only one here, since the relation is one to one) of type EventLocation, the EventLog entity must have a property named ChangedEventLocations whose type is EventLocation[]. Entities of type EventLocation that depend on the said entity should be put into the ChangedEventLocations property in order for them to be updated at the data service together with the said entity. Likewise, an entity of type EventLocation has a set of dependent entities of type EventStackFrame, so its corresponding dependents should be put into its ChangedEventStackFrames property in order for them to be updated at the data service together with the said entity. The above two methods are used to build such an entity tree, which a client can use to insert the corresponding data graph in just one call to the data service; the service is able to handle primary keys, foreign-key constraints and data duplication (due to the tree structure).
An implementation of a log4net appender can be started either from the IAppender interface or from the AppenderSkeleton class. Both are defined inside the log4net package under the namespace log4net.Appender. We choose the latter here to make things simpler, since some of the standard jobs are already handled inside the AppenderSkeleton class. All that needs to be done is to implement the overridden method Append(LoggingEvent evt) and, optionally, Append(LoggingEvent[] evts).
Simple as it is, there are other concerns that we must consider when dealing with a remote data service. One of them is the fact that calling a remote data service takes more time to complete than performing local IO operations. log4net itself uses the mechanism implemented in the BufferingAppenderSkeleton class, under the log4net.Appender namespace, for such kinds of appenders. It has a few issues in our application scenarios, so we would like to develop our own mechanism.
A means must therefore be devised to hide or, if that is not possible, at least delay such effects as much as possible, so that they appear to have little actual effect in most application scenarios. One such scenario is when the system does not, on average, log enough items per unit of time to exceed the data throughput of the network and the data service, but can have temporary bursts of a large quantity of log items. In this case the present appender can appear to be faster than, or at least as fast as, many of the local log appenders.
To this end, we choose to use an asynchronous mechanism to do the data-service updating in a producer-consumer setting. Its implementation, although quite standard, can be a little involved; however, the .NET framework already provides a ready-to-use one. More specifically, a bounded BlockingCollection instance from the System.Collections.Concurrent namespace can be used to hold the log items when the above-mentioned Append methods are called; the Append side acts as the producer.
private static BlockingCollection<EventLog> EventQueue
{
get
{
return _eventQueue ?? (_eventQueue = new BlockingCollection<EventLog>(_maxQueueLength));
}
}
private static BlockingCollection<EventLog> _eventQueue = null;
It holds a collection of entity graphs of type EventLog to be sent to the data service. Adding items to the collection takes very little time as long as the capacity of the said instance has not been reached.
protected override void Append(LoggingEvent evt)
{
SetupThread();
string webUserName = GlobalContext.Properties["user"].ToString();
string pageUrl = GlobalContext.Properties["pageUrl"].ToString();
string referUrl = GlobalContext.Properties["referUrl"].ToString();
string requestId = GlobalContext.Properties["requestId"].ToString();
var evtw = new LoggingEventWrap {
Evt = evt,
webUser = webUserName,
pageUrl = pageUrl,
referUrl = referUrl,
requestId = requestId
};
if (!_lossy)
EventQueue.Add(getEntity(evtw));
else
EventQueue.TryAdd(getEntity(evtw));
}
Method getEntity is used to build an entity graph to be sent to the data service, as described in the previous sub-section. The addition is handled differently, depending on the value of _lossy specified in the configuration, when there are already _maxQueueLength items waiting to be sent to the data service, which can happen when logging is too fast for the data service to keep up with. When _lossy is false, the method blocks at the EventQueue.Add(getEntity(evtw)) statement until some items have been sent to the data service; otherwise, the new log item is dropped and the method returns immediately.
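The blocking-versus-lossy choice maps onto any bounded queue, not just BlockingCollection. A sketch with Python's `queue.Queue` standing in for the bounded collection (the capacity and function name are illustrative):

```python
import queue

def enqueue(q, item, lossy):
    """Add a log item to a bounded queue: block when full if lossless,
    drop the item if lossy. Returns True when the item was accepted."""
    if not lossy:
        q.put(item)          # blocks the caller until the consumer frees a slot
        return True
    try:
        q.put_nowait(item)   # never blocks; raises Full when at capacity
        return True
    except queue.Full:
        return False         # lossy mode: the item is silently dropped

q = queue.Queue(maxsize=2)
results = [enqueue(q, i, lossy=True) for i in range(3)]
```

The trade-off is the same as in the appender: lossless mode back-pressures the logging call sites, lossy mode keeps them fast at the cost of dropped records during bursts.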
At the same time, a background thread, which acts as the consumer, is created to do the data-service updates in an infinite loop, waiting for the next available items after its creation or after finishing the update of the last block of log items.
private static void EventProcessThread()
{
List<EventLog> block = new List<EventLog>();
while (!stopProcessing)
{
EventLog e;
while (!EventQueue.TryTake(out e, 300))
{
if (block.Count > 0)
sendBlock(block);
if (stopProcessing)
break;
}
block.Add(e);
if (block.Count >= _maxUpdateBlockSize)
sendBlock(block);
}
_thread = null;
}
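The loop combines two flush triggers: a size threshold (_maxUpdateBlockSize) and an idle timeout (the 300 ms TryTake wait), so bursts are batched efficiently while a trailing partial block is not left stranded. The same policy in a synchronous Python sketch (names are mine; the real version runs on a background thread):

```python
import queue

def drain_in_blocks(q, max_block, timeout=0.05):
    """Collect items from q into blocks, flushing on size or on idle timeout."""
    blocks, block = [], []
    while True:
        try:
            block.append(q.get(timeout=timeout))
        except queue.Empty:
            break                      # idle: stop and flush whatever is pending
        if len(block) >= max_block:
            blocks.append(block)       # size trigger: ship a full block
            block = []
    if block:
        blocks.append(block)           # idle trigger: ship the partial block
    return blocks

q = queue.Queue()
for i in range(7):
    q.put(i)
blocks = drain_in_blocks(q, max_block=3)
```

Seven queued items come out as two full blocks of three and one partial block of one, which is exactly the batching behavior the C# loop implements.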
What it does is build a local list, block, of items to be updated. The call EventQueue.TryTake(out e, 300) waits up to 300 milliseconds for the next item; when the wait times out and block is not empty, the pending partial block is sent to the data service via sendBlock. Each item taken is appended to block, and as soon as block holds _maxUpdateBlockSize items it is sent immediately. The loop exits when stopProcessing is set, after flushing any pending items.
The sendBlock method is:
private static void sendBlock(List<EventLog> block)
{
try
{
var svc = GetService();
svc.AddOrUpdateEntities(ClientContext.CreateCopy(),
new EventLogSet(),
block.ToArray());
}
catch (Exception ex)
{
log4net.Util.LogLog.Warn(typeof(DataServiceAppender),
excpetionToString(ex));
}
finally
{
block.Clear();
}
}
Real-time monitoring is achieved by the push notification mechanism and interfaces opened by the data service.
Two kinds of push notification end points are supported by the data service:
SignalR notifications: This channel does not have a service-side filtering mechanism yet, so all logging events are pushed to the client, making it quite noisy and less performant. It is also less reliable, so it is not recommended for our purposes here.
This scalable channel can be used for less reliable, end-user-side visual display of push notifications, for example on web pages. It is not enabled by default; one has to set
<publish name="EventLog" disabled="false" />
under the cud-subscriptions/clients node inside of the Web.config file of the data service. After the setting, the data service will broadcast change notifications on the NotificationHub channel. The following statement sequence is needed to set up the client monitor:
var hubConn = new HubConnection(url);
hubProxy = hubConn.CreateHubProxy("NotificationHub");
hubConn.Start().Wait();
hubProxy.Invoke("JoinGroup", EntitySetType.EventLog.ToString()).Wait();
hubProxy.On<dynamic>("entityChanged", (e) => {
... handle the event ...
});
WCF callbacks: This channel is more reliable and is mainly used for server-to-server notifications inside a given security boundary. It can be accurately controlled inside client software, which makes it more suitable as a machine-to-machine push notification means.
There is one entry point to subscribe to and receive data change notifications for each instance of a type of data source. They are done through an instance of the
<data source name> + "DuplexServiceProxy"
where <data source name> is the name of the data source supporting the kind of logging required, which in the present case is AspNetMember. There can be multiple such proxy instances at the same time.
A notification or callback handler class must implement the IServiceNotificationCallback interface defined inside the Shared library of the logging data service.
[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = false)]
public class CallbackHandler : IServiceNotificationCallback
{
... other members of the class ....
public void EntityChanged(EntitySetType SetType, int Status, string Entity)
{
if ((Status & (int)EntityOpStatus.Added) != 0)
{
switch (SetType)
{
case EntitySetType.EventLog:
{
var ser = new DataContractJsonSerializer(typeof(EventLog));
byte[] bf = Encoding.UTF8.GetBytes(Entity);
MemoryStream strm = new MemoryStream(bf);
strm.Position = 0;
var e = ser.ReadObject(strm) as EventLog;
... handle the entity ...
}
break;
// case for other data sets, if interested...
}
}
}
... other members of the class ....
}
To subscribe to data changes inside the data service, one should execute the following statements somewhere in the application, using the AspNetMember data source as an example:
var _handler = new CallbackHandler();
var _notifier = new InstanceContext(_handler);
_notifier.Closing += ... channel closing event handler
... other _notifier related event handler ...
var svc = new AspNetMemberDuplexServiceProxy(_notifier);
Here CallbackHandler is the class defined above. As the next step, one should construct the service-side filter expression. Setting up service-side filters is the preferred way of filtering push-back events since it can increase performance significantly. Let's suppose that the monitor agent is interested in handling ERROR or FATAL level log events; then one should construct the following filter expression:
var qexpr = new QueryExpresion
{
FilterTks = new List<QToken>()
};
qexpr.FilterTks.Add(new QToken
{
TkName = "EventLevel == ERROR || EventLevel == FATAL"
});
If a developer is not familiar with how such expressions are constructed, he/she can read the introduction in some of our previous articles (see, e.g., here and here). Of course, different monitoring agents may be interested in different aspects of the log events. They must use the corresponding filter with a distinct subscription identity (see the following) to do the subscription. After specifying the filter, the subscription is realized by:
var sub = new SetSubscription
{
EntityType = EntitySetType.EventLog,
EntityFilter = qexpr
};
svc.SubscribeToUpdates(cctx, OwnerID, SubscribeID, new SetSubscription[] { sub });
where cctx is the global instance of type CallContext for the logging data service, OwnerID is the owner identity used to maintain the subscription, and SubscribeID is an ID that the client software needs to keep track of to manage its subscriptions. The last one, sub, specifies the type of data set to be monitored and the corresponding filter expression. If there is more than one data set to be monitored, just construct the corresponding subscription for each of them in the same way; here we are interested in the EventLog data set only. A subscription can be changed or unsubscribed only by its owner, and there can be only one subscription per subscription ID. Therefore, in order not to get into conflict with other subscribers of the system or to change another owner's subscription unintentionally, a GUID value is recommended. If the subscription is global to a particular application, one can use, e.g., the ClientID property of the global instance of CallContext returned by the logging data service after a successful SignInService during initialization of the system (see here) as both the OwnerID and the SubscribeID.
When the subscription is no longer needed, it is recommended to unsubscribe it. The following statements perform the un-subscription:
var svc = new AspNetMemberDuplexServiceProxy();
svc.UnsubscribeToUpdates(cctx, OwnerID, SubscribeID);
One can follow the instructions given here to set up the demo log data service, located under the \Documents\ClientAPI45\ directory. Note that the data service must be run under .NET 4.5.1.
The current version of log4net, 1.2.13, is compiled under .NET 4.0. To be consistent, one should recompile it under the current .NET framework, i.e. .NET 4.5.1, from its source code available here. The newly compiled log4net assembly should be referenced instead of the downloaded one (e.g., from nuget.org). No problems have been found in switching the targeting framework so far.
What if a user's system is still based on .NET 4.0 and he/she would still like to use the logging data service? That is no problem either. What needs to be done is to re-target the "Shared" and "Proxy" projects of the data service to .NET 4.0 and then delete (turn off) the "SUPPORT_ASYNC" conditional compilation flag. Next, he/she should reference the changed projects from his/her main projects, like what is done here.
There are many tutorials on the web about using log4net in a .NET application; for example, here is one on CodeProject. The demo application included in the present article uses a configuration file for log4net, named Web.log4net, that is outside of the Web.config file. The following line should be added to the AssemblyInfo.cs file:
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "Web.log4net", Watch = true)]
To use the current log4net appender, one must initialize it first.
First, make a reference to the appender project from the web application, then add the following lines to the Global.asax.cs file:
Add namespace reference:
using log4net;
using Archymeta.Web.Logging;
Add the ASP.NET parameter providers during application start-up, along with the other initialization steps.
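The registration typically happens in Application_Start. A minimal sketch follows, assuming log4net's standard GlobalContext property mechanism; the property name "user" is chosen here for illustration and is not taken from the article:

```csharp
// Global.asax.cs — sketch only; the property name "user" and the exact
// provider wiring are assumptions.
protected void Application_Start()
{
    // log4net calls ToString() on the stored object at log time, so the
    // provider resolves the current user's name per request.
    log4net.GlobalContext.Properties["user"] = new HttpContextUserNameProvider();
    // ... other initialization steps
}
```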
Initialize calls to the logging data service. Add the following lines after initializing the CallContext for the logging data service
DataServiceAppender.App = App;
DataServiceAppender.ClientContext = ClientContext.CreateCopy();
inside of the Startup.Auth.cs file (see here). Note: if the logging data service is different from the main data service of the application, then the App and ClientContext variables must be initialized separately for the logging data service, using a different name from the main data service.
Tracking a user. In some analysis or view scenarios, the activity of a single user needs to be separated out from the logging records. In this case, the log item must be tagged with user identification information; the HttpContextUserNameProvider is used for that purpose. User identification information can easily be extracted from authenticated users, as shown above. However, this is not always possible for unauthenticated ones. There is a cookie-based anonymous user ID for ASP.NET applications that can be used to identify a user, at least for a certain time span. However, such information is not yet available for more recent additions to the ASP.NET framework, like the Web API channel or the SignalR channel.
For a traditional ASP.NET application, the anonymous user ID is not enabled by default. One must add the following node
<anonymousIdentification
enabled="true"
cookieless="UseCookies"
cookieName=".ASPXANONYMOUS"
cookieTimeout="30"
cookiePath="/"
cookieRequireSSL="false"
cookieSlidingExpiration="true"
cookieProtection="All"
/>
under the <system.web> node of the Web.config file. Of course, most of the parameters provided should be modified to suit the needs of a particular application. The anonymous ID of an unauthenticated user is also not generated automatically. The following handler should be added to the Global.asax.cs file:
public void AnonymousIdentification_Creating(Object sender, AnonymousIdentificationEventArgs e)
{
e.AnonymousID = Guid.NewGuid().ToString();
}
It is invoked when an unauthenticated user's anonymous ID is either not available or expired. After these settings, Request.AnonymousID will be available in every request of an unauthenticated user.
Tracking a request. As shown in the above snapshot of a log file, one user can make multiple requests in parallel, generating interleaved log records that carry the same user identification information within a certain period of time. The user identification information alone is therefore not sufficient for a more detailed analysis. One must find a way to track an individual request during its life cycle. For a traditional ASP.NET request, this can be done by inserting the following as the first line of the controller method corresponding to the request
public async Task<ActionResult> SomeMethod(...)
{
HttpContext.Items["RequestTraceID"] = Guid.NewGuid().ToString();
...
}
which will be picked up by the HttpRequestTraceIDProvider class:
public class HttpRequestTraceIDProvider
{
public override string ToString()
{
HttpContext c = HttpContext.Current;
if (c != null && c.Request != null)
return c.Items["RequestTraceID"] == null ? null : c.Items["RequestTraceID"].ToString();
else
return null;
}
}
However, HttpContext.Current is available only to regular ASP.NET requests; its Items property will therefore not be there in other kinds of requests, like Web API or SignalR ones. In cases where there is no such mechanism for passing request-bound parameters, it has to be done explicitly. One can use the TracedLogMessage class to wrap the logging message and the request identifier to be passed to the appender in the following way
log.Debug(new TracedLogMessage { ID = "...", Msg = "..." });
where the value of the ID property will be used as the request tracing identifier sent to the logging data service. However, the value "..." assigned to the TracedLogMessage then needs to be passed from the start of the request down to every method involved in the request life cycle in which logging is made.
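As a sketch of that explicit propagation — the Web API controller, method names, and business types shown here are illustrative assumptions, not part of the article's code:

```csharp
// In a Web API controller there is no HttpContext.Items, so the trace ID
// is generated once at the start of the request and handed down explicitly.
public class OrdersController : ApiController
{
    private static readonly ILog log = LogManager.GetLogger(typeof(OrdersController));

    public async Task<IHttpActionResult> Post(Order order)
    {
        string traceId = Guid.NewGuid().ToString();
        log.Debug(new TracedLogMessage { ID = traceId, Msg = "Request received" });
        await ProcessAsync(order, traceId);   // every callee takes the ID
        return Ok();
    }

    private async Task ProcessAsync(Order order, string traceId)
    {
        log.Debug(new TracedLogMessage { ID = traceId, Msg = "Processing order" });
        // ... business logic, passing traceId further down as needed ...
    }
}
```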
Uncaught exceptions. These can be logged by adding the following handler inside of the Global.asax.cs file:
protected void Application_Error(object sender, EventArgs e)
{
HttpException hex = Server.GetLastError() as HttpException;
if (hex != null && hex.InnerException != null)
log.Error("Unhandled exception thrown", hex.InnerException);
}
As mentioned above, for a multi-user ASP.NET application, the log records contain interleaved entries from multiple threads, processes and users. The above-mentioned tagging means can help a reader separate them. Of course, a reader could invent his/her own tagging means that suits his/her application.
The configuration for log4net is located inside of the Web.log4net file, as it is specified in the AssemblyInfo.cs file described above. The configuration for the current appender is provided as an example:
<appender
name="DataServiceLogAppender"
type="Archymeta.Web.Logging.DataServiceAppender, Log4NetServiceAppender">
<maxQueueLength value="5000" />
<maxUpdateBlockSize value="10" />
<recordStackFrames value="true" />
<userStackFramesOnly value="true" />
<maxStackFramesUp value="10" />
<lossy value="false" />
<!-- parameters for the appender as a client of the log data service -->
<loggerServiceUrl value="" />
<maxReceivedMessageSize value="65536000" />
<maxBufferPoolSize value="65536000" />
<maxBufferSize value="65536000" />
<maxArrayLength value="104857600" />
<maxBytesPerRead value="4096" />
<maxDepth value="64" />
<maxNameTableCharCount value="16384" />
<maxStringContentLength value="181920" />
</appender>
The following provides an explanation of various properties involved.
maxQueueLength: The maximum number of waiting log items to be sent to the data service. The default is 5000. The response of the system to a new log item when the number of waiting log items has reached this value depends on the value of lossy: when lossy is true, the new item will be discarded; otherwise, the addition statement will block until the number of waiting log items drops below this value.
maxUpdateBlockSize: The maximum number of log items to accumulate locally before they are sent to the data service in one block. The default is 10. If the appender is not busy, it will periodically send whatever is left in the local event block; the client code does not need to "flush" them.
recordStackFrames: Whether or not to record the method invocation stack frames at the log position. The default is true.
userStackFramesOnly: Whether or not to include only stack frames whose code file is known; this is what "User" means here. The default is true. If the corresponding .pdb file of an assembly is not available at the deployment site, then that assembly and the stack frames that reference it are not considered "User" ones.
maxStackFramesUp: The maximum number of stack frames to include. The default is -1, which means all. Modern frameworks can have quite deep call stacks even for simple user-side calls, and this stack frame information is of very limited use when debugging user-side code. One can use this property to put a limit on it if necessary.
lossy: Whether or not to drop a new log item when the number of waiting log items has reached maxQueueLength. The default is false.
loggerServiceUrl: An optional base URL of an independent log data service. Specify it only when the targeted data service is different from the main data service of the application; if not specified, the main data service of the application is assumed. When it is set to a valid value, the following additional properties of the appender are used to set up the application as a client of the logging data service.
The following is for the WCF client binding:
maxReceivedMessageSize
maxBufferPoolSize
maxBufferSize
The following is for the readerQuotas corresponding to the binding:
maxArrayLength
maxBytesPerRead
maxDepth
maxNameTableCharCount
maxStringContentLength
These properties should be set to be consistent with the configuration parameters of the logging data service.
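On the service side, these values map onto the WCF binding configuration. The following is a hedged sketch of a matching fragment; the binding type (basicHttpBinding) and its placement are assumptions, since the article does not show the data service's own Web.config:

```xml
<!-- Sketch of a consistent binding in the data service's Web.config;
     binding type and name are assumptions. -->
<bindings>
  <basicHttpBinding>
    <binding name="loggingBinding"
             maxReceivedMessageSize="65536000"
             maxBufferPoolSize="65536000"
             maxBufferSize="65536000">
      <readerQuotas maxArrayLength="104857600"
                    maxBytesPerRead="4096"
                    maxDepth="64"
                    maxNameTableCharCount="16384"
                    maxStringContentLength="181920" />
    </binding>
  </basicHttpBinding>
</bindings>
```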
There is no <layout> ... </layout> child node for this appender since it does not involve any "layout" projection of the data.
The included package contains a demo web application that is set up for using the log system.
The start up of the web application is logged. This is done inside of the Startup.cs file in the root directory of the application:
public void Configuration(IAppBuilder app)
{
ConfigureAuth(app);
log.Info("System start up");
}
The log-in and log-off activities are also logged; this is done inside of the ArchymetaMembershipStores.dll assembly under the Libraries folder of the solution. The source code for it is not included in this article for simplicity. It can be found in this article (most likely it has no logging inserted at the time you are reading the present article, but it is easy to do it yourself).
The following error handler is inserted into the Global.asax.cs file
protected void Application_Error(object sender, EventArgs e)
{
var ex = Server.GetLastError() as Exception;
if (!(ex is HttpException))
log.Error("Unhandled exception thrown.", ex);
}
to log unhandled exceptions. The following line is inserted into the About method of the HomeController class
throw new InvalidOperationException("Invalid operation!!!!!!!!!");
to generate an unhandled exception to be recorded by the logger. This exception will be thrown whenever the About page is visited; this is intentional, not a bug!
The following application end handler is inserted into the Global.asax.cs file
protected void Application_End(object sender, EventArgs e)
{
log.Info("The web application stopped.");
}
to record application end events.
After running the demo site, the log data is recorded into the data service. A reader might ask: how do I view the log entries?
The answer is: it depends on how you want to view them in this two-stage logging solution. For a quick view, one can use the built-in query page of the data service. Visit the data service using a browser and find the EventLogs data set. After specifying the sorting conditions at the top, a list of log events will be displayed, like what is shown in the following.
Figure: Log entity raw data query page provided by the data service.
The sets of entities that depend upon a particular entity on the querying page are listed at the bottom left corner of the page after the said entity is selected. In the present case, only certain entities (one, in fact) in the EventLocations data set depend on an entity in the EventLogs data set. The pop-up page for the dependent sub-set can be opened by clicking the corresponding name, "EventLocations" in the present case. Each entity displayed in the pop-up page may also have its own dependent entity sets, which can be displayed using the same method recursively. In our current case, each entity in the EventLocations data set has a sub-set of entities in the EventStackFrames data set that depends on it; they can be displayed inside of a nested pop-up page by clicking the "EventStackFrames" button there (namely inside of the pop-up page for EventLocations).
This raw data viewing process may not suit everybody's needs since the pages are produced in a generic fashion. If a user would like a more customized view of the log entries, including how they should be laid out, he/she should design and develop his/her own layout and query applications for the log entries, following the instructions given here.
This is slightly off the topic of client-side "logging", but since it is in itself a kind of service-side "logging" that can constitute a useful debugging means, it is useful to know about when developing software against the data service: it can display the sequence of data modification operations performed inside of the data service corresponding to an operation on the client side, like inserting a log entity graph, user login/logoff, etc.
The data changing activities, including the ones changing the EventLogs data set, inside of the data source can be monitored on a web page for the data service, like what is shown in the following snapshot:
Figure: Data source wide change monitoring page.
The data service does not turn such a monitor on by default. One has to set the following item under the <appSettings> node inside of the Web.config for the data service to true before using the monitor, namely
<add key="DataSourceMonitoring" value="true" />
The monitor page has a relative path (URL) of <data source name>/Monitoring, where <data source name> is the name of the data source, which in the present case is AspNetMember. It can be reached from the Main page of the data source by clicking the Monitoring link on top. The Main page is reached from the Home page by clicking the Data Source link.
What is shown here may seem simple; however, it only scratches the surface of a more capable system. The solution provides third parties with a rich set of APIs that can be used to satisfy their logging and/or other event-driven application needs: from simple visual layout, to intelligent query and analysis, and even to providing a basis for real-time systems capable of complex event processing (CEP).
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL):
Architecture of a Semantic Portal on Mobile Business
Ilja Krybus, Karl Kurbel
Chair of Business Informatics
European University Viadrina
POB 1786
D-15207 Frankfurt (Oder)
{krybus|kurbel}@uni-ffo.de
Abstract: Portals on the web are important public sources of information for expert knowledge. They function as powerful gateways that consolidate access and organize information. Existing web technologies provide the means which most current web portals apply. However, they leave some open issues that recent Semantic Web technologies promise to solve. Portals that employ semantic technologies are called semantic portals. In this paper, we present the synopsis of a semantic portal that is dedicated to distributing practical and scientific knowledge on the domain of Mobile Business. We explain the motivation, the architectural considerations, and the current portal prototype. Emphasis is placed on ontology use, request processing, and presentation. The advantages of our process-oriented and multilayered architecture approach are discussed.
1 Introduction
The internet has become one of the most frequently utilized sources for acquiring topical information. Especially information on technologies and other innovations is often available on the internet long before it is printed in journals or books. It may even be disseminated exclusively through the internet. As a consequence, strong dependencies on this source emerged for everybody who relies on such information.
The more one relies on this source, the stronger is the exposure to the problems that arose with the vast and growing amount of information available, such as the distribution of resources, the increasing amount of time needed to locate valuable information, and the limited relevance of results returned by web searches.
A domain for which these problems recently became very obvious is Mobile Business, which can be understood as a subset and descendant of Electronic Business. With the maturing and spreading of corporate systems based on mobile technologies throughout the business landscape, a lot of information has been published online. Nevertheless, finding relevant information on Mobile Business remains difficult for the said reasons.
The concept of portals allows concentrating information for a selective area of interest. A prerequisite for distributing information using portals is to prepare and systematize it for effective use. When doing this, the relatively limited semantic breadth of the considered domain poses an additional challenge. This constraint implies that information managed by the portal shares the same or similar terminology, and that semantically distinct concepts may be verbalized similarly. As a consequence, information becomes hard to discern by syntactical means, even where its meaning differs significantly.
Emerging semantic technologies promise a remedy for such issues. They are designed to enable machines (e.g., Intelligent Agents) to locate and apply contents and services on the web by providing them with the semantics of the available items. An implication of these technologies, i.e. that their semantics can be applied to provide added value to users, is of special interest for solving the existing problems of information supply. Semantic technologies address the meaning rather than plain syntax and allow a more precise treatment and selection of information.
Within the scope of an interdisciplinary joint project on Mobile Business¹ we are developing a semantic portal (the Mobile Internet Business Portal) that will offer consolidated access to practical and scientific knowledge in the field concerned.
In this paper, the state of our current work is described: in the remainder of this section we explain our viewpoint on portals and semantic portals in particular. In section 2, related research on semantic portals is explored. Section 3 covers the portal's architecture, as well as the design considerations that led us to it. It also contains descriptions of selected subsystems of the portal. Emphasis is placed on ontology use, request processing and adaptive presentation of contents (in general and tailored for mobile devices). Section 4 concludes this paper.
1.1 Portals
There are many divergent definitions for portals. Definitions vary between functional and technological foci, and range from describing structured websites to complex information systems. According to the definitions summarized in [De05], portals should be considered web-based application systems, or, "system[s] of integrated programs". In [Ka01] (community web) portals are defined as systems that "essentially provide the means to select, classify and access various information resources (e.g., sites, documents, data) for diverse target audiences (corporate, inter-enterprise, e-marketplace, etc.)." As summarized in [LW05], portals form "a gateway to the web that allows the plethora of information […] to be organized and customized through a single entry point", and are "used to consolidate information from a vast array of resources."
An evaluation [Kr06] has shown that the salient property of portals, i.e. to offer a single point of access, has two major implementations. Portals appear either as self-contained systems that encompass all provided services and contents themselves, or as hubs that collate external resources. With portals like Semantic-Web.org² (in its current design), however, a third kind of portal has emerged: portals which integrate sets of community-managed RDF statements, i.e. a multitude of assertions about facts.
¹ This research is supported by the German Federal Ministry of Education and Research under grant No. 01AK060A.
1.2 Semantic portals
Semantic portals, in a nutshell, are portals that make use of Semantic Web technologies. They "exploit semantics for providing and accessing information" [Ma03], and they "typically provide knowledge about a specific domain and rely on ontologies to structure and exchange this knowledge" [HS04]. Semantic Web technologies are applied to "constructing and maintaining the portal" [Ma03] as well. The degree and the focus of technology usage, however, varies. Examples are given in the next section.
We suppose that semantic portals could take the position of central building blocks in constructing the Semantic Web [Kr06]:
By applying semantic technologies, they demonstrate the value of these technologies to a potentially large audience. Because they are reaching many customers, portals could be employed for popularizing ontologies and establishing naming conventions (e.g., for named entities or domain-specific taxonomies) across the internet. New ontologies could be collaboratively elaborated within semantic portals (cf., e.g., [Zh04]).
As was expressed in [Mc05], adding semantic descriptions to contents significantly increases the efforts spent in designing information bases. This observation is especially true for small collections of information, for which the ratio of ontology utilization versus its elaboration efforts is seemingly poor. When portals handle rather large collections, this ratio becomes more attractive and the said obstacle less decisive.
Interconnecting portals seems to be more efficient than interconnecting diverse small internet resources, since the number of necessary ontology mediations is dramatically reduced. Not least, semantic portals may mediate between the Semantic Web and the current web by wrapping non-semantic contents with their ontologies, thus raising the amount of information that can be located and processed exploiting semantics.
2 Related work
Several applications of semantics are already widely disseminated, e.g. RDF Site Summary³ for distribution of news or blog entries, Friend-of-a-Friend or vCard profiles for communicating contact data, and Dublin Core Elements for bibliographic data. Research regarding semantics has been performed in a broad spectrum, including acquisition, reasoning, searching, mediation, visualization of ontologies, and other topics. Because of the integrative nature of portals, all this is of certain interest. In particular, research that already resulted in a prototype or a production-stable semantic portal is a good reference. Important representatives of such portals are (among others) SEAL/OntoWeb, ODESeW/Esperonto, DERI/SW-Portal, and SWAD-E/SWED⁴.
³ http://web.resource.org/rss/1.0/
The OntoWeb portal [HS04] [Sp02] is an extension of the SEAL framework [Ma03] that serves as a communication and dissemination system for a thematic network. It defines several content types (e.g. person, organization) and content structures using concept definitions in an externally managed ontology. Portal navigation is derived from modeled subclass relations, and search is provided on full text and instance properties. The portal is based on the ZOPE application server and its content management system.
The Esperonto portal [Co03] emphasizes that it is a semantic intra- and extranet portal, which is used to disseminate the results of the Esperonto project. It combines multiple ontologies, which define the type and structure of information that in turn is stored directly within ontologies. Concept subclass relations are used for organizing content. In this solution two front-ends are differentiated, a web-based ontology editor and the portal site. Information is stored in a database, which is accessed through a dedicated ontology API.
The SW-Portal originally served as a community portal within the DERI research network [Zh04a]. After a re-launch⁵, it is now billed as a "public entry point to access semantic web related information" [Zi04]. Compared to other portals, users are much more involved in extending the portal's ontologies [Zh04]. According to [Zh04a], in the SW-Portal servlets access the Jena framework⁶ through intermediate services.
The Semantic Web Environment Directory (SWED), a meta-directory, was built as a demonstration to illustrate the nature of the Semantic Web [Re04]. It combines (partially) automated content acquisition with the ability to create and annotate resources locally. Instead of static subclass relations, a dedicated ontology⁷ is applied to the categorization and inter-linkage of content. The portal's user interface supports facetted browsing. Contents are presented using a template engine that recursively locates templates which have been directly assigned to each concept.
Other semantic systems function as "portal generators" for annotated web content [Hy05], or consolidate access to traditional websites [AP05]. A detailed comparison of four semantic portals (OntoWeb, Esperonto, Mondeca ITM, Empolis K42) is documented in [La04]. An example for a different utilization of semantic technology in portals is the support of inter-portlet communication [Di05] [PP03].
⁴ http://ontoweb.org
⁵ now SemanticWeb.org
Taken together, existing portals have proven the feasibility, potential, and advantages of the semantic approach over traditional web technologies.
Seen from an architectural viewpoint, most semantic portals (like other portals) implement the popular 3-tier concept. On the data tier, the structurally fixed database (or content management system) is usually replaced with an ontology store. On top of this, more valuable semantic search and/or navigation features and metadata-enriched presentation are realized. Internal component interconnection frequently seems to be rather "hardwired"; at least the documentation reveals a certain shortage of more flexible component coordination and processing control. It is also remarkable that some portals almost exclude content produced for the traditional web. As a result, traditional content is not directly managed within the portals. Web front-ends often consist of generic templates (e.g. table-based views). Adaptive presentation of content and support of access from mobile devices remain open issues.
3 The MIB approach to semantic portals
Summarizing the first two sections, the core requirements for semantic portals are:
• to store, organize and manage semantic information (including content, meta-information/annotations, relations and structures),
• to integrate “traditional” web content with semantic information,
• to provide means to semantically access content and to provide this access independently from criteria that are implied in the content (namely the terms used within the content and its specific syntax),
• to supplement semantic applications, e.g. automated content acquisition [Re04] or exchange between portals [HS04] (not dealt with in this paper),
• to present contents adapted to the context of information requests.
In conclusion of section 2, more attention should be paid to the portal architecture by optimizing it for maintenance, extendibility and reusability.
The MIB Portal is developed to meet these requirements. It exploits ontologies for structuring information and for facilitating internal operations. Lessons learnt from successful solutions are taken into consideration.
3.1 Architectural principles
Portals are integrated systems, open in nature, and may thus face rapidly changing demands. A portal that was originally deployed in a certain configuration will hardly remain unchanged over time. In order to provide the necessary degree of adaptability, the portal should be flexible in configuration and design, highly modular, and consist of rather self-contained components, which are loosely coupled to perform requested actions. For cost-effective maintenance and development, high reusability of existing components is desired.
Standard-compliant programming, layering, and the utilization of Design Patterns [Ga97] or abstraction libraries/frameworks can increase the level of adaptability. Aside from these measures, a closer look at possible reasons for changes could reveal further opportunities to create sustainable systems: the need to change may occur in response to modification of (1) technical specifications, (2) use-cases/features, (3) processes and/or information flows, (4) visualization and interaction needs.
As a consequence, to minimize the extent of necessary changes, we decided to replace the common 3-tier architecture in the portal with a different layering scheme. In this scheme, four dedicated layers are distinguished: Service (SL), Business (BL), Process (PL), and Visualization Layer (VL). (A coherent approach, Quasar, which considers two of the layers listed above, is explained in [Ha05].) The architecture is depicted in figure 1.
Figure 1: Mobile Internet Business Portal architecture overview: physical and logical component distribution by applied four layer approach
The SL contains all components whose tasks are mostly technology oriented, rather generic, and independent from concrete business logic. Some examples are database persistence, file management, generation of system-wide unique identifiers, and delivery of asynchronous messages. Interfacing third-party systems is another important task for SL components. The SL components increase the granularity of interfaces and data. Inquirers will not have to deal with elementary, technology-specific items. They will work on compound documents, rather than on system-specific blocks; on frame-like objects, rather than on loose sets of RDF statements, etc. SL components can internally optimize system access and emulate missing functionality, if external systems do not provide it.
At the BL, components encapsulate all logic relevant to performing user tasks, abstracted from specific technologies. Custom treatment of documents or personalization may serve as examples. In contrast to the SL, significant knowledge on both information and its meaning is exploited. Information is dealt with by case-based suppositions. Implemented logic is specialized to solve single facets of specific (business) use-cases. However, the services are not interconnected to processes but remain isolated and atomic.
The orchestration of services, their interconnection and sequencing, is done at the PL. Declarative process models replace the “hardwired” programming. As a result, substitution or extension of actions performed within the process becomes very straightforward. E.g., an existing retrieval process (locate, load, extract, present information) may be enhanced by declaring personalization (e.g. by information filtering) an intermediate step in the process model without reprogramming existing modules. Changes can be applied even online at the serving portal. Variants of existing processes may be easily deployed. Another option is enabling dynamic definition of processes, in which successive actions are inferred within specialized process nodes (i.e. specialized “process actions” in figure 1) that evaluate context and semantic information that resulted from the current action.
Parameters, results, and initialization data make up the process context. Services within a process communicate via this context. Decisions to branch are realized by configurable nodes that consider the context too; services are thus not directly involved. In principle, the context of each process can be persisted on any event. If a user-controlled process is disrupted (e.g., in a mobile scenario: disconnection along the transmission path, suspended work due to situational needs), work can be resumed.
Beyond coordination of services, the PL is responsible for integrating users into the portal. If user interaction is expected, the process switches into a wait-state. A view (e.g., state information, document output, web form) is chosen based on the current process context. The user’s options are derived from the declared process model.
The VL implements the user interface. It adopts the MVC pattern: the model (e.g. a DOM or ontology instance) and state information are retrieved from the process context, and the control delegates processing to the portal’s backend. Only tasks that are specific to presentation (e.g. zoom-in, breadcrumb navigation) are completely handled within this layer. In addition to state-based views, other views are still allowed.
3.2 Use of ontologies
The portal supports complementary ontologies for annotating, classifying and defining content.
An ontology is employed as a system of taxonomies (categorization bodies distinct from subclass relationships) to externally describe contents, which are stored in or linked with the portal. This descriptive ontology adopts the Simple Knowledge Organisation System (SKOS), which was designed to organize knowledge by using controlled vocabulary. Several technical taxonomies describing facets of the domain are transformed into SKOS. For example, terms and structure of the European Nomenclature of economic activities (NACE) statistical taxonomy are translated into concepts and broader/narrower relations of SKOS for expressing industrial sectors.
When publishing their contributions, authors can select categories to assign to their content. From these assignments, semantic navigation and search can be derived, and semantic proximity of contents is detected. Annotating content with category assignments, however, remains optional in the portal. Detailed annotation might enhance a contribution’s placement/ranking, e.g., in searches. Nonetheless, all contents, including those which are not annotated, are accessible.
The ontology also defines properties and rules for structured information; i.e. concepts like “announcement of a conference”, “description of a project” or “profile of a person”. Some of them originate from public ontologies like vCard or FOAF, while others have been derived from typical documents (e.g. the conference-announcement concept from calls for papers).
Concepts may be instantiated and directly stored with (a specific sub-) model. As a result, instances are described through their contents, optional categorization, and the knowledge about their structure. This knowledge can be used to interpret or to adapt information towards the user’s situational needs. While categorization allows selecting relevant contributions, structural knowledge enables picking information items particular to a given context. In the case of multi-access (e.g. supplementary support of mobile devices), the rather verbose output sent to a web client can be reduced to a summary of the most important facts.
Traditional web content (web pages/sites, documents, images, etc.) may be “wrapped” by ontology instances. The latter will then reference either external resources or documents that are stored in the internal content management system. In both cases, the semantic wrapper confers annotation (mainly Dublin Core and relations to categories) to these resources.
The provision of such external meta-information allows jointly managing semantic and traditional web content within the same portal engine. Therefore, all contents of the portal can be registered in the ontology. They are accessible through ontology queries, which may (but do not necessarily need to) facilitate inferred statements besides the asserted ones. Asserted and inferred statements are stored in separate models.
Besides the organization and storage of content, the portal applies ontologies in order to perform several tasks concerning service location, processing, and presentation of information. Among these tasks are ontology-based rendering and information streamlining for mobile access (both are covered in the subsequent subsections).
3.4 Request processing
At process-oriented portlets, incoming action requests are differentiated: if the request is constrained to the VL, then it is handled within the portlet itself. Otherwise, a related process instance is acquired through a process manager. Alternatively, new process instances may be created. Request parameters are transformed into a serialized RDF representation and communicated to the process’s context. Depending on the actions associated with the request, the portlet may cause the process to transit to another state.
In a transition, the declared BL components are obtained by name using the Service Integrator, which also considers the semantic role and location of a service inquirer. Methods of the obtained components are executed, parameterized by the process’s context. The processing results are then stored back to it.
From here to the next wait-state (e.g., a user interaction), the process drives itself. The request is typically served in a synchronous operation, which is sufficient for most user-controlled functionality on a web portal. Optionally, processes can be controlled asynchronously by internal or external “system actors” like intelligent agents or message-driven systems.
Render requests do not cause transitions and do not necessitate parameters to be forwarded to the backend system. A portlet’s view method therefore retrieves the current (wait-) state from the associated process instance and selects a view module that is assigned to or inferred for the given state.
3.5 Ontology-based rendering
A particular subsystem is used for the presentation of contents that are stored as instances of the ontology. Within this system, we deployed templates that adaptively lay out and arrange information for each considered concept (cf. [Re04]). Due to this specialization, it becomes easy to highlight important information and to group coherent facts. Generic views are as yet supported within the development environment only. Reasons to limit the use of generic views are the enforced cutbacks on creative design as well as the focus on an audience whose experience is formed by traditional web portals.
Templates are chosen according to an instance’s concepts (classes) and with respect to their relations and optional context (e.g. the requesting client system). If there are no templates provided for the known direct concepts, a meaningful substitute template is proposed as the result of an analysis of concept relations. Generalization/specialization (subclass) relations are the natural choice for such proposals. The mechanism, however, is not restricted to them.
Most concepts defined within our ontology declare literal and object properties. Object properties in turn can have multiple or very generic concepts as a range. As a consequence, template development could become difficult. In response to this issue, our template mechanism uses recursion if object properties have to be rendered. Thus, e.g., rendering an address (that is modeled as an object property of an organization) will be automatically delegated to an appropriate template, which simplifies specific treatment of domestic vs. international (or other) addresses. The system circumvents inadvertently modeled cycles and unnecessarily deep nesting. Supplementary context parameters are used to control the output from the recursive templates.
Additionally, we distinguish two types of instances at the presentation stage that are indicated by object properties: autonomous and implicit instances.
All instances that are considered individual portal content, and which are comprehensible even without being referred to by the object property, are handled as autonomous instances. They are provided with a public identifier and are locatable using the portal’s browsing or search facilities. Examples are conference announcements, organization profiles and thematic documents. For such instances, there is no need to incorporate full information. Rather, they are usually presented as links within the portal. (In figure 2, the instances linked by the hasOrganizer property would be treated as autonomous instances.)
Figure 2: A ‘conference announcement’ instance references distinct concepts via ‘hasLocation’ and ‘hasImportantDate’ object properties, which are automatically considered, e.g., when sorting instances.
Implicit instances were defined to simplify information input, search and filtering as seen from the user’s viewpoint. Inspired by the object-oriented principle of composition, they are created to hold information which is meaningless in isolation, i.e. without being additionally described by the semantics of a referring object property. Dates, durations, and addresses are examples of this instance type: when they are presented without a context (e.g., a date not in relation to an instance of a conference announcement), then they are barely of any value to an agent. Such instances are never displayed without the surrounding context. Depending on the context, they will be presented within the referring autonomous instance in detail, summarized, as a list entry, or as another layout alternative.
Because it is frequently required to output information as lists or tables, the ontology-based rendering subsystem is extended with a framework developed to allow instances to be meaningfully sorted and grouped. Supported instances can be ordered with respect to a chosen criterion, even if they belong to distinct concepts. For example, if a “conference announcement” instance with the object property “important dates” is assigned the dates (06-08-01) and (06-12-31), and the duration (06-09-21 to 06-09-23), the system will return (06-08-01), (06-09-21 to 06-09-23), (06-12-31) as a meaningful timeline. If instances are not comparable by the criterion, ordering by classes or instance identifiers is used as the default fallback rule. An instance that cannot be compared would be placed below the resulting schedule. Before this final option is chosen, a resolution mechanism similar to the one used for template selection is attempted.
3.6 Mobile access
The portal supports access by mobile devices such as PDAs and smartphones. Mobile devices are discriminated by evaluation of client signatures, which are transmitted with the headers of HTTP requests. Capability profiles are then loaded with respect to the client from a profile database (in first priority: extended WURFL) in order to choose presentation-level protocols (WML, XHTML MP, HTML) and to parameterize further rendering (support of images, width of display, etc.).
Mobile users are offered a tailored subset of the entire portal’s functionality. The navigation is flattened and adapted to the special characteristics of mobile devices. The ontology-based rendering system is reused in order to streamline the information presented for ontology instances.
The presentation of mobile content is special in the portal in that the presented information is pre-selected and reduced to the most important facts. This output streamlining is based on assumptions that have been made on the value of information items and which are considered in the related templates. As an alternative to this approach, we are experimenting with content priorities that are directly modeled within the ontology itself in order to choose the important items more adaptively, regarding the user’s context.
Another option, which is experimentally implemented and addressed for use in mobile devices, is the support of (simultaneous) multimodal access to information, i.e. in the case of the portal’s prototype a combined visual and aural interaction on the basis of the XHTML+Voice [Ax04] standard proposal.
In a multimodal environment, especially the aural interfaces bear new and as yet unsolved challenges: aural interfaces are always fully serialized; users can listen attentively only for a relatively short amount of time; not all visualized information remains meaningful if it is spoken; and sometimes information must be extended to become comprehensible when it is spoken. While the user perceives visual information in an implicit context (e.g., information is grouped and laid out on the display), this implicit context is almost completely lost after serialization. The missing context must thus be added by transforming displayed facts into a set of valid and interconnected sentences.
Figure 3: Semantics applied for multi-device and multimodal output
Ontology-based rendering addresses some of these issues. In the prototype, the missing context is supplemented by applying the modeled semantics of the content. E.g., in an instance of a conference announcement, the conference location is bound to the event by inserting an appropriate transition (cf. figure 3), which is derived from the related object property. Content which a screen-reader application would usually misinterpret (e.g., a duration would be read as a formula) is spoken correctly.
4 Conclusion and future work
In an interdisciplinary project, we develop a semantic portal within the Mobile Business domain to meet the requirements summarized in the preface to section 3. As in other semantic portals, the MIB Portal uses ontologies for the annotation, management, and storage of contents. Additionally, ontologies ground internal decisions on portal operations like the presentation, which is implicitly controlled by the modeled knowledge. With our approach, semantic portals are extended by introducing a dedicated layer that coordinates automatic and user-driven processes. This allows flexibly modifying and extending the portal’s functionality. In many cases even seamless upgrades are possible through the parallel operation of different process versions. The differentiation of the portal’s modules with respect to the proposed four-layer model reduces the impact of changed specifications, and thus, maintenance efforts. High reusability is achieved since the existing modules can be recombined by declaration rather than reprogramming to serve new tasks.
Aside from the rich technological advantages for portal operation and development that the semantic approach provides jointly with the proposed four-layer architecture, as summarized above, the key values are achieved for portal users. The combination of the portal concept with semantic technologies bears the potential to provide systematic access to knowledge in selective domains. At least within the portals, the application of semantics reduces information-related issues that arise from the need to locate relevant information in huge collections.
The current work has concentrated on the technological foundations of the portal. Future work will include the transformation and customization of the portal for the production phase. Supplementary means to semantically access content and further semantic applications will be investigated and implemented. Although rather flexible rendering mechanisms are already realized (cf. sections 3.5 and 3.6), a higher degree of adaptation to context, mobility and multimodality is still desirable. These will be addressed in future research.
References
[AP05] S. Auer, B. Pieterse. Vernetzte Kirche: Building a Semantic Web. In: Proceedings of the ISWC Workshop Semantic Web Case Studies and Best Practices for eBusiness (SWCASE05), 2005.
[Ax04] J. Axelsson et al. XHTML+Voice Profile 1.2, 16 March 2004.
[Co03] O. Corcho et al. ODESeW. Automatic Generation of Knowledge Portals for Intranets and Extranets. In: Fensel, D. et al. (eds.): Proceedings of The Semantic Web - ISWC 2003, Second International Semantic Web Conference, LNCS, Vol. 2870, Springer (2003) 802-817.
[De05] V. Devedžić. Research community knowledge portals. In: Int. J. Knowledge and Learning, Vol. 1, Nos. 1/2 (2005), pp. 96-112.
[Di05] O. Díaz et al. Improving Portlet Interoperability Through Deep Annotation. In: Proceedings of the 14th International World Wide Web Conference (WWW 2005) 372-381.
[Ga97] E. Gamma et al. Design Patterns. Elements of Reusable Object-Oriented Software. Addison-Wesley Professional (1997).
[Ha05] M. Haft et al. The Architect’s Dilemma – Will Reference Architecture Help? In: Reussner et al. (eds.): Quality of Software Architectures and Software Quality, Proceedings of QoSA-SOQUA 2005, LNCS, Vol. 3712, Springer (2005) 106-122.
[HS04] J. Hartmann, Y. Sure. An Infrastructure for Scalable, Reliable Semantic Portals. In: IEEE Intelligent Systems, 19(3):58-65, 2004.
[Hy05] E. Hyvönen et al. MuseumFinland – Finnish Museums on the Semantic Web. In: Journal of Web Semantics, Vol. 3, No. 2 (2005), pp. 25.
[Ka01] G. Karvounarakis et al. Querying RDF Descriptions for Community Web Portals. In: Proceedings of 17iemes Journees Bases de Donnees Avancees (BDA'01), (2001) 133-144.
[Kr06] I. Krybus. Wissensportale im Web. In: Arbeitsberichte Mobile Internet Business, Nr. 4, April 2005, ISSN 1861-3926.
[La04] H. Lausen et al. Semantic Web Portals – State of the Art Survey. Technical Report DERI-TR-2004-04-03, Digital Enterprise Research Institute (DERI) (2004).
[LW05] S. Li, W. A. Wood. Portals in the Academic World: Are they Meeting Expectations? In: Journal of Computer Information Systems, Summer 2005, pp. 32-41.
[Ma03] A. Maedche et al. SEmantic portAL - The SEAL approach. In: Fensel, D. et al. (eds.): Spinning the Semantic Web, MIT Press, Cambridge, MA (2003) pp. 317-359.
[Mc05] R. McCool. Rethinking the Semantic Web, Part I. In: IEEE Internet Computing, Nov./Dec. 2005, pp. 86-88.
[PP03] T. Priebe, G. Pernul. Towards Integrative Enterprise Knowledge Portals. In: Proceedings of the 12th Intl. Conference on Information and Knowledge Management (CIKM 2003).
[Re04] D. Reynolds et al. Semantic Portals Demonstrator – Lessons Learnt. SWAD-Europe deliverable 12.1.7 (2004).
[Sp02] P. Spyns et al. OntoWeb - A Semantic Web Community Portal. In: Karagiannis, D. and Reimer, U. (eds.): Proceedings of Practical Aspects of Knowledge Management, 4th International Conference, PAKM 2002, LNCS, Vol. 2569, Springer (2002), pp. 189-200.
[Zh04] A. V. Zhdanova. The People's Portal: Ontology Management on Community Portals. In: Proceedings of the 1st Workshop on Friend of a Friend, Social Networking and the Semantic Web (FOAF'2004), pp. 66-74.
[Zh04a] A. V. Zhdanova et al. SW-Portal Prototype: Semantic DERI Use Case. Deliverable 15, 2004-08-31, Digital Enterprise Research Institute (DERI) (2004).
[Zi04] K. Zimmermann. Usage Scenarios (Relaunch SW). Deliverable 20 v0.3, 15 February 2005, Digital Enterprise Research Institute (DERI) (2004).
Posted 16 Jan 2013
I am having some problems with getting Telerik data forms to work correctly. Everything seems fine, except that any field which is an enum does not persist its values back to the original entity. So I can create new entities and edit existing entities, but the enum fields are uneditable.
I have tried letting the data form auto create the fields, I have tried specifying the fields in XAML, I have tried using a simple viewmodel with auto-implemented fields and tried using a backing store with INotifyPropertyChanged events.
Nothing works. The setter on the enum fields is just never called, although I can trace the setter on every other field getting called.
Entity Code:
public class EventTile
{
[GenericListEditor(typeof(EventTypeInfoProvider))]
public string EventType { get; set; }
public string Title { get; set; }
public DateTime EventDate { get; set; }
public TileSizeEnum TileSize { get; set; }
public TileTypeEnum TileType { get; set; }
}
Save Method on DetailsPage.xaml.cs
private void Save_Click(object sender, EventArgs e)
{
radDataForm.Commit();
App.ViewModel.SaveEventTiles();
NavigationService.GoBack();
}
XAML
<telerikInput:RadDataForm x:
<Grid>
<telerikInput:DataField
<telerikInput:DataField
<telerikInput:DataField
<telerikInput:DataField
<telerikInput:DataField
</Grid>
</telerikInput:RadDataForm>
In no case can I get the TileSize or TileType values to be updated correctly. Every other field works as expected. Any help would be appreciated.
I have written a sample program where we can create our own printf-style function with log levels. At present I am just supporting the options to print a character, a string and an integer.
#include <stdio.h>
#include <stdarg.h>

/* Log levels */
typedef enum {
    LOG_DEFAULT,
    LOG_INFO,
    LOG_ERROR,
    LOG_DEBUG
} LOG_LEVEL;

void LOG_TRACE(LOG_LEVEL lvl, char *fmt, ...);

int main()
{
    int i = 10;
    char *string = "Hello World";
    char c = 'a';

    LOG_TRACE(LOG_INFO, "String - %s\n", string);
    LOG_TRACE(LOG_DEBUG, "Integer - %d\n", i);   /* filtered out by the level check */
    LOG_TRACE(LOG_INFO, "Character - %c\n", c);
    LOG_TRACE(LOG_INFO, "\nTOTAL DATA: %s - %d - %c\n", string, i, c);
    return 0;
}

/* LOG_TRACE(log level, format, args) - prints only for LOG_INFO and LOG_ERROR */
void LOG_TRACE(LOG_LEVEL lvl, char *fmt, ...)
{
    va_list list;
    char *s, c;
    int i;

    if ((lvl == LOG_INFO) || (lvl == LOG_ERROR)) {
        va_start(list, fmt);
        while (*fmt) {
            if (*fmt != '%')
                putc(*fmt, stdout);
            else {
                switch (*++fmt) {
                case 's':  /* fetch the next argument as a string */
                    s = va_arg(list, char *);
                    printf("%s", s);
                    break;
                case 'd':  /* fetch the next argument as an int */
                    i = va_arg(list, int);
                    printf("%d", i);
                    break;
                case 'c':  /* char is promoted to int in varargs */
                    c = va_arg(list, int);
                    printf("%c", c);
                    break;
                default:
                    putc(*fmt, stdout);
                    break;
                }
            }
            ++fmt;
        }
        va_end(list);
    }
    fflush(stdout);
}
Autocomplete and "Find Usages" stopped working properly in QtCreator 2.4.0 (on Windows 7 64-bit)
I use QtCreator 2.4.0 on Windows 7 64-bit. Until a couple of days ago autocomplete was working fine.
- Suddenly it no longer shows any completion suggestions for many member variables in various classes.
- Also I can no longer get the hyperlink to definitions and declarations of members and methods by holding down Ctrl and hovering over over them. This occurs for some items, not all.
- In addition, the "Find Usages" command no longer works for those same items that have the above problem.
Is there a QtCreator-generated database that it produces for each session or each project? I could try deleting those to see if they get recreated properly. At least that's what I had to do in Visual Studio.
I have tried Tools->C++->Update Code Model, but that did not help. I also deleted the session file C:\Users<username>\AppData\Roaming\Nokia\qtcreator<session>.qws, and recreating it - that did not help.
Found the issue, although I'm not sure if this is considered a QtCreator bug or not.
I had 3 projects in my session. Two of those projects have a class called MainWindow. When I Ctrl+clicked on the constructor declaration in one project, it took me to the MainWindow constructor definition in the other project. Since the second project does not have all the members and methods of the first, the hyperlinking and connectivity was all messed up.
The question is why QtCreator was pairing the declarations and definitions from two different projects even though each project had both declarations and definitions for its respective classes.
I understand that using namespaces would have prevented this, but regardless of following that good practice or not, it seems that QtCreator should have paired them correctly (after all, it was working fine for the longest time).
- koahnig (Moderator)
Thanks for sharing the reason for your problem.
I am not sure either, if it is a bug or something has been messed up over time. I will see that someone knowledgeable in Qt Creator will have a look at your post.
MongoDB (from “humongous”) is an open-source document-oriented database system developed and supported by 10gen. It is part of the NoSQL family of database systems. It is extremely easy to install and use and supports most popular programming languages. Here is a simple Java application to add and query data.
import com.mongodb.MongoClient;
import com.mongodb.MongoException;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCursor;

// Minimal add-and-query example using the legacy 2.x driver API the imports
// above belong to; assumes a local mongod on the default port.
public class MongoExample {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DBCollection people = client.getDB("test").getCollection("people");
        people.insert(new BasicDBObject("name", "Paul"));                 // add
        DBCursor cursor = people.find(new BasicDBObject("name", "Paul")); // query
        while (cursor.hasNext()) System.out.println(cursor.next());
        client.close();
    }
}
This document introduces fundamental concepts related to the design of repetitious processes. Readers of this document may benefit from a review of Flowcharting Symbols and Logical Control Structures. Readers who have difficulty rendering flowcharts are provided with links to alternative text-based outlines prior to each flowchart below. For specific examples of loop algorithms and C++ Language code, view the web pages entitled Example of a Counting Loop (Repetition Structure) and Analysis of an accumulation using the repetition structure.
A loop repeats one or more steps for as long as a controlling condition holds. The condition is normally based on the value of a single variable known as the control variable. The step or steps to be repeated are referred to as the loop body. Each execution of those steps is referred to as a pass or iteration. The step at which the body starts is known as the loop entrance and the step at which the test to branch back fails (causing repetition to cease) is known as the loop exit. The illustration below shows two variations of control structure commonly used by analysts when designing repetitive processes. Readers who have difficulty rendering flowcharts can read the alternative [text-based outlines] for these examples instead.
The major issues involved in loop design are: structure, method of control, and boundary conditions. These will be discussed in detail in the following paragraphs, but are listed briefly below to introduce them.
When programmers want to repeat a step within a program a known number of times, some additional steps will be required to implement the repetition. A test must be added to determine if a repetition should take place or end. Other additional steps might also be required if the repetition is dependent on controlling a counter of events (passes). Loops that perform their test at the loop entrance are said to use a pretest (leading) decision, and those with their tests after their entire body use a posttest (trailing) decision. Programming languages offer a variety of statements to implement the two repetitive control structures. Beware that many of the keywords used to do this are not handled in the same manner by each language. Some of the more popular keywords are: while, for, do, and until. Sometimes these are used in combination. Students are cautioned to avoid using such keywords to describe the structure of a loop because each language uses these words differently. Authors of textbooks about the C++ Language often refer to leading decision loops as "while-do" loops, and refer to trailing decision loops as "do-while" loops. Others call trailing decision loops "repeat-until" loops. Do not fall into this bad habit of using language keywords to discuss logical control structure. Stick with the terms "leading" and "trailing" decision. They are not language-dependent.
Many loops can be written using either the pretest or posttest decision structure without any detrimental effect on the results. But (for example) if I was writing a loop to repeatedly display lines on a report that were based on complex calculations, I would probably choose to use the posttest decision structure. The trailing position of the test would guarantee that I would see at least one line of output even if the conditions forced an exit after the first pass. If my program's calculations were producing flawed results and I chose (unwisely) to use a pretest decision structure, the test might force an exit from the loop before I (or the user) had a chance to see any of the values causing the trouble. In a different program involving erasure of data or activation of dangerous equipment, I might want to guarantee that a test was always performed prior to any action being done in the body of a loop (because its action was hazardous). In that case, I would choose to use the pretest decision structure.
Novice programmers often develop loops that perform their test in the middle of the body as shown in the illustration below. This approach appears to be quite logical and is often more efficient than either the leading or trailing decision approaches. And yet, many modern programming languages have no command to implement this approach because it has been determined to be more likely to contain flaws and more difficult to debug. Any loop design that tests in the middle can be redesigned to test at the entrance or exit, but will usually require the addition of some extra steps (overhead) to accomplish its objective. The benefit is that almost all programming languages have commands to implement both the leading and trailing decision structures.
The choice of control method is dictated by whether the decision to perform repetitious steps is supposed to be controlled by the user or by the programmer. If the decision to repeat is to be based on a value entered by the user, then the control method is sentinel (a.k.a. external) control. If the decision to repeat is to be based on a value established and controlled solely by the programmer without any input by the user, then the control method is counting (or internal) control. In some loops, the decision condition is not as simple. It is based on more than one factor; one user-defined, another programmer-defined. Situations like that use hybrid (combined) control and involve more complex conditional expressions.
It is important to draw a distinction between loops that involve counting as part of their purpose and other loops that use counting as their method of control. Just because a loop involves counting, does not guarantee that its control method is based on the value of the counter. So, not all loops that count are "counting controlled" loops. Sentinel loops might also do some counting.
Consider the following illustration that shows two different structural approaches that could be used in designing a loop that requires the counting method of control employing a counter variable labeled C. The objective of the loop is to display the word "Hello" five times on separate lines. In this example, the counter C has nothing to do with the action to be repeated (display of the word) except to control how many times the action will take place. Some loops do contain bodies that involve the counter. This would be the case in this example if the object of the body was to display the value of the counter instead of the word "Hello", in which case the output would be a column of numerals (1 through 5). (For an example of such a loop see the web page Example of a Counting Loop (Repetition Structure).)
The flowchart below on the left shows the original process using the posttest decision structure. The flowchart below on the right shows the original process using the pretest decision structure. Readers who have difficulty rendering flowcharts can read the alternative [text-based outlines] for these examples instead. As stated in the section above about structure, the choice of one looping structure over another often has no effect on the ability of the structure to accomplish its objectives. Both of the structures below will work equally well to accomplish the task.
The comments included in the flowcharts above relate to the fact that all counting controlled loops contain (at least) four basic elements. These are:
These elements do not always occur in the order shown above, but they are always present (in counting controlled loops).
The C++ source code for the trailing decision approach shown in the flowchart above on the left would be
#include <iostream>     /* Standard Input/Output header file */
using namespace std;

int main()
{
    int C = 1;              /* Initialize counter to start at one */
    do                      /* Start a pass through the loop */
    {
        cout << "Hello\n";  /* This step is the "body" of the loop */
        C = C + 1;          /* Increment C by one */
    }                       /* End the pass through the loop */
    while (C <= 5);         /* Test for repetition/exit AFTER the body */
    return 0;               /* Return zero error code to parent process */
}
Note the need for the braces { } surrounding the body and increment. The do statement must contain the actions to be repeated. It (like most branching-oriented statements in C++) can perform only one statement, so if we need to have more than one performed, they need to be enclosed in braces to have C++ treat them as a compound statement. The while statement performs the test of the parenthesized condition. If it is true, control branches back to the do statement. If not, control continues in sequence to the next statement following the while statement.
The C++ source code for the leading decision approach shown in the flowchart above on the right would be
#include <iostream>     /* Standard Input/Output header file */
using namespace std;

int main()
{
    int C = 1;              /* Initialize counter to start at one */
    while (C <= 5)          /* Test for repetition/exit BEFORE the body */
    {                       /* Start a pass through the loop */
        cout << "Hello\n";  /* This step is the "body" of the loop */
        C = C + 1;          /* Increment C by one */
    }                       /* End the pass through the loop */
    return 0;               /* Return zero error code to parent process */
}
We also need braces { } surrounding the body and increment here. The while statement performs the test and, if the parenthesized condition is true, executes the single (or braced) statement(s) beyond the while statement. Then control branches back to the while statement to perform the test again. If the condition is false, control branches ahead (in the code) to the next statement following the single (or braced) statement(s) beyond the while statement (in this example: return 0).
In the loops above, the value of the control variable is not directly involved in the steps being repeated. The counter must step through five values, but the actual values are not intimately involved in the process being repeated. In such cases, any values for the counter would be acceptable as long as the desired quantity of repetitious passes occurs. The counter could run from 1 to 5, 11 to 15, or down from 5 to 1, and produce the same five events. The counter could also step using increments other than 1. For example, the following automatic looping statement (although a bit odd) would accomplish the goal of the "Hello" loop above:
float N;
for ( N = 1.1; N <= 1.5; N = N + 0.1 )
    cout << "Hello\n";
(If you are not familiar with automatic "for loops", look in your textbook in chapter 5.) Often when designing loops, the exit value of the control variable is important to us. For example, consider the following loop:
int C;
C = 1;
while (C <= 5)
{
    cout << C << "\n";
    C = C + 1;
}
In the loop above, the variable C would have an exit value of 6, because it had to exceed 5 for the test to produce a false result and allow an exit from the loop. However, the last value that the user would see would be 5. The loop design could be altered to guarantee that the last value displayed was the exit value. One such design would be:
int C;
C = 0;
while (C < 5)
{
    C = C + 1;
    cout << C << "\n";
}
Note that:
Designers should always consider the entrance and exit values of all variables that are to be affected by a loop.
Zaurus 3200 Trying to Debug Python Program
on pymoney I can now get a divide by zero error when I tap on a budget icon
( I have Posted The Traceback To The sourceforge List ) &
I have Tried a google search and am part way through the python docs
I thought I would install pdb (the python debugger) and have a go myself ... so the FUN Starts
I Cannot find a python-pdb ipkg ??
import pdb does not work
Ok so I then installed python-pylint & python-pychecker
Both appear to have an incorrect first line in /usr/bin/pychecker & /usr/bin/pylint
(They both say that python is in /home/hrw/..I686 Directory & the Lib directory is set incorrectly also ...) | http://www.oesf.org/forum/lofiversion/index.php/t22002.html | CC-MAIN-2017-47 | en | refinedweb |
COSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a cluster
1 COSC 6374 Parallel Computation Parallel I/O (I) I/O basics Spring 2008 Concept of a cluster (figure: a compute node with two processors, memory, local disks, and two network cards connected to a message-passing network and an administrative network) 1
2 I/O Problem (I) Every node has its own local disk no globally visible file system. Most applications require data and executable to be locally available e.g. an MPI application using 100 nodes requires the executable to be available on all nodes in the same directory using the same name. I/O problem (II) Current processor performance: e.g. Pentium 4 3 GHz ~ 6 GFLOPS. Memory bandwidth: 133 MHz * 4 * 64 bit ~ 4.26 GB/s. Current network performance: Gigabit Ethernet: latency ~ 40 µs, bandwidth = 125 MB/s; InfiniBand 4x: latency ~ 5 µs, bandwidth = 1 GB/s. Disk performance: latency: 7-12 ms, bandwidth: ~20-60 MB/s 2
3 UNIX File Access Model (I) A File is a sequence of bytes When a program opens a file, the file system establishes a file pointer. The file pointer is an integer indicating the position in the file, where the next byte will be written/read. Multiple processes can open a file concurrently. Each process will have its own file pointer. No conflicts occur, when multiple processes read the same file. If several processes write at the same location, most UNIX file systems guarantee sequential consistency. (The data from one of the processes will be available in the file, but not a mixture of several processes). 3
4 UNIX File Access Model (II) 4. File system operations Caching and buffering improve performance Avoiding repeated access to the same block Allowing a file system to smooth out I/O behavior Non-blocking I/O gives users control over prefetching and delayed writing Initiate read/write operations as soon as possible Wait for the finishing of the read/write operations only when absolutely necessary. 5
6 Distributed File Systems vs. Parallel File Systems Offer access to a collection of files on remote machines Typically client-server based approach Transparent for the user Concurrent access to the same file from several processes is considered to be an unlikely event in contrast to parallel file systems, where it is considered to be a standard operation Distributed file systems assume different numbers of processors than parallel file systems Distributed file systems have different security requirements than parallel file systems NFS Network File System Protocol for a remote file service Client server based approach Stateless server (v3) Communication based on RPC (Remote Procedure Call) NFS provides session semantics changes to an open file are initially only visible to the process that modified the file File locking not part of NFS protocol (v3) File locking handled by a separate protocol/daemon Locking of blocks often supported Client caching not part of the NFS protocol (v3) depending on implementation E.g. allowing cached data to be stale for 30 seconds 6
7 NFS in a cluster Front-end node hosts the file server NFS in a cluster (II) All file operations are remote operations file server (= NFS server) = bottleneck Extensive usage of file locking required to implement sequential consistency of UNIX I/O Communication between client and server typically uses the slow communication channel on a cluster Do we use several disks at all? Some inefficiencies in the specification, e.g. a read operation involves two RPC operations Lookup file-handle Read request 7
8 Parallel I/O Basic idea: disk striping Stripe factor: number of disks Stripe depth: size of each block Disk striping Requirements for improving disk performance: Multiple physical disks Separate I/O channels to each disk Data transfer to all disks simultaneously Problem of simple disk striping: Minimum stripe depth (sector size) required for optimal disk performance since file size is limited, the number of disks which can be used in parallel is limited as well Loss of a single disk makes entire file useless Risk to loose a disk is proportional to the number of disks used RAID (Redundant Arrays of Independent Disks see lecture 2) 8
9 Parallel File Systems Goals Several processes should be able to access the same file concurrently Several processes should be able to access the same file efficiently Problems Unix sequential consistency semantics Handling of file-pointers Caching and buffering Concurrent file access logical view Number of compute and I/O nodes need not match (figure: blocks from compute nodes form the logical view of a shared file, distributed across I/O nodes and disks) 9
10 Concurrent file access opening a file Each I/O node has a subset of the blocks File system needs to look up where the file resides Each I/O node maintains its own directory information or Centralized name service File system needs to look up striping factor (often fixed) When creating a new file, the file system has to choose different I/O nodes for holding the first block to avoid contention Concurrent write operations How to ensure sequential consistency? File locking Prevents parallelism even if processes write to different locations in the same file (false sharing) Better: locking of individual blocks Parallel file systems often offer two consistency models Sequential consistency A relaxed consistency model application is responsible for preventing overlapping write-operations 10
11 File pointers In UNIX: every process has a separate file pointer (individual file pointers) Shared file pointers often useful (e.g. reading the next piece of work, writing a parallel log-file) On distributed memory machines: slow, since somebody has to coordinate the file pointer Can be fast on shared memory machines General problems: file pointer atomicity Non blocking I/O Explicit file offset operations: each process tells the file system where to read/write in the file no update to file pointers! Buffering and caching Client buffering: buffering at compute nodes Consistency problems (e.g. one node writes, another tries to read the same data) Server buffering: buffering at I/O nodes Prevents concatenating several small requests to a single large one => produces lots of traffic 11
12 Example for a parallel file system: xfs Anderson et al., 1995 Storage server: storing parts of a file Metadata manager: keeps track of data blocks Client: processes user requests (figure: clients talking to managers and storage servers) xfs continued Communication based on active messages Uses fast networking infrastructure Log-based file system Modifications to a file are written to a log-file and collectively written to disk To find a data block, a separate table (imap) holds inode references to the position in the log-file Log-file is distributed among several processes using RAID techniques Storage servers are organized in stripe groups I.e. not all storage servers are participating in all operations A globally replicated table stores which server belongs to which stripe group Each file has a manager associated with it Manager map: identifies manager for a specific file 12
13 xfs continued again Locating a block: Starting point: file, data Directory returns file id Manager map returns metadata manager Metadata manager returns exact location of inode in the log: stripe group id, segment id and offset in segment Client computes on which server the block really is (figure: file/data -> directory -> fid -> manager map -> imap -> stripe group map) Client caching in xfs xfs maintains a local block cache Based on block caching Request of write permission transfers the ownership Manager keeps track of where a file block is cached Collaborative caching Manager transfers most recent version of a data block directly from one cache into another cache
14 xfs versus NFS

Issue               NFS v3                    xfs
Design goals        Access transparency       Server-less system
Access model        Remote                    Log-based
Communication       RPC                       Active msgs.
Client process      Thin                      Fat
Server groups       No                        Yes
Name space          Per client                Global
Sharing semantics   Session                   UNIX
Caching unit        Implementation dep.       Block
Fault tolerance     Reliable communication    Striping

Summary Parallel I/O is a means to decrease the file I/O access times of parallel applications Performance relevant factors: Stripe factor Stripe depth Buffering and caching Non-blocking I/O Parallel file systems offer support for concurrent access of processes to the same file: Individual file pointers Shared file pointers Explicit offset Distributed file systems are a poor replacement for parallel file systems 14
Flow Control
After this first introduction to C#, we'll examine flow control and control structures. We'll need this information to implement code that is executed only under certain circumstances.
If/Else
Conditional execution is a core component of every programming language. Just like C and C++, C# supports If statements. To see how If statements work, we've implemented a trivial example:
using System;

class Hello
{
    public static void Main()
    {
        int number = 22;

        if (number > 20)
            Console.WriteLine("if branch ...");
        else
        {
            Console.WriteLine("else branch ...");
        }
    }
}
Inside the C# program, we define an integer value. After that the system checks whether the value is higher than 20. If the condition is true, the code inside the If block is executed. Otherwise, the Else branch is called. It's important to mention that the blocks should be marked with parentheses, but this is not a must. Parentheses are normally used to make the code clearer.
When the program is called, one line is displayed:
if branch ...
As we expected, Mono called the If branch.
However, in many real-world scenarios, simple If statements are not enough. It's often useful to combine If statements. When working with Mono and C#, this is no problem:
using System; class Hello { public static int Main(String[] args) { Console.WriteLine("Input: " + args[0]); if (args[0] == "100") { Console.WriteLine("correct ..."); return 0; } else if (args[0] == "0") { Console.WriteLine("not correct ..."); } else { Console.WriteLine("error :("); } return 1; } }
This program is supposed to tell us whether a user has entered a correct number. If 0 is passed to the program, we want a special message to be displayed. Our problem can be solved with the help of else if because it can be used to define a condition inside an If statement. The comparison operator demands some extra treatment. As you can see, we use == to compare two values with each other.
Do not use the = operator for checking whether two values are the same. The = operator is used for assigning values it isn't an operator for comparing values. The C and C++ programmers among you already know about this subject matter.
The way data is passed to the program is important as well. The array called args contains all the values that a user passes to the script. Indexing the array starts with zero. Let's see what happens when we call the program with a wrong number:
[hs@duron mono]$ mono if.exe 23 Input: 23 error :(
In this case, a message is displayed.
Case/Switch Statements
Especially when a lot of values are involved, If statements can soon lead to unclear and hard-to-read code. In this case, working with case/switch statements is a better choice. In the next example, we see how the correct translation of a word can be found:
using System; class Hello { public static int Main() { String inp; String res = "unknown"; // Reading from the keyboard Console.Write("Enter a value: "); inp = Console.ReadLine(); Console.WriteLine("Input: " + inp); // Read the translation switch(inp) { case "Fernseher": res = "TV"; break; case "Honig": res = "honey"; break; case "Geschlecht": case "Sex": res = "sex"; break; } Console.WriteLine("Result: " + res); return 0; } }
First of all, we read a string. To fetch the values from the keyboard, we use the ReadLine method, which is part of the Console object. After reading the value, we call Console.WriteLine and display the value. Now the switch block is entered. All case statements are processed one after the after until the correct value is found.
One thing has to be taken into consideration: A case block is not exited before the system finds a break statement. This is an extremely important concept. If you use switch, case, and break cleverly, it's possible to implement complex decision trees. A good example is the words Geschlecht and Sex. In German, the words are different, but they have the same English translation. Because we do not use a break in the Geschlecht block, C# jumps directly to the Sex block where the correct word is found. In this block, a break statement is used and so the switch block is exited. Many advanced programmers appreciate this feature.
Let's compile and execute the program:
[hs@duron mono]$ mono case.exe Enter a value: Fernseher Input: Fernseher Result: TV [hs@duron mono]$ mono case.exe Enter a value: Geschlecht Input: Geschlecht Result: sex
As you can see, the correct result has been found.
Case/Switch statements also provide default statements. Default values help you to define the default behavior of a block if no proper values are found. Using strings in Switch statements isn't possible in most other languagethat's a real benefit of C#. | http://www.informit.com/articles/article.aspx?p=101325&seqNum=4 | CC-MAIN-2019-04 | en | refinedweb |
CloudShell's OOB Orchestration
Every CloudShell installation includes out of the box workflows. These reflect some common workflows we see across many of our customers that we’ve decided to integrate as default behavior. The OOB setup and teardown processes handle App deployment and startup, connectivity, App discovery and installation. The OOB Save and Restore processes are used for saving the sandbox state and restoring it as a new sandbox. The setup and teardown OOB scripts are included as part of the default blueprint template as of CloudShell 7.1, while the Save and Restore OOB scripts are included starting with CloudShell 9.0.
In this article:
Setup and Teardown OrchestrationOBOBOB.
Save and Restore Orchestration
Note that these orchestration scripts apply to customers who have purchased the Save and Restore paid add-on. Contact your account manager to obtain a license.
Starting with CloudShell 9.0, Save and Restore scripts are provided to support the capability to save and restore sandboxes. They reside in a python package called cloudshell-orch-core. The OOB default blueprint template includes these orchestration scripts and a reference to the cloudshell-orch-core package (required by these scripts) using the requirements.txt mechanism. Here is the implementation of the OOB Save script:
from cloudshell.workflow.orchestration.sandbox import Sandbox sandbox = Sandbox() sandbox.execute_save()
By running the
execute_save method on a sandbox, the script will call a server logic that will create a saved sandbox. For details about the saving process, see the CloudShell Help.
Extending the OOB Save Orchestration Script
You can extend the OOB Save script to execute custom steps before or after the default sandbox save process takes place.
To do this, simply add your custom code before or after the line that executes the Save operation. For example, a Save orchestration script that sends a simple notification email when the Save operation completes:
from cloudshell.workflow.orchestration.sandbox import Sandbox import smtplib sandbox = Sandbox() sandbox.execute_save() # code for sending email notification: server = smtplib.SMTP('smtp.gmail.com', 587) server.ehlo() server.starttls() server.ehlo() #Next, log in to the server server.login("<sender_username>", "<sender_password>") #Send the mail msg = "Sandbox was saved successfully" server.sendmail("<sender_email>", "<target_email>", msg)
Extending the OOB Restore Orchestration Script
You can also extend the OOB Restore script to execute custom functionality at any point during the default sandbox restore process. The Restore script is a part of the sandbox setup process, and actually replaces the setup. Out of the box, the setup and restore logic are identical. However, if you customized the Setup script and you want the same customized script to be launched when restoring a sandbox, you should customize the Restore script as well, as the Restore script is the one that is being launched in a restored sandbox’s setup phase. It is also possible to customize the Restore script to have a different logic than the Setup script, to create a logic that is relevant only for restored sandboxes. For detailed explanations on how to extend the script’s stages and use its extension methods, see the Setup and Teardown Orchestration section above.
For example, a Restore script that writes a message to the Output console before the Restore workflow operation (to extend the workflow operation itself, use the above extension methods in the Extending the OOB Setup Orchestration Scripts section above):
from cloudshell.workflow.orchestration.sandbox import Sandbox from cloudshell.workflow.orchestration.setup.default_setup_orchestrator import DefaultSetupWorkflow sandbox = Sandbox() def func(sandbox, components): sandbox.automation_api.WriteMessageToReservationOutput(sandbox.id, "my custom message") DefaultSetupWorkflow().register(sandbox) sandbox.workflow.add_to_configuration(func, None) sandbox.execute_restore()
As you can see, to use the default orchestration logic, we instantiated the DefaultSetupWorkflow class and registered the sandbox to use the default Setup orchestration) | https://devguide.quali.com/orchestration/9.0.0/the-oob-orchestration.html | CC-MAIN-2019-04 | en | refinedweb |
- Tutorials
- 2D UFO tutorial
- Counting Collectables and Displaying Score
Counting Collectables and Displaying Score
Checked with version: 5.2
-
Difficulty: Beginner
In this assignment we'll add a way for our player to count the collectibles they've picked up, and to display a "You win!" message once they've collected them all.
Counting Collectables and Displaying Score
Beginner 2D UFO tutorial
Transcripts
- 00:04 - 00:06
Now we need a tool to
- 00:06 - 00:09
store the value of our counted collectables.
- 00:10 - 00:12
And another tool to add
- 00:12 - 00:14
to that value as we
- 00:14 - 00:16
collect and count them.
- 00:16 - 00:18
Let's add this tool to our
- 00:18 - 00:20
PlayerController script.
- 00:21 - 00:23
Select the Player game object
- 00:24 - 00:27
and open the PlayerController script for editing.
- 00:31 - 00:34
Let's add a private variable
- 00:35 - 00:37
to hold our count
- 00:37 - 00:39
of collectables we've picked up.
- 00:39 - 00:41
This will be an int,
- 00:41 - 00:43
as our count will be a whole number
- 00:43 - 00:46
we won't be collecting partial objects.
- 00:46 - 00:48
And let's call it Count.
- 00:50 - 00:54
Type private int count;
- 00:55 - 00:57
So in our game we will
- 00:57 - 00:59
first start with a count of 0.
- 01:00 - 01:02
Then we will need to increment
- 01:02 - 01:04
our count value by 1
- 01:04 - 01:06
when we pick up a new object.
- 01:08 - 01:11
We need to set our count value to 0.
- 01:12 - 01:14
As this variable is private
- 01:14 - 01:16
we don't have any access to it
- 01:16 - 01:18
in the inspector.
- 01:18 - 01:20
This variable is only available
- 01:20 - 01:23
for use within this script.
- 01:23 - 01:25
And as such we will need to set
- 01:25 - 01:28
it's starting value here in the script.
- 01:28 - 01:30
There are several ways we can set the starting value
- 01:30 - 01:32
of count but in this assignment
- 01:32 - 01:35
we will do it in the Start function.
- 01:35 - 01:38
In Start set our count
- 01:38 - 01:40
to be equal to 0.
- 01:41 - 01:44
Type count = 0
- 01:45 - 01:47
Next we need to add to count
- 01:47 - 01:50
when we pick up our collectable game objects.
- 01:51 - 01:53
We will pick up our objects
- 01:53 - 01:56
in onTriggerEnter2D
- 01:56 - 01:58
if the other game object
- 01:58 - 02:00
has the tag Pickup.
- 02:00 - 02:03
So this is where we add our counting code.
- 02:04 - 02:06
After setting the other game object's
- 02:06 - 02:08
active state to false
- 02:08 - 02:10
we'll set our new value
- 02:10 - 02:12
for count to be equal to our
- 02:12 - 02:14
old value plus 1.
- 02:16 - 02:21
We'll type count = count + 1.
- 02:23 - 02:25
There are other ways to add,
- 02:25 - 02:27
count up or increment a value
- 02:27 - 02:29
when coding in Unity,
- 02:29 - 02:32
but this one is very easy to understand
- 02:32 - 02:34
and this is the one that we're going to use in this assignment.
- 02:35 - 02:38
Let's save our script and return to Unity.
- 02:42 - 02:44
Now we can store and
- 02:44 - 02:46
increment our count but we have no
- 02:46 - 02:48
way of displaying it.
- 02:48 - 02:50
It would also be good to display a message
- 02:50 - 02:52
when the game is over.
- 02:53 - 02:55
To display text we will be using
- 02:55 - 02:57
Unity's user interface,
- 02:57 - 02:59
or UI toolset.
- 03:00 - 03:02
First let's create a new
- 03:02 - 03:04
UI text element
- 03:04 - 03:07
from the hierarchy's Create menu.
- 03:13 - 03:15
We seem to have gotten more than we've bargained for.
- 03:15 - 03:17
We don't just have a UI text element
- 03:17 - 03:19
but we've also created a
- 03:19 - 03:21
parent canvas element
- 03:21 - 03:25
and an EventSystem game object.
- 03:26 - 03:28
These are all required
- 03:28 - 03:30
by the UI toolset.
- 03:30 - 03:32
The single most important thing
- 03:32 - 03:35
about these additional items is that
- 03:35 - 03:37
all UI elements must
- 03:37 - 03:39
be the child of a canvas
- 03:39 - 03:41
to behave correctly.
- 03:41 - 03:44
For more information on the UI tools
- 03:44 - 03:47
including the canvas and the EventSystem
- 03:47 - 03:50
please see the information linked below.
- 03:51 - 03:53
Let's rename the text element
- 03:53 - 03:55
CountText.
- 04:00 - 04:02
With the CountText still highlighted
- 04:03 - 04:05
place the cursor over the scene view
- 04:05 - 04:07
and press the F key to frame select it.
- 04:09 - 04:11
Let's zoom out so that we can see
- 04:11 - 04:13
where the text is positioned
- 04:13 - 04:15
relative to the entire canvas.
- 04:17 - 04:19
If for some reason this doesn't appear in
- 04:19 - 04:21
the centre then we can use the
- 04:21 - 04:23
context sensitive gear menu
- 04:23 - 04:26
to reset the rect transform.
- 04:28 - 04:30
Now that our text is centred let's
- 04:30 - 04:31
customise this element a bit.
- 04:31 - 04:33
The default text is a bit dark,
- 04:33 - 04:35
let's make the text colour yellow
- 04:35 - 04:37
so it's easier to see.
- 04:38 - 04:40
Click on the colour swatch
- 04:42 - 04:44
in the colour picker let's set
- 04:44 - 04:47
the red value to 255,
- 04:47 - 04:49
the green value to 255 and
- 04:49 - 04:51
the blue value to 0.
- 04:52 - 04:54
You can do this by clicking and dragging
- 04:54 - 04:56
or entering the values numerically.
- 04:57 - 04:58
Close the colour picker.
- 04:58 - 05:00
Now let's add some placeholder text
- 05:00 - 05:02
and replace the string New Text
- 05:02 - 05:04
with Count Text.
- 05:06 - 05:08
Currently the text element
- 05:08 - 05:10
is in the centre of the screen
- 05:10 - 05:12
because it's anchored to the
- 05:12 - 05:14
centre of it's parent, which is
- 05:14 - 05:16
in this case the canvas.
- 05:16 - 05:18
If is worth noting that the
- 05:18 - 05:20
transform component on UI elements
- 05:20 - 05:22
is different from that of
- 05:22 - 05:24
other game objects in Unity.
- 05:25 - 05:27
For UI elements the standard
- 05:27 - 05:29
transform has been replaced
- 05:29 - 05:31
with the rect transform,
- 05:31 - 05:33
which takes in to account
- 05:33 - 05:35
many specialised features
- 05:35 - 05:39
necessary for a versatile UI system
- 05:39 - 05:42
including anchoring and positioning.
- 05:43 - 05:45
For more information on the rect transform
- 05:45 - 05:47
please see the information linked below.
- 05:49 - 05:51
One of the easiest ways to move
- 05:51 - 05:53
the CountText element in to the
- 05:53 - 05:55
upper left is to anchor it
- 05:55 - 05:58
to the upper left corner of the canvas
- 05:58 - 06:00
rather than to it's centre.
- 06:01 - 06:03
When we re-anchor this text
- 06:03 - 06:05
element we also want
- 06:05 - 06:07
to set the pivot and the
- 06:07 - 06:10
position based on the new anchor.
- 06:10 - 06:12
To do this open the
- 06:12 - 06:15
Anchors And Presets menu
- 06:15 - 06:17
by clicking on the button displaying
- 06:17 - 06:19
the current anchor preset.
- 06:20 - 06:24
Hold down the Shift and Option keys on Mac,
- 06:24 - 06:28
or shift and Alt keys on Windows
- 06:28 - 06:33
and select the upper left preset by clicking on it.
- 06:35 - 06:36
That's done it.
- 06:36 - 06:38
Now it looks budged up against
- 06:38 - 06:40
the corner of the game view.
- 06:40 - 06:42
Let's give it some space between the
- 06:42 - 06:44
text and the edges of the screen.
- 06:45 - 06:47
As we're anchored to the upper left
- 06:47 - 06:49
corner of the canvas and we've
- 06:49 - 06:53
set our pivot to the upper left corner as well.
- 06:53 - 06:55
The easiest way to give the text
- 06:55 - 06:57
a little breathing room is to
- 06:57 - 07:00
change the rect transform's
- 07:00 - 07:02
position X and
- 07:02 - 07:04
position Y values.
- 07:05 - 07:08
In this case an X position of 10
- 07:08 - 07:11
and a Y position of -10
- 07:11 - 07:13
seem about right.
- 07:14 - 07:16
This gives us some room around it
- 07:16 - 07:19
yet it's still up and out of the way.
- 07:21 - 07:23
Now let's wire up
- 07:23 - 07:25
the UI text element to
- 07:25 - 07:28
display our count value.
- 07:30 - 07:32
Let's start by opening the
- 07:32 - 07:34
PlayerController script for editing.
- 07:40 - 07:42
Before we can code anything related to
- 07:42 - 07:44
any UI elements we need to tell
- 07:44 - 07:46
our script more about them.
- 07:46 - 07:48
The details about the UI
- 07:48 - 07:50
toolset are held in what's
- 07:50 - 07:52
called a namespace.
- 07:52 - 07:54
We need to use this namespace
- 07:54 - 07:56
just as we are using
- 07:56 - 08:00
UnityEngine and System.Collections.
- 08:00 - 08:03
So to do this, at the top of our script
- 08:03 - 08:09
we'll write using UnityEngine.UI
- 08:09 - 08:11
With this in place we can now
- 08:11 - 08:13
write our code.
- 08:13 - 08:16
First create a new public
- 08:16 - 08:19
text variable called CountText
- 08:20 - 08:22
to hold a reference to the
- 08:22 - 08:24
UI Text component
- 08:24 - 08:26
on our UI Text game object.
- 08:27 - 08:31
Type public Text countText
- 08:32 - 08:35
We need to set the starting value
- 08:35 - 08:39
of the UI text's Text property.
- 08:39 - 08:42
WE can do this in Start as well.
- 08:43 - 08:54
Write countText.text = "Count: " + count.ToString ()
- 08:55 - 08:57
And we need the parenthesis.
- 08:57 - 08:59
Now this line of code
- 08:59 - 09:01
must be written after the line
- 09:01 - 09:03
setting our count value
- 09:03 - 09:06
because Count must have some value
- 09:06 - 09:08
for us to set the text with.
- 09:09 - 09:11
Now we also need to update this text
- 09:11 - 09:13
property every time we pick up
- 09:13 - 09:15
a new collectable,
- 09:15 - 09:17
so in OnTriggerEnter
- 09:17 - 09:19
after we increment our count value
- 09:19 - 09:20
let's write again
- 09:20 - 09:28
countText.Text = "Count: " + count.ToString ()
- 09:29 - 09:32
We've now written the same line of code
- 09:32 - 09:34
twice in the same script.
- 09:35 - 09:37
This is generally bad form.
- 09:37 - 09:39
One way to made this a little more elegant
- 09:39 - 09:41
is to create a function that
- 09:41 - 09:43
does the work in one place
- 09:43 - 09:45
and we simply call this function
- 09:45 - 09:47
every time we need it.
- 09:48 - 09:50
Let's create a new function called
- 09:50 - 09:52
SetCountText.
- 09:52 - 09:56
Type void SetCountText
- 09:56 - 09:58
followed by an empty set of parenthesis
- 09:59 - 10:01
and an empty set of brackets.
- 10:02 - 10:04
Now let's move one
- 10:04 - 10:06
instance of this line of code
- 10:06 - 10:09
in to the function by cutting and pasting it.
- 10:17 - 10:19
And in it's place let's put
- 10:19 - 10:22
a line of code simply calling the function.
- 10:23 - 10:25
Finally let's replace the
- 10:25 - 10:28
other line with the function call as well.
- 10:30 - 10:32
Save and return to Unity.
- 10:36 - 10:38
Now we see our PlayerController script
- 10:38 - 10:40
has a new text property.
- 10:40 - 10:42
We can associate a reference to our
- 10:42 - 10:46
Count Text simply by dragging and dropping
- 10:46 - 10:49
the CountText game object
- 10:49 - 10:51
on to this slot.
- 10:51 - 10:53
Unity will find the text component
- 10:53 - 10:55
on the game object and correctly
- 10:55 - 10:57
associate the reference.
- 10:57 - 10:59
Let's save the scene and play.
- 11:06 - 11:09
Great, now not only do we collect
- 11:09 - 11:11
these objects but we can count them now.
- 11:13 - 11:15
Let's exit play mode.
- 11:15 - 11:17
We need to display a message
- 11:17 - 11:19
when we have collected all of the pickups.
- 11:20 - 11:22
To do this we will need another
- 11:22 - 11:24
UI text object.
- 11:24 - 11:28
Again, using the hierarchy's Create menu
- 11:28 - 11:31
make a new UI text game object.
- 11:32 - 11:34
Rename it WinText.
- 11:37 - 11:40
Note how the new UI text element is
- 11:40 - 11:43
automatically added to our canvas.
- 11:43 - 11:45
We want this text to display
- 11:45 - 11:48
in the centre of the game space
- 11:48 - 11:50
but up a little bit so it doesn't
- 11:50 - 11:52
cover the Player game object.
- 11:53 - 11:56
To do this let's adjust the rect transform's
- 11:56 - 11:59
position Y element as by default
- 11:59 - 12:01
this UI text element is anchored
- 12:01 - 12:03
to the centre of the canvas.
- 12:08 - 12:12
A value of about 75 feels good.
- 12:13 - 12:15
Let's also adjust the
- 12:15 - 12:17
paragraph alignment
- 12:18 - 12:20
in the text component.
- 12:21 - 12:23
Next let's adjust the colour
- 12:24 - 12:26
and set it to the same yellow
- 12:26 - 12:29
we used for our previous text element.
- 12:30 - 12:33
Let's make the text a little larger
- 12:33 - 12:36
by setting the font size to 24.
- 12:36 - 12:39
Finally let's add some place holder text
- 12:39 - 12:42
and replace the string New Text
- 12:42 - 12:44
with Win Text.
- 12:45 - 12:47
Let's save the scene and swap
- 12:47 - 12:49
back to our scripting editor.
- 12:52 - 12:54
We need to add a reference for
- 12:54 - 12:56
this UI text element.
- 12:56 - 12:58
Let's create a new public text
- 12:58 - 13:01
variable and call it WinText.
- 13:03 - 13:07
Type public Text winText.
- 13:09 - 13:11
Now let's set the starting value
- 13:11 - 13:14
for WinText's text property.
- 13:15 - 13:17
This is set to an empty string
- 13:17 - 13:19
or two double quote marks
- 13:19 - 13:21
with no content.
- 13:21 - 13:24
This text property will start empty
- 13:24 - 13:27
then in the SetCountText function
- 13:27 - 13:34
let's write if count is greater than or equal to 12,
- 13:35 - 13:37
which is the total number of objects we have
- 13:37 - 13:39
in the game to collect,
- 13:39 - 13:42
then our winText.Text
- 13:42 - 13:44
equals You Win!
- 13:45 - 13:53
Type if count is greater than or equal to 12
- 13:57 - 14:02
winText.Text = YouWin!
- 14:04 - 14:07
Let's save this script and return to Unity.
- 14:12 - 14:14
Again on our Player our player
- 14:14 - 14:17
controller has a new UI text property.
- 14:17 - 14:19
We can associate the component again
- 14:19 - 14:21
by dragging the WinText
- 14:21 - 14:24
game object in to the new slot.
- 14:25 - 14:27
Let's save the scene and play.
- 14:34 - 14:36
So we're picking up our game objects,
- 14:36 - 14:38
we're counting our collectables
- 14:38 - 14:40
and we win!
- 14:40 - 14:42
And as we can see when we have
- 14:42 - 14:46
collected 12 objects we display the You Win! text.
- 14:46 - 14:48
In the next and last assignment
- 14:48 - 14:50
of this series we will build
- 14:50 - 14:52
the game and deploy it
- 14:52 - 14:55
using a stand alone player.
Code snippet; //Store a reference to the UI Text component which will display the number of pickups collected. public Text winText; //Store a reference to the UI Text component which will display the 'You win' message. private Rigidbody2D rb2d; //Store a reference to the Rigidbody2D component required to use 2D Physics. private int count; //Integer to store the number of pickups collected so far. // Use this for initialization void Start() { //Get and store a reference to the Rigidbody2D component so that we can access it. rb2d = GetComponent<Rigidbody2D> (); //Initialize count to zero. count = 0; //Initialze winText to a blank string since we haven't won yet at beginning. winText.text = ""; //Call our SetCountText function which will update the text with the current value for count.")) //... then set the other object we just collided with to inactive. other.gameObject.SetActive(false); //Add one to the current value of our count variable. count = count + 1; //Update the currently displayed count by calling the SetCountText function. SetCountText (); } //This function updates the text displaying the number of objects we've collected and displays our victory message if we've collected all of them. void SetCountText() { //Set the text property of our our countText object to "Count: " followed by the number stored in our count variable. countText.text = "Count: " + count.ToString (); //Check if we've collected all 12 pickups. If we have... if (count >= 12) //... then set the text property of our winText object to "You win!" winText.text = "You win!"; } }
Related tutorials
- Introduction to 2D UFO Project (Lesson)
- Setting Up The Play Field (Lesson)
- Controlling the Player (Lesson)
- Adding Collision (Lesson)
- Following the Player with the Camera (Lesson)
- Creating Collectable Objects (Lesson) | https://unity3d.com/learn/tutorials/projects/2d-ufo-tutorial/counting-collectables-and-displaying-score?playlist=25844 | CC-MAIN-2019-04 | en | refinedweb |
In this post, I’ll show how to extend the routing logic in ASP.NET Web API, by creating a custom controller selector. Suppose that you want to version your web API by defining URIs like the following:
/api/v1/products/
/api/v2/products/
You might try to make this work by creating two different “Products” controllers, and placing them in separate namespaces:
namespace MyApp.Controllers.V1 { // Version 1 controller public class ProductsController : ApiController { } } namespace MyApp.Controllers.V2 { // Version 2 controller public class ProductsController : ApiController { } }
The problem with this approach is that Web API finds controllers by class name, ignoring the namespace. So there’s no way to make this work using the default routing logic. Fortunately, Web API makes it easy to change the default behavior.
The interface that Web API uses to select a controller is IHttpControllerSelector. You can read about the default implementation here. The important method on this interface is SelectController, which selects a controller for a given HttpRequestMessage.
First, you need to understand a little about the Web API routing process. Routing starts with a route template. When you create a Web API project, it adds a default route template:
“api/{controller}/{id}”
The parts in curly brackets are placeholders. Here is a URI that matches this template:
So in this example, the placeholders have these values:
- controller = products
- id = 1
The default IHttpControllerSelector uses the value of “controller” to find a controller with a matching name. In this example, “products” would match a controller class named ProductsController. (By convention, you need to add the “Controller” suffix to the class name.)
To make our namespace scenario work, we’ll use a route template like this:
“api/{namespace}/{controller}/{id}”
Here is a matching URI:
And here are the placeholder values:
- namespace = v1
- controller = products
- id = 1
Now we can use these values to find a matching controller. First, call GetRouteData to get an IHttpRouteData object from the request:
public HttpControllerDescriptor SelectController(HttpRequestMessage request) { IHttpRouteData routeData = request.GetRouteData(); if (routeData == null) { throw new HttpResponseException(HttpStatusCode.NotFound); } // ...
Use IHttpRouteData to look up the values of “namespace” and “controller”. The values are stored in a dictionary as object types. Here is a helper method that returns a route value as a type T:
private static T GetRouteVariable(IHttpRouteData routeData, string name) { object result = null; if (routeData.Values.TryGetValue(name, out result)) { return (T)result; } return default(T); }
Use this helper function to get the route values as strings:
string namespaceName = GetRouteVariable<string>(routeData, "namespace"); if (namespaceName == null) { throw new HttpResponseException(HttpStatusCode.NotFound); } string controllerName = GetRouteVariable<string>(routeData, "controller"); if (controllerName == null) { throw new HttpResponseException(HttpStatusCode.NotFound); }
Now look for a matching controller type. For example, given “namespace” = “v1” and “controller” = “products”, this would match a controller class with the fully qualified name
MyApp.Controllers.V1.ProductsController.
To get the list of controller types in the application, use the IHttpControllerTypeResolver interface:
IAssembliesResolver assembliesResolver = _configuration.Services.GetAssembliesResolver(); IHttpControllerTypeResolver controllersResolver = _configuration.Services.GetHttpControllerTypeResolver(); ICollection<Type> controllerTypes = controllersResolver.GetControllerTypes(assembliesResolver);
This code performs reflection on all of the assemblies in the app domain. To avoid doing this on every request, it’s a good idea to cache a dictionary of controller types, and use the dictionary for subsequent look ups.
The last step is to replace the default IHttpControllerSelector with our custom implementation, in the HttpConfiguration.Services collection:
config.Services.Replace(typeof(IHttpControllerSelector), new NamespaceHttpControllerSelector(config));
You can find the complete sample hosted on aspnet.codeplex.com.
In order to keep the code as simple as possible, the sample has a few limitations:
- It expects the route to contain a “namespace” variable. Otherwise, it returns an HTTP 404 error. You could modify the sample so that it falls back to the default IHttpControllerSector in this case.
- The sample always matches the value of “namespace” against the final segment of the namespace (i.e., the inner scope). So “v1” matches “MyApp.Controllers.V1” but not “MyApp.V1.Controllers”. You could change this behavior by modifying the code that constructs the dictionary of controller types. (See the
InitializeControllerDictionarymethod.)
Also, versioning by URI is not the only way to version a web API. See Howard Dierking’s blog post for more thoughts on this topic.
Join the conversationAdd Comment
Some time ago I created a library for this: damsteen.nl/…/implementing-versioning-in-asp.net-web-api
Note: I'm the author.
I think versioning in the URL is fine for short-term migrations, but it's not a good long-term strategy. It would be much better to see support in the Web API for content-type versioning (as described by Howard). Changes in the representation of an entity should not affect the URI – that would break the principles of REST.
Sebastian, I noticed the similarity. I am using your component by the way.
In the sample code hosted on codeplex.com, the code works if the webapi is annoted or using the default [HttpGet]. i have some web api that are annotated with [HttpPost], and it is not working.
in the request.GetRouteData().Values[0], i have a value of "MS_SubRoutes". so basically the web api method i am trying to hit, is something like:
[RoutePrefix("api/v2/Blah")]
public class Utility : ApiController
{
[HttpPost]
[Route("Car")]
[Authorize(Roles = AUTHORIZED_ROLES)]
public HttpResponseMessage Cars(Models model)
{
……
}
}
how would i handle this situation?
Have you managed to solve this issue?
Hi Mike,
how can this be applied to a .NET Backend Mobile service..?
I tried but I don't know how to handle the "home page" :/
Thank you 🙂
david
I have the same problem as Sam, I'd like to use use the NamespaceHttpControllerSelector with RouteAttrobutes. Has anyone managed to get this working?
@Tim:
if you are using attribute routing, you can take a look at the following sample for versioning using constraints:
aspnet.codeplex.com/…/latest
FYI regarding customizing controller selector:
blogs.msdn.com/…/customizing-web-api-controller-selector.aspx
In this approach if some action in all version of APIs are same, how we can manage these and prevent redundancy in code ( if we copy action in all version, when we find a bug in these action we have to update all of theme in all version)
I think my problem with versioning is that all of a sudden you have repetitive code / bloating. Lets say there is a USER class and between 1.0 and 1.1 versions there is one member added to that class. If I need to maintain two versions of controller classes at the same time, I will have to support two User classes in my code. If I can use version 1.0 dll to respond to a version 1.0 request and version 1.1 dll to respond to a version 1.1 request, that would be more appropriate.
@Seshu Alluvada & Behrooz
Pardon me if this is an utterly newbie question. In order to deal with the code redundancy, could you not upon development and release of v2 convert v1 to a parent class? Or is this not possible within the MVC architecture?
I tried using your sample code and used the config.Services.Replace(typeof(IHttpControllerSelector), new NamespaceHttpControllerSelector(config)) code but it seems like the ControllerSelector hasn't been set.
The full code is stackoverflow.com/…/web-api-2-selectcontroller-not-working
Should I set the new NamespaceHttpController in the Global.asax instead? | https://blogs.msdn.microsoft.com/webdev/2013/03/07/asp-net-web-api-using-namespaces-to-version-web-apis/ | CC-MAIN-2019-04 | en | refinedweb |
Forums › General › General Chat › The GetPositionList.srv file was not found.
Tagged: ROS_MASTER_URI
- AuthorPosts
There is a GetPositionList in niryo_one_msgs / srv.
When rpi_example_python_api.py is executed, the following error occurs.
niryo@niryo-desktop:~/catkin_ws/src/niryo_one_python_api/examples$ python rpi_example_python_api.py
Traceback (most recent call last):
File “rpi_example_python_api.py”, line 4, in <module>
from niryo_one_python_api.niryo_one_api import *
File “/home/niryo/catkin_ws/src/niryo_one_python_api/src/niryo_one_python_api/niryo_one_api.py”, line 34, in <module>
from niryo_one_msgs.srv import GetPositionList
ImportError: cannot import name GetPositionList
I want to know why an error occurs.
Edouard RenardKeymasterJuly 10, 2018 at 4:51 pmPost count: 133
I have two problems.
1. After catkin_make -j2, the following error occurred.
Unable to register with master node [http: // localhost: 11311]: master may not be running. Will keep trying.
Changing the localhost part did not change anything.
2. Robot’s LED does not change from red to blue.
I waited.
I have been waiting a day.
Is it related to the above error ??
Edouard RenardKeymasterJuly 12, 2018 at 3:15 pmPost count: 133
Could you explain the exact steps you followed before you got this error ?
Also, yes, for your question 2., the LED will not switch to blue, because it seems that the Niryo One ROS stack is not running in the first place.
Could you also try to flash a new microSD card with the 1.1.0 Rpi Image (you can download it here), and see if everything is working well ?
- AuthorPosts
You must be logged in to reply to this topic. | https://niryo.com/forums/topic/the-getpositionlist-srv-file-was-not-found/ | CC-MAIN-2019-04 | en | refinedweb |
In this tutorial, we will check how we can get humidity measurements from a DHT22 sensor, with the Arduino core running on the ESP32. To make the interaction with the sensor easier, I’m using a DFRobot DHT22 module which has all the additional electronics needed and exposes a wiring terminal that facilitates the connections.
Introduction
In this tutorial, we will check how we can get humidity measurements from a DHT22 sensor, with the Arduino core running on the ESP32.
The DHT22 is a temperature and humidity sensor and you can check on the previous tutorial how to get temperature measurements.
We will use this library to interact with the device. It is available for installation using the Arduino IDE libraries manager and the procedure to install it is detailed on the mentioned post.
For the schematic diagram needed to connect the ESP32 to the DHT22, please also consult the mentioned previous post.
To make the interaction with the sensor easier, I’m using a DFRobot DHT22 module which has all the additional electronics needed and exposes a wiring terminal that facilitates the connections.
The tests were performed using a DFRobot’s ESP32 module integrated in a ESP32 development board.
The code
The code for this tutorial will be really simple and it is pretty much similar to what we did to obtain temperature measurements.
The first thing we need to do is including the DHT library, so we have access to all the higher level functions that allow us to interact with the sensor without needing to worry about the single wire protocol it uses to exchange data with a microcontroller.
#include "DHTesp.h"
Then we need an object of class DHTesp, which exposes the methods needed to get both temperature and humidity measurements. In our specific case, we will use it to get humidity.
DHTesp dht;
Moving on to the Arduino setup, we will first open a serial connection to output the measurements and get them later using the Arduino IDE serial monitor.
Serial.begin(115200);
Besides that, we need to initialize the sensor interface and specify the ESP32 pin to which the sensor is connected. We do this by calling the setup method of the dht object we created before, passing as input the pin number. I’m using pin 27, but you can try with another.
Note that the setup function can take as optional argument an enumerated value specifying which sensor we are using (the library supports multiple ones). Nonetheless, if we don’t pass this argument, the function will try to automatically detect the sensor used.
dht.setup(27);
Moving on to the Arduino loop, we will periodically get the humidity measurements and print them to the serial interface.
To get a humidity measurement, we simply need to call the getHumidity method on our dht object. This function call takes no arguments and returns the humidity in percentage, as a float.
float humidity = dht.getHumidity();
We will then print the value to the serial interface and make a 10 seconds delay before getting the next measurement.
Serial.print("Humidity: "); Serial.println(humidity); delay(10000);
The final source code can be seen below.
#include "DHTesp.h" DHTesp dht; void setup() { Serial.begin(115200); dht.setup(27); } void loop() { float humidity = dht.getHumidity(); Serial.print("Humidity: "); Serial.println(humidity); delay(10000); }
Testing the code
To test the code, simply compile it and upload it to your device after having all the components wired accordingly to the schematic from the previous post.
Once the procedure finishes, open the Arduino IDE serial monitor. There, you should have an output similar to figure 1, which shows the measurements getting printed.
Figure 1 – Output of the program.
7 Replies to “ESP32 Arduino: Getting humidity measurements from a DHT22 sensor”
So I copied this sketch (and the temperature sketch) and tried to compile it and received these error messages (in both cases):
Arduino: 1.8.5 (Mac OS X), Board: “ESP32 Dev Module, Disabled, Default, QIO, 80MHz, 4MB (32Mb), 115200, None”
/Users/michaelreid/Documents/Arduino/DHTesp_tester/DHTesp_tester.ino: In function ‘void setup()’:
DHTesp_tester:9: error: call of overloaded ‘setup(int)’ is ambiguous
dht.setup(25);
^
In file included from /Users/michaelreid/Documents/Arduino/DHTesp_tester/DHTesp_tester.ino:1:0:
/Users/michaelreid/Documents/Arduino/libraries/DHT_sensor_library_for_ESPx/DHTesp.h:125:8: note: candidate: void DHTesp::setup(uint8_t)
void setup(uint8_t dhtPin) __attribute__((deprecated));
^
/Users/michaelreid/Documents/Arduino/libraries/DHT_sensor_library_for_ESPx/DHTesp.h:126:8: note: candidate: void DHTesp::setup(uint8_t, DHTesp::DHT_MODEL_t)
void setup(uint8_t pin, DHT_MODEL_t model=AUTO_DETECT);
^
exit status 1
call of overloaded ‘setup(int)’ is ambiguous
Any suggestions for resolving this error?
LikeLiked by 1 person
Hi!
I’m not sure what is causing the problem. In my environment it compiles fine.
My suggestion is to open an issue on the GitHub page of the libraries to check if someone can help.
Let us know if you manage to solve the problem 🙂
Best regards,
Nuno Santos
I had the same error. I fixed it by avoiding to use the AUTO_DETECT parameter, but specifying the used sensor in the dht.setup() method.
Try to pass the used sensor like this: dht.setup(25, DHTesp::DHT22). Or use one of these (depending on your used sensor):
DHT11,
DHT22,
AM2302, // Packaged DHT22
RHT03 // Equivalent to DHT22
Best regards
Richard
LikeLiked by 1 person
Dear Mr. Richard
thanks for information, just i have one question for you if i have DHT21 what i can do?
because when i compile i have error “DHT21′ is not a member of ‘DHTesp”
and for DHT22 it compile whith out any problem .
thanks for your help.
best regards. | https://techtutorialsx.com/2018/04/20/esp32-arduino-getting-humidity-measurements-from-a-dht22-sensor/ | CC-MAIN-2019-04 | en | refinedweb |
I have an MVC program that is uploading data from a .csv file to a SQL database. I am now trying to display the data uploaded with a WebGrid table. All the examples that I have seen demonstrate only displaying one complete table at a time.
I am new to using MVC and WebGrid, so first of all I was wondering if this was the right approach to this problem, and secondly, if this approach is the best route, how will I have to set up the Views to display data from 3 different tables. Will it require 3 different controllers & 3 different views, and will I have to have multiple Data Models? Any input would be really appreciated.
Here are some MVC best practices:
Your view model should be what you want to display on the view
This seems obvious, but at first it is not. Start by creating your view model, and when you're doing that assume that you know nothing about your data store.
Your view model should not know/care how your data is stored
The view's job is just to display some data, that is all. Your view model should be something like this:
public class ViewModel123 { public int ID {get;set;} public string foo {get;set;}//this may come from table A or table B, it does not matter public string bar {get;set;}//this may come from table A or table B, it does not matter }
Create a data access layer
This is the layer that gets data out of the database from N number of tables. Assuming you're using EF or linq-to-sql the method to get the data would look something like this:
public IEnumerable<ViewModel123> GetData() { return DatabaseHande.SomeThing.Select(x=> new ViewModel123 { ID = x.id, Foo = x.Foo, Bar = x.LinkTable.Bar }); }
Have your controller call your data access layer and return the view model
You controller can now call the data access layer and return the view model to the view. More pseudo code:
public ActionResult List() { var viewModelDataRows = _dataAccessClass.GetData(); return View(viewModelDataRows); } | http://www.dlxedu.com/askdetail/3/0e20c366dfcfa5bab3d4479d4f02a2ec.html | CC-MAIN-2019-04 | en | refinedweb |
Graphite is a great graphing system with a very simple API for importing data, and a lot of support from other tools.
There are two parts to a Graphite installation:
- “Carbon” which is the process that handles receiving and storing data
- “graphite-web” which provides a front-end and HTTP API
Graphite-web is pretty complex to install however – especially if you have minimal python knowledge – with a number of dependencies (e.g. django, MySQL) and associated configuration. It’s also not the most elegant application to use.
As a result, a number of other front ends have been developed, one of which is the excellent Grafana. Using alternative front-ends means you only really need the HTTP API from Graphite, and not the whole web application (with django etc), but the main Graphite project doesn’t support installing just this element. There is however a project on Github that aims to provide just this – graphite-api.
This blog post will cover how to install carbon, graphite-api, and finally Grafana v1
Installing Carbon
Carbon can be install using apt:
apt-get install graphite-carbon
Once installed you should be able to start it with the standard “service carbon-cache start”. This will silently fail however, because for some inexplicable reason, the package is configured by default to be disabled, and the init script only reports this if it is in “verbose” mode, which again by default it isn’t. So the default install will just silently fail to do anything!
To fix this, edit /etc/default/graphite-carbon and change the line below to true:
CARBON_CACHE_ENABLED=true
Then “service carbon-cache start” should start the service.
Check carbon is running with the following python script:
import time import socket sock = socket.socket() sock.connect( ("localhost", 2003) ) sock.send("test.metric 50 %d \n" % time.time()) sock.close()
If this returns a “socket.error”, check if carbon is running with “ps -ef | grep carbon”, and check for errors in /var/log/carbon/console.log
Installing graphite-api
Follow the “Python” instructions at
Install required dependencies
apt-get install python python-pip build-essential python-dev libcairo2-dev libffi-dev
And then graphite-api
pip install graphite-api
This will download and compile graphite-api. If you have cryptic errors about “gcc”, check you have installed “build-essential” and all the required “*-dev” libraries. Depending on your system, you may also need to install other dependencies, but “apt” should take care of this for you.
Configure carbon
Once installed you need to create the configuration file. Graphite-api will run without a config file, but the default file locations are different to what graphite-carbon used so we need to manually specify them.
Create “/etc/graphite-api.yml” with the following contents.
search_index: /var/lib/graphite/index finders: - graphite_api.finders.whisper.WhisperFinder functions: - graphite_api.functions.SeriesFunctions - graphite_api.functions.PieFunctions whisper: directories: - /var/lib/graphite/whisper carbon: hosts: - 127.0.0.1:7002 timeout: 1 retry_delay: 15 carbon_prefix: carbon replication_factor: 1
If you want to change the data locations, ensure you edit “/etc/carbon/carbon.conf” as well to match.
Deployment
Graphite-api doesn’t install a daemon like carbon, it needs to be run inside a web server. There are several options documented on the website. The simplest (although not most performant) is probably to use Apache and mod_wsgi
apt-get install libapache2-mod-wsgi
Then just follow the documented instructions.
Create /var/www/wsgi-scripts/graphite-api.wsgi
# /var/www/wsgi-scripts/graphite-api.wsgi from graphite_api.app import app as application
And /etc/apache2/sites-available/graphite.conf
# /etc/apache2/sites-available/graphite.conf LoadModule wsgi_module modules/mod_wsgi.so WSGISocketPrefix /var/run/wsgi Listen 8013 <VirtualHost *:8013> WSGIDaemonProcess graphite-api processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120 WSGIProcessGroup graphite-api WSGIApplicationGroup %{GLOBAL} WSGIImportScript /var/www/wsgi-scripts/graphite-api.wsgi process-group=graphite-api application-group=%{GLOBAL} WSGIScriptAlias / /var/www/wsgi-scripts/graphite-api.wsgi <Directory /var/www/wsgi-scripts/> Order deny,allow Allow from all </Directory> </VirtualHost>
Then symlink this into /etc/apache2/sites-enabled/
ln -s ../sites-available/graphite.conf .
Finally restarting Apache:
service apache2 restart
This should start graphite-api on port 8013.
You can check this by browsing to http://<IP_OF_PI >:8013/render?target=test.metric
This should return a fairly dull graph showing the data entered using the basic test python script above. If you get a image back that says “No Data”, check you have run the test python above successfully, and that your data paths in /etc/carbon/carbon.conf and “/etc/graphite-api.yml” match.
Any errors will be logged into the standard Apache error log at /var/log/apache2/error.log
Installing Grafana
The final step is to install the “Grafana” frontend. The original Grafana is a pure HTML5 application that connected directly to the graphite API and didn’t require anything other than a webserver to host the pages. Grafana 2 has now been released, which as well as connecting to Graphite, also provides it’s own backend that is written in Go.
There aren’t prebuilt packages of Grafana 2 available for the Raspberry Pi, and building it from source would be quite a bit of time and hassle (if it’s even possible), so I’d recommend sticking to Grafana 1. The main limitation of Grafana 1 is being unable to directly save dashboards from the GUI. To save a dashboard, you will need to copy the JSON for it from the GUI, and save it manually as a file on the Pi to “/var/www/grafana/app/dashboards/”
Installation
- Download the latest 1.x release from
- Unzip this into /var/www/grafana
- copy “config.sample.js” to “config.js” and edit the datasources section to point at your graphite instance above. This is likely to be
- Open a browser and point it at: and you should get the Grafana UI.
- If you don’t get this, check your Javascript console log for errors or typos in config.js file
Explaining how to use Grafana is out of scope of this blog post, but have fun graphing all your r-pi stats! | https://markinbristol.wordpress.com/2015/09/20/setting-up-graphite-api-grafana-on-a-raspberry-pi/ | CC-MAIN-2019-04 | en | refinedweb |
I hate waking up in winter with an alarm when everything is still dark and gloomy, and would much prefer to wake up more naturally with light. You can buy various “daylight alarms”, but they are just more clutter to have in the room, and it felt unnecessary to buy something when the room already has a perfectly good light hanging from the ceiling. I just needed a way to control it.
There are various WiFi enabled light bulbs around, but they all have the same basic flaw, that if the wall switch is turned off, no wifi in the world is going to turn the bulb on again. This means you would need to always use a phone/remote to control the light, rather than being able to use a normal switch as well.
Eventually I came across “LightwaveRF” units, which replace the switch with a dimmer, and then you use a normal dimmable bulb. The switches are about £30, but to connect it to a network you also need their wifi link, which is £50. This would push the price up to £80, which isn’t too crazy compared to the price of some wifi bulbs, but I wanted to do it cheaper than this, and learn something about using the GPIO pins on the Pi as well.
Fortunately the RF signal the panels use is a standard 433Mhz, and you can get transmitters for this frequency for the huge cost of £1.
All I needed now was to find out exactly what signal to transmit to control the panels from the Pi. Fortunately all the hard work has been done by someone else: This github project provides C libraries for the Arduino and Pi to transmit and receive using the LightwaveRF protocol. It also provides python bindings which is perfect.
Hardware
Obviously first replace your existing light switch with the Lightwave one. This was a bit of hassle because it’s deeper than a normal panel, so you might need to excavate the wall a bit to get it to fit.
Then connect the 5v (vcc), data and ground pins to the Pi, noting which pin on the Pi you connect the data to. If you’re not sure which pins on the Pi are which, refer to this website.
Pigpio
LightwaveRF has a dependency on “pigpio” which is a C library used to control the GPIO pins on the Pi. Follow the pigpio instructions to download and install this. If you get errors when running ‘make’ to build this, check you have the necessary python packages:
sudo apt-get install build-essential
You should be able to install any other missing packages using ‘apt’ as well.
This will install the pigpio C libraries, a daemon – ‘pigpiod’ – that runs in the background, and a python library that can be ‘import’ed into scripts.
Once installed, start the daemon by running: ‘pigpiod’. If it starts OK it will just silently return.
LighwaveRF
Create a location somewhere on your pi, and copy the ‘lwrf.py‘ file from the github project into it.
Then create a test file with the below contents in the same location:
import sys import pigpio import lightwaverf.lwrf # This is a simple test class for the lwrf and pigpiod programs. # The GPIO pin on the Pi you've connected the transmitter to. # You probably need to change this! gpio_pin = 7 # How often to repeat the signal, 3 seems to be OK. repeat = 3 # An ID that must be unique for each dimmer. id = 1 pi = pigpio.pi() # Connect to GPIO daemon. tx = lightwaverf.lwrf.tx(pi, gpio_pin) # this should be between 0 and 32 value = int(sys.argv[1]) if (value == 0): tx_val = 64 # according to the LightwaveRF docs, when turning off, this should be 64. c = 0 # "command" setting i.e. on/off else: tx_val = value + 128 c = 1 a = tx_val >> 4 # first 4 bits b = tx_val % 16 # last 4 bits data = [a, b, 0, c, 15, id, 0, 0, 0, 0] tx.put(data, repeat) print("Sent " + str(value)) tx.cancel(); pi.stop();
Edit the file with the ‘gpio_pin’ you connected the transmitter to, the other values can be left as they are.
Test this runs OK this with python, supplying an example brightness:
python test.py 10 Sent 10
If you get errors, check that that the pigpiod daemon is running.
Before it will actually do anything, you need to pair the transmitter with the panel. LightwaveRF panels don’t have their own unique addresses, instead they need to be given an ID to respond to. Each panel can remember up to 6 IDs and they will then respond to any signals transmitted with that ID.
To put the panels into “learning” mode, press and hold both panel buttons until the orange and blue lights start flashing alternately. This “learning” mode lasts for about 15sec, so when the lights are still flashing, run the script above again. The blue light only should then flash to indicate it has paired successfully. Refer to the LightwaveRF dimmer manual for more details.
Now running the python script again (with an argument between 0 and 32) should actually control the light!
Of course having to boot a laptop, ssh into a Pi and run some python is somewhat inconvenient just to turn a light on. I’ve written a very simple website that can be used to control the light. | https://markinbristol.wordpress.com/2015/12/ | CC-MAIN-2019-04 | en | refinedweb |
Hey there everybody :D so I have a problem with this code.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class AnimationPlay : MonoBehaviour {
private Animator animator;
private Vector3 oldPos,newPos;
void Start()
{
oldPos = GameObject.Find ("FPSController").transform.position;
animator = gameObject.GetComponent<Animator> ();
}
void Update ()
{
newPos = GameObject.Find ("FPSController").transform.position;
if (newPos != oldPos)
{
animator.SetBool ("Moving", true);
}
else if (newPos == oldPos)
{
animator.SetBool ("Moving", false);
}
oldPos = GameObject.Find ("FPSController").transform.position;
}
}
I'm trying to detect if the game object "FPSController" is moving by using transform.position and see if it's position has changed from the last time and if it is moving, I will set the Boolean "Moving" true which I created in an animator for game object "Player" (("Player" is a child of "FPSController")) and then it should play walking animation that i made in blender. when my character moves, the Boolean keeps flickering true and false and also the walking animation gets too long to play like 6 or 7 seconds after player moves. I have no idea what is causing this, cuz in theory it should work... in the picture "FPSController" is moving, the "Moving" Boolean is flickering, and the Standing animation is playing instead of walking animatin...
am i missing something? and thanks for your time :)
You could use the difference between the two states and then magnitude it to get the velocity. You can then set the walking animation while the velocity is greater than a threshold (eg 0.05). But it is a bit of a hack. Why dont you directly trigger the animation from within the FPSController?
i tried using the velocity and it didn't change the result and about triggering the animation in FPScontroller, you mean like animator.play()?!
Similar to your "Moving" boolean you can create a Trigger. Then you can assign this trigger as a transition condition from Standing to Walking. You force this transition by calling Animation.SetTrigger() in the FPSController, e.g. directly when you apply force/motion to your character.
Try turning off "has exist.
2 problems with new animation / animator,2 problems with animation
0
Answers
Simple Animation and Scripting Problem.
1
Answer
2D Movement and getting the mouse position
0
Answers
Wandering AI "ignores" speed
2
Answers
How to check if animator is in crossfade
1
Answer | https://answers.unity.com/questions/1588327/having-problem-with-detecting-movement.html | CC-MAIN-2019-04 | en | refinedweb |
docopt-godocopt-go
An implementation of docopt in the Go programming language.
docopt helps you create beautiful command-line interfaces easily:
package main import ( "fmt" "github.com/docopt/docopt-go" ) func main() { usage := .` arguments, _ := docopt.ParseDoc(usage) fmt.Println(arguments) }
docopt parses command-line arguments based on a help message. Don't write parser code: a good help message already has all the necessary information in it.
InstallationInstallation
⚠ Use the alias "docopt-go". To use docopt in your Go code:
import "github.com/docopt/docopt-go"
To install docopt in your
$GOPATH:
$ go get github.com/docopt/docopt-go
APIAPI
Given a conventional command-line help message, docopt processes the arguments. See for a description of the help message format.
This package exposes three different APIs, depending on the level of control required. The first, simplest way to parse your docopt usage is to just call:
docopt.ParseDoc(usage)
This will use
os.Args[1:] as the argv slice, and use the default parser options. If you want to provide your own version string and args, then use:
docopt.ParseArgs(usage, argv, "1.2.3")
If the last parameter (version) is a non-empty string, it will be printed when
--version is given in the argv slice. Finally, we can instantiate our own
docopt.Parser which gives us control over how things like help messages are printed and whether to exit after displaying usage messages, etc.
parser := &docopt.Parser{ HelpHandler: docopt.PrintHelpOnly, OptionsFirst: true, } opts, err := parser.ParseArgs(usage, argv, "")
In particular, setting your own custom
HelpHandler function makes unit testing your own docs with example command line invocations much more enjoyable.
All three of these return a map of option names to the values parsed from argv, and an error or nil. You can get the values using the helpers, or just treat it as a regular map:
flag, _ := opts.Bool("--flag") secs, _ := opts.Int("<seconds>")
Additionally, you can
Bind these to a struct, assigning option values to the exported fields of that struct, all at once.
var config struct { Command string `docopt:"<cmd>"` Tries int `docopt:"-n"` Force bool // Gets the value of --force } opts.Bind(&config)
More documentation is available at godoc.org.
Unit TestingUnit Testing
Unit testing your own usage docs is recommended, so you can be sure that for a given command line invocation, the expected options are set. An example of how to do this is in the examples folder.
TestsTests
All tests from the Python version are implemented and passing at Travis CI. New language-agnostic tests have been added to test_golang.docopt.
To run tests for docopt-go, use
go test. | https://go.ctolib.com/docopt-go.html | CC-MAIN-2019-04 | en | refinedweb |
Note: as of the time of writing, XCode has been installed on one of the Macs in 213. Hopefully it will be installed on both, soon.
First, check to see if your computer already has XCode installed. Open up the Terminal application, and type
gcc -v
If you see a message like
-bash: gcc: command not found
then you will need to install XCode. There are two ways to install the software:
Once you install the software, typying gcc -v should give you something like this:
Using built-in specs. Target: i686-apple-darwin10 Configured with: /var/tmp/gcc/gcc-5664~105)
You can use the XCode application as a text editor, but I reccomend starting out with something a little smaller. Some of our students have used Aquamacs for Verilog and liked it well enough.
Create a folder for your programs in your Documents folder or
on your Desktop. Let's use
#include <stdio.h> int main(int argc, char** argv) { printf("Hello, world!\n"); return 0; }
Now it's time to compile your program. Open up the Terminal application and type in (substituting whatever path you chose to put the source file in):
cd ~/Desktop. | http://www.swarthmore.edu/NatSci/mzucker1/e15/c-instructions-mac.html | CC-MAIN-2018-05 | en | refinedweb |
Homework 3
Due by 11:59pm on Friday, 2/26
Instructions
Download hw03.zip. Inside the archive, you will find a file called hw03.py, along with a copy of the OK autograder.
Submission: When you are done, submit with python3 ok --submit. You may submit more than once before the deadline; only the final submission will be scored. See Lab 0 for instructions on submitting assignments.
Using OK: If you have any questions about using OK, please refer to this guide.
Readings: You might find the following references useful:
Required questions
Sequences
The following three problems were optional in lab. If you haven't already done so, complete them (otherwise, copy your result from lab).
Question 1: Flatten
Mergesort
Question
Trees
The following problems use the same tree data abstraction as lecture, but for brevity, we've renamed make_tree as tree.

The code you write should not apply the [] operation to trees directly; that's an abstraction barrier violation.

The print_tree function is provided for convenience:
##############################################################
# An alternative implementation of the tree data abstraction #
##############################################################

def tree(label, children=[]):
    for branch in children:
        assert is_tree(branch), 'children must be trees'
    return (label, children)

def label(tree):
    return tree[0]

def children(tree):
    return tree[1]

def is_tree(tree):
    if type(tree) is not tuple or len(tree) != 2 \
            or (type(tree[1]) is not list and type(tree[1]) is not tuple):
        return False
    for branch in children(tree):
        if not is_tree(branch):
            return False
    return True

def is_leaf(tree):
    return not children(tree)

def print_tree(t, indent=0):
    """Print a representation of this tree in which each node is
    indented by two spaces times its depth from the root.

    >>> print_tree(tree(1))
    1
    >>> print_tree(tree(1, [tree(2)]))
    1
      2
    >>> numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
    >>> print_tree(numbers)
    1
      2
      3
        4
        5
      6
        7
    """
    print('  ' * indent + str(label(t)))
    for child in children(t):
        print_tree(child, indent + 1)
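Because every access goes through the interface functions, client code never needs to know that trees happen to be tuples. As a quick, self-contained illustration, here is the numbers tree from the doctest inspected using only label, children, and is_leaf. A minimal copy of the definitions above is repeated (with validation omitted) so the snippet runs alone, and count_leaves is our own example helper, not part of the assignment:

```python
# Minimal copy of the abstraction above (validation omitted) so this
# sketch is runnable on its own.
def tree(label, children=[]):
    return (label, children)

def label(t):
    return t[0]

def children(t):
    return t[1]

def is_leaf(t):
    return not children(t)

# Count the leaves of a tree using only the interface functions,
# never indexing into the underlying tuple directly.
def count_leaves(t):
    if is_leaf(t):
        return 1
    return sum(count_leaves(child) for child in children(t))

numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
print(label(numbers))         # 1
print(count_leaves(numbers))  # 4  (the leaves are 2, 4, 5, and 7)
```

If the representation later changed (say, to a class), count_leaves would keep working unmodified, which is the point of the abstraction barrier.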
Simplifying Expressions
Question 7
In lecture, you saw that one use of trees is in representing expressions
(such as arithmetic expressions). So, for example, the expression
2 * (3 + x) can be represented as the tree

        *
       / \
      2   +
         / \
        3   x
That is, each operand is a child of the operator that applies to it.
In lecture, we looked at evaluating an expression that contains only numbers and operators. For this problem, we'll work at simplifying an expression that may contain variables without necessarily evaluating it. For example, 2 * (x + 0) + y * 0 could be simplified to 2 * x.
For this problem, the only operators are *, +, and - (as strings), and the labels of leaves will either be numbers or strings containing variable names. Thus, our first example would be represented with

tree('*', [tree(2), tree('+', [tree(3), tree('x')])])
To help you, we've defined a few useful things that may come in handy:
# Alternative names of parts of an expression tree.

def left_opnd(expr):
    return children(expr)[0]

def right_opnd(expr):
    return children(expr)[1]

def oper(expr):
    return label(expr)

# Useful constants:
ZERO = tree('0')
ONE = tree('1')

def same_expr(expr0, expr1):
    """Return true iff expression trees EXPR0 and EXPR1 are identical."""
    if oper(expr0) != oper(expr1):
        return False
    elif is_leaf(expr0):
        return True
    else:
        return same_expr(left_opnd(expr0), left_opnd(expr1)) and \
               same_expr(right_opnd(expr0), right_opnd(expr1))

def postfix_to_expr(postfix_expr):
    """Return an expression tree equivalent to POSTFIX_EXPR, a string in
    postfix ("reverse Polish") notation.  In postfix, one writes
    E1 OP E2 (where E1 and E2 are expressions and OP is an operator) as
    E1' E2' OP, where E1' and E2' are the postfix versions of E1 and E2.
    For example, '2*(3+x)' is written '2 3 x + *' and '2*3+x' is
    '2 3 * x +'.

    >>> print_tree(postfix_to_expr("2 3 x + *"))
    *
      2
      +
        3
        x
    """

def expr_to_infix(expr):
    """A string containing a standard infix denotation of the
    expression tree EXPR"""
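The bodies of postfix_to_expr and expr_to_infix are not shown above (they are presumably provided in hw03.py). If you are curious how they might be written, here is one possible stack-based sketch. Everything in it, including the choice to keep leaf labels as strings, is our own reconstruction, not necessarily the staff implementation; the ADT pieces are repeated so the snippet runs on its own:

```python
# Repeated ADT pieces (validation omitted) so the sketch is self-contained.
def tree(label, children=[]):
    return (label, children)

def label(t): return t[0]
def children(t): return t[1]
def is_leaf(t): return not children(t)

def left_opnd(expr): return children(expr)[0]
def right_opnd(expr): return children(expr)[1]
def oper(expr): return label(expr)

def postfix_to_expr(postfix_expr):
    """One way to parse postfix: push each operand as a leaf; on seeing
    an operator, pop the top two subtrees (right operand first, since it
    was pushed last) and push the combined tree."""
    stack = []
    for token in postfix_expr.split():
        if token in ('+', '-', '*'):
            right = stack.pop()
            left = stack.pop()
            stack.append(tree(token, [left, right]))
        else:
            stack.append(tree(token))  # leaves keep their string form here
    return stack.pop()

def expr_to_infix(expr):
    """Fully parenthesized infix form, matching the style of the
    doctests in simplify below."""
    if is_leaf(expr):
        return str(label(expr))
    return '(%s %s %s)' % (expr_to_infix(left_opnd(expr)), oper(expr),
                           expr_to_infix(right_opnd(expr)))

print(expr_to_infix(postfix_to_expr("2 3 x + *")))  # (2 * (3 + x))
```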
Implement the function simplify on these trees. Given an expression tree expr, this function returns a new expression tree, simplified from expr by applying the following rules:
- the expressions E * 0 and 0 * E, where E here can be any expression tree, are replaced by 0.
- the expressions E * 1 and 1 * E are replaced by E.
- the expressions E + 0, 0 + E, and E - 0 are replaced by E.
- the expression E - E (where the two operands are identical trees) is replaced by 0.
These simplifications may cause a cascade, as in y * (x - (0 + x)), which simplifies to y * (x - x), then to y * 0, and then to 0. In order for that to work, you must be careful to simplify operands before simplifying the whole expression.
def simplify(expr):
    """EXPR must be an expression tree involving the operators
    '+', '*', and '-' in inner nodes; numbers and strings (standing for
    variable names) in leaves.  Returns an equivalent, simplified
    version of EXPR.

    >>> def simp(postfix_expr):
    ...     return expr_to_infix(simplify(postfix_to_expr(postfix_expr)))
    >>> simp("x y + 0 *")
    '0'
    >>> simp("0 x y + *")
    '0'
    >>> simp("x y + 0 +")
    '(x + y)'
    >>> simp("0 x y + +")
    '(x + y)'
    >>> simp("x y + 1 *")
    '(x + y)'
    >>> simp("1 x y + *")
    '(x + y)'
    >>> simp("x y + x y + -")
    '0'
    >>> simp("x y y - + x - a b * *")
    '0'
    >>> simp("x y 3 * -")
    '(x - (y * 3))'
    >>> simp("x y 0 + 3 * -")
    '(x - (y * 3))'
    """
    "*** YOUR CODE HERE ***"
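If you are studying on your own, here is one possible way to approach simplify — a hedged sketch, not the official solution. It repeats a minimal version of the tree abstraction so it runs stand-alone, and it compares operand trees directly with ==, which works here because trees are just nested tuples and lists:

```python
# Minimal tree abstraction so the sketch runs stand-alone.
def tree(label, children=[]):
    return (label, children)

def label(t): return t[0]
def children(t): return t[1]
def is_leaf(t): return not children(t)

def oper(expr): return label(expr)
def left_opnd(expr): return children(expr)[0]
def right_opnd(expr): return children(expr)[1]

ZERO, ONE = tree('0'), tree('1')

def is_zero(e): return is_leaf(e) and str(label(e)) == '0'
def is_one(e): return is_leaf(e) and str(label(e)) == '1'

def simplify(expr):
    if is_leaf(expr):
        return expr
    # Simplify operands first so cascades like y * (x - (0 + x)) work.
    left = simplify(left_opnd(expr))
    right = simplify(right_opnd(expr))
    op = oper(expr)
    if op == '*':
        if is_zero(left) or is_zero(right):
            return ZERO
        if is_one(left):
            return right
        if is_one(right):
            return left
    elif op == '+':
        if is_zero(left):
            return right
        if is_zero(right):
            return left
    elif op == '-':
        if is_zero(right):
            return left
        if left == right:   # identical trees; tuples/lists compare element-wise
            return ZERO
    return tree(op, [left, right])
```

The key design point matches the hint in the problem statement: recurse on the operands before testing any rule, so that a rule can fire on already-simplified children.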
Use OK to test your code:
python3 ok -q simplify
Extra questions
Extra questions are not worth extra credit and are entirely optional. They are designed to challenge you to think creatively!
Question 8
The well-known Eight Queens Problem is to place eight chess queens on an 8x8 chessboard in such a way that none of them attack any of the others. Queens in chess can move (and attack) any number of squares horizontally, vertically, or diagonally. Your problem is to complete the place_queens function to find such a configuration of N queens for a board of any size NxN (if the configuration exists). This function returns a list containing, for each row, the position of the queen in that row (the conditions of the problem guarantee that there must be one queen in each row), or None if no such configuration exists.
def place_queens(size):
    """Return a list, p, of length SIZE in which p[r] is the column in
    which to place a queen in row r (0 <= r < SIZE) such that no two
    queens are attacking each other.  Return None if there is no such
    configuration.

    >>> place_queens(2) == None
    True
    >>> place_queens(3) == None
    True
    >>> check_board(4, place_queens(4))
    True
    >>> check_board(8, place_queens(8))
    True
    >>> check_board(14, place_queens(14))
    True
    """
    "*** YOUR CODE HERE ***"

def check_board(n, cols):
    """Check that COLS is a valid solution to the N-queens problem
    (N == len(COLS)).  COLS has the format returned by place_queens."""
    if cols is None:
        return False
    if n != len(cols):
        return False
    if set(cols) != set(range(n)):
        return False
    if n != len(set([r + c for r, c in enumerate(cols)])):
        return False
    if n != len(set([r - c for r, c in enumerate(cols)])):
        return False
    return True

def print_board(cols):
    """Print a board, COLS, returned by place_queens (as a list of column
    positions of queens for each row)."""
    if cols is None:
        print("No solution")
    else:
        for c in cols:
            print("- " * c + "Q " + "- " * (len(cols) - c - 1))

"""Example:
> print_board(place_queens(5))
Q - - - -
- - Q - -
- - - - Q
- Q - - -
- - - Q -
"""
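If you get stuck on this extra question, the classic approach is row-by-row backtracking. The sketch below is one possible implementation (not necessarily the intended solution): it tracks occupied columns and the two diagonal directions in sets and recurses one row at a time:

```python
def place_queens(size):
    """Backtracking sketch: fill rows 0..size-1, one queen per row.

    A queen at (row, col) attacks along its column (col), its
    "rising" diagonal (row + col is constant), and its "falling"
    diagonal (row - col is constant), so three sets suffice to
    detect conflicts in O(1).
    """
    def solve(row, cols, diag1, diag2, placement):
        if row == size:
            return placement            # every row has a queen
        for col in range(size):
            if col in cols or (row + col) in diag1 or (row - col) in diag2:
                continue                # attacked by an earlier queen
            result = solve(row + 1,
                           cols | {col},
                           diag1 | {row + col},
                           diag2 | {row - col},
                           placement + [col])
            if result is not None:
                return result
        return None                     # no column works here; backtrack
    return solve(0, set(), set(), set(), [])
```

Because the search abandons a branch as soon as a conflict appears, this handles boards well beyond 8x8 (the doctest's 14x14 case included) without trouble.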
Use OK to test your code:
python3 ok -q place_queens

| https://inst.eecs.berkeley.edu/~cs61a/sp16/hw/hw03/ | CC-MAIN-2018-05 | en | refinedweb |
Introduction
In this post, I’m going to talk about basic dependency injection and mocking a method that is used to access hardware. The method I’ll be mocking is the System.IO.Directory.Exists().
Mocking Methods
One of the biggest headaches with unit testing is that you have to make sure you mock any objects that your method under test is calling. Otherwise your test results could be dependent on something you’re not really testing. As an example for this blog post, I will show how to apply unit tests to this very simple program:
class Program
{
    static void Main(string[] args)
    {
        var myObject = new MyClass();
        Console.WriteLine(myObject.MyMethod());
        Console.ReadKey();
    }
}
The object that is used above is:
public class MyClass
{
    public int MyMethod()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}
Now, we want to create two unit tests to cover all the code in the MyMethod() method. Here’s an attempt at one unit test:
[TestMethod]
public void test_temp_directory_exists()
{
    var myObject = new MyClass();
    Assert.AreEqual(3, myObject.MyMethod());
}
The problem with this unit test is that it will pass if your computer contains the c:\temp directory. If your computer doesn't contain c:\temp, then it will always fail. If you're using a continuous integration environment, you can't control whether the directory exists or not. To compound the problem, you really need to test both possibilities to get full test coverage of your method. Adding a unit test to your test suite to cover the case where c:\temp doesn't exist would guarantee that one test passes and the other fails.
The newcomer to unit testing might think: “I could just add code to my unit tests to create or delete that directory before the test runs!” Except, that would be a unit test that modifies your machine. The behavior would destroy anything you have in your c:\temp directory if you happen to use that directory for something. Unit tests should not modify anything outside the unit test itself. A unit test should never modify database data. A unit test should not modify files on your system. You should avoid creating physical files if possible, even temp files because temp file usage will make your unit tests slower.
Unfortunately, you can’t just mock System.IO.Directory.Exists(). The way to get around this is to create a wrapper object, then inject the object into MyClass and then you can use Moq to mock your wrapper object to be used for unit testing only. Your program will not change, it will still call MyClass as before. Here’s the wrapper object and an interface to go with it:
public class FileSystem : IFileSystem
{
    public bool DirectoryExists(string directoryName)
    {
        return System.IO.Directory.Exists(directoryName);
    }
}

public interface IFileSystem
{
    bool DirectoryExists(string directoryName);
}
Your next step is to provide an injection point into your existing class (MyClass). You can do this by creating two constructors, the default constructor that initializes this object for use by your method and a constructor that expects a parameter of IFileSystem. The constructor with the IFileSystem parameter will only be used by your unit test. That is where you will pass along a mocked version of your filesystem object with known return values. Here are the modifications to the MyClass object:
public class MyClass
{
    private readonly IFileSystem _fileSystem;

    public MyClass(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public MyClass()
    {
        _fileSystem = new FileSystem();
    }

    public int MyMethod()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            return 3;
        }
        return 5;
    }
}
This is the point where your program should operate as normal. Notice how I did not need to modify the original call to MyClass that occurred at the “Main()” of the program. The MyClass() object will create a IFileSystem wrapper instance and use that object instead of calling System.IO.Directory.Exists(). The result will be the same. The difference is that now, you can create two unit tests with mocked versions of IFileSystem in order to test both possible outcomes of the existence of “c:\temp”. Here is an example of the two unit tests:
[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);
    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(3, myObject.MyMethod());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);
    var myObject = new MyClass(mockFileSystem.Object);
    Assert.AreEqual(5, myObject.MyMethod());
}
Make sure you include the NuGet package for Moq. You’ll notice that in the first unit test, we’re testing MyClass with a mocked up version of a system where “c:\temp” exists. In the second unit test, the mock returns false for the directory exists check.
One thing to note: You must provide a matching input on x.DirectoryExists() in the mock setup. If it doesn’t match what is used in the method, then you will not get the results you expect. In this example, the directory being checked is hard-coded in the method and we know that it is “c:\temp”, so that’s how I mocked it. If there is a parameter that is passed into the method, then you can mock some test value, and pass the same test value into your method to make sure it matches (the actual test parameter doesn’t matter for the unit test, only the results).
Using an IOC Container
This sample is set up to be extremely simple. I'm assuming that you have existing .Net legacy code and you're attempting to add unit tests to it. Normally, legacy code is hopelessly un-unit-testable; in other words, it's usually not worth the effort to apply unit tests because of the tightly coupled nature of the code. There are situations, however, where it is not too difficult to add unit tests to legacy code. This can occur if the code is relatively new and the developer(s) took some care in how they built it. If you are building new code, you can use this same technique from the beginning, but you should also plan your entire project to use an IOC container. I would not recommend refactoring an existing project to use an IOC container. That is a level of madness I have attempted more than once, with many man-hours wasted trying to figure out what is wrong with the scoping of my objects.
If your code is relatively new and you have refactored to use constructors as your injection points, you might be able to adapt to an IOC container. If you are building your code from the ground up, you need to use an IOC container. Do it now and save yourself the headache of trying to figure out how to inject objects three levels deep. What am I talking about? Here's an example of a program that is tightly coupled:
class Program
{
    static void Main(string[] args)
    {
        var myRootClass = new MyRootClass();
        myRootClass.Increment();
        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}
public class MyRootClass
{
    readonly ChildClass _childClass = new ChildClass();

    public bool CountExceeded()
    {
        if (_childClass.TotalNumbers() > 5)
        {
            return true;
        }
        return false;
    }

    public void Increment()
    {
        _childClass.IncrementIfTempDirectoryExists();
    }
}

public class ChildClass
{
    private int _myNumber;

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (System.IO.Directory.Exists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}
The example code above is very typical legacy code. The "Main()" calls the first object, "MyRootClass", and then that object calls a child class that uses System.IO.Directory.Exists(). You can use the previous example to unit test the ChildClass for the cases when c:\temp exists and when it doesn't. When you start to unit test MyRootClass, there's a nasty surprise. How do you inject your directory wrapper into that class? If you have to inject wrappers and mocked versions of every child class of a class, its constructor could become incredibly large. This is where IOC containers come to the rescue.
As I’ve explained in other blog posts, an IOC container is like a dictionary of your objects. When you create your objects, you must create a matching interface for the object. The index of the IOC dictionary is the interface name that represents your object. Then you only call other objects using the interface as your data type and ask the IOC container for the object that is in the dictionary. I’m going to make up a simple IOC container object just for demonstration purposes. Do not use this for your code, use something like AutoFac for your IOC container. This sample is just to show the concept of how it all works. Here’s the container object:
public class IOCContainer
{
    private static readonly Dictionary<string, object> ClassList = new Dictionary<string, object>();
    private static IOCContainer _instance;

    public static IOCContainer Instance => _instance ?? (_instance = new IOCContainer());

    public void AddObject<T>(string interfaceName, T theObject)
    {
        ClassList.Add(interfaceName, theObject);
    }

    public object GetObject(string interfaceName)
    {
        return ClassList[interfaceName];
    }

    public void Clear()
    {
        ClassList.Clear();
    }
}
This object is a singleton object (global object) so that it can be used by any object in your project/solution. Basically it’s a container that holds all pointers to your object instances. This is a very simple example, so I’m going to ignore scoping for now. I’m going to assume that all your objects contain no special dependent initialization code. In a real-world example, you’ll have to analyze what is initialized when your objects are created and determine how to setup the scoping in the IOC container. AutoFac has options of when the object will be created. This example creates all the objects before the program starts to execute. There are many reasons why you might not want to create an object until it’s actually used. Keep that in mind when you are looking at this simple example program.
In order to use the above container, we'll need to use the same FileSystem object and interface from the previous program. Then create an interface for MyRootClass and ChildClass. Next, go through your program and find every location where an object is instantiated (look for the "new" keyword). Replace those instances like this:
public class ChildClass : IChildClass
{
    private int _myNumber;
    private readonly IFileSystem _fileSystem = (IFileSystem)IOCContainer.Instance.GetObject("IFileSystem");

    public int TotalNumbers()
    {
        return _myNumber;
    }

    public void IncrementIfTempDirectoryExists()
    {
        if (_fileSystem.DirectoryExists("c:\\temp"))
        {
            _myNumber++;
        }
    }

    public void Clear()
    {
        _myNumber = 0;
    }
}
Instead of creating a new instance of FileSystem, you’ll ask the IOC container to give you the instance that was created for the interface called IFileSystem. Notice how there is no injection in this object. AutoFac and other IOC containers have facilities to perform constructor injection automatically. I don’t want to introduce that level of complexity in this example, so for now I’ll just pretend that we need to go to the IOC container object directly for the main program as well as the unit tests. You should be able to see the pattern from this example.
Once all your classes are updated to use the IOC container, you’ll need to change your “Main()” to setup the container. I changed the Main() method like this:
static void Main(string[] args)
{
    ContainerSetup();
    var myRootClass = (IMyRootClass)IOCContainer.Instance.GetObject("IMyRootClass");
    myRootClass.Increment();
    Console.WriteLine(myRootClass.CountExceeded());
    Console.ReadKey();
}

private static void ContainerSetup()
{
    IOCContainer.Instance.AddObject<IChildClass>("IChildClass", new ChildClass());
    IOCContainer.Instance.AddObject<IMyRootClass>("IMyRootClass", new MyRootClass());
    IOCContainer.Instance.AddObject<IFileSystem>("IFileSystem", new FileSystem());
}
Technically the MyRootClass object does not need to be included in the IOC container since no other object is dependent on it. I included it to demonstrate that all objects should be inserted into the IOC container and referenced from the instance in the container. This is the design pattern used by IOC containers. Now we can write the following unit tests:
[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);
    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);
    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);
    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IFileSystem", mockFileSystem.Object);
    var myObject = new ChildClass();
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);
    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);
    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);
    IOCContainer.Instance.Clear();
    IOCContainer.Instance.AddObject("IChildClass", mockChildClass.Object);
    var myObject = new MyRootClass();
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}
In these unit tests, we put the mocked up object used by the object under test into the IOC container. I have provided a “Clear()” method to reset the IOC container for the next test. When you use AutoFac or other IOC containers, you will not need the container object in your unit tests. That’s because IOC containers like the one built into .Net Core and AutoFac use the constructor of the object to perform injection automatically. That makes your unit tests easier because you just use the constructor to inject your mocked up object and test your object. Your program uses the IOC container to magically inject the correct object according to the interface used by your constructor.
Using AutoFac
Take the previous example and create a new constructor for each class and pass the interface as a parameter into the object like this:
private readonly IFileSystem _fileSystem;

public ChildClass(IFileSystem fileSystem)
{
    _fileSystem = fileSystem;
}
Instead of asking the IOC container for the object that matches the interface IFileSystem, I have only setup the object to expect the fileSystem object to be passed in as a parameter to the class constructor. Make this change for each class in your project. Next, change your main program to include AutoFac (NuGet package) and refactor your IOC container setup to look like this:
static void Main(string[] args)
{
    IOCContainer.Setup();
    using (var myLifetime = IOCContainer.Container.BeginLifetimeScope())
    {
        var myRootClass = myLifetime.Resolve<IMyRootClass>();
        myRootClass.Increment();
        Console.WriteLine(myRootClass.CountExceeded());
        Console.ReadKey();
    }
}

public static class IOCContainer
{
    public static IContainer Container { get; set; }

    public static void Setup()
    {
        var builder = new ContainerBuilder();

        builder.Register(x => new FileSystem())
            .As<IFileSystem>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new ChildClass(x.Resolve<IFileSystem>()))
            .As<IChildClass>()
            .PropertiesAutowired()
            .SingleInstance();

        builder.Register(x => new MyRootClass(x.Resolve<IChildClass>()))
            .As<IMyRootClass>()
            .PropertiesAutowired()
            .SingleInstance();

        Container = builder.Build();
    }
}
I have ordered the builder.Register calls from the innermost to the outermost object classes. This is not really necessary, since resolution does not occur until the IOC container is asked for the object to be used. In other words, you can define MyRootClass first, followed by FileSystem and ChildClass, or use any order you want. The Register call just stores your definition of which physical object will be represented by each interface and which dependencies it depends on.
Now you can cleanup your unit tests to look like this:
[TestMethod]
public void test_temp_directory_exists()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(true);
    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(1, myObject.TotalNumbers());
}

[TestMethod]
public void test_temp_directory_missing()
{
    var mockFileSystem = new Mock<IFileSystem>();
    mockFileSystem.Setup(x => x.DirectoryExists("c:\\temp")).Returns(false);
    var myObject = new ChildClass(mockFileSystem.Object);
    myObject.IncrementIfTempDirectoryExists();
    Assert.AreEqual(0, myObject.TotalNumbers());
}

[TestMethod]
public void test_root_count_exceeded_true()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(12);
    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(true, myObject.CountExceeded());
}

[TestMethod]
public void test_root_count_exceeded_false()
{
    var mockChildClass = new Mock<IChildClass>();
    mockChildClass.Setup(x => x.TotalNumbers()).Returns(1);
    var myObject = new MyRootClass(mockChildClass.Object);
    myObject.Increment();
    Assert.AreEqual(false, myObject.CountExceeded());
}
Do not include the AutoFac NuGet package in your unit test project. It’s not needed. Each object is isolated from all other objects. You will still need to mock any injected objects, but the injection occurs at the constructor of each object. All dependencies have been isolated so you can unit test with ease.
Where to Get the Code
As always, I have posted the sample code up on my GitHub account. This project contains four different sample projects. I would encourage you to download each sample and experiment/practice with them. You can download the samples by following the links listed here:
- MockingFileSystem
- TightlyCoupledExample
- SimpleIOCContainer
- AutoFacIOCContainer

| http://blog.frankdecaire.com/2017/05/28/mocking-your-file-system/ | CC-MAIN-2018-05 | en | refinedweb |
Teppo works for a Finnish company that, among other things, develops a few mobile applications. This company is growing, and as growing companies do, it recently purchased another company.
One of the applications that came with this company had a mongrel past. It started as an in-house project, was shipped off to a vague bunch of contractors in Serbia with no known address, then back to an intern, before being left to grow wild with anyone who had a few minutes trying to fix it.
The resulting code logs in a mixture of Serbian and Finnish. Paths and IP addresses are hard-coded in, and mostly point to third party services that have long since stopped working. It has an internal ad-framework that doesn't work. The Git repository has dozens of branches, with no indication which one actually builds the production versions of the application. The back-end server runs a cron script containing lines like this:
* * * * * curl > ~/out.txt
* * * * * echo 'lalala' > ~/out1.txt
It’s a terrible application that doesn’t even “barely” work. The real test, of course, for an unsupportable mess of an application is this: how does it handle dates?
public static String getSratdate_time_date(String date) {
    String dtStart = date;
    try {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date deals_date = format.parse(dtStart);
        String intMonth = (String) android.text.format.DateFormat.format("M", deals_date); // Jan
        String year = (String) android.text.format.DateFormat.format("yy", deals_date); // 2013
        String day = (String) android.text.format.DateFormat.format("dd", deals_date); // 20
        return (intMonth + " / " + day);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return "";
}
This takes a string containing a date and converts it into a string containing "M/dd". You may note, I used a date format string to describe what this code does, since the easiest way to write this might have been to do something like… DateFormat.format("M/dd", deals_date), which doesn't seem to be that much of a leap, since they used the DateFormat object.
Bonus points for using Hungarian notation, and triple that bonus for using it wrong.
| http://thedailywtf.com/articles/a-dated-inheritance | CC-MAIN-2018-05 | en | refinedweb |
This article is a very quick and dirty draft of an idea that was discussed some time ago on comp.lang.c++.moderated, concerning the possibility of adding lambda to C++. There are many different approaches to this subject, and this article is not a formal language feature proposal - rather, a speculation on how the problem can be solved without abandoning the current language culture.
Considering the fact that there is no such thing as lambda in C++ (as defined by the '98 standard), people have different definitions of lambda - much of the heat in discussions comes from this single fact. What is discussed in this article has a broad context and is not limited to functions only - thus, as was pointed out, the term "lambda" could be confusing. Therefore, here's my own definition of an "anonymous entity", covering also lambda functions:
An anonymous entity is the unnamed definition of some language entity, written at the place where it is used.
The language entity can be a type, function, procedure, object, etc. In particular, the anonymous function is meant to be more or less equivalent to what is known as lambda functions in other languages.
In C++, many things already can exist as anonymous entities, defined exactly where they are used, as seen in the following examples:
// temporary, unnamed object
throw MyClass();

// object of unnamed struct
struct { int a, b; } s;
In both examples above, the language entity (object or type, respectively) could be defined with explicit name and used by that name as well:
MyClass c;
throw c;

struct S { int a, b; };
S s;
In these examples, using anonymous entity instead of a named one brings the following benefits (not all of them are directly visible above, but relevant examples are easy to write):
The existing possibility of using anonymous entities is not really orthogonal. For example, the temporary object cannot be bound to non-const reference, whereas the named, non-const object can. The unnamed struct cannot be used, for example, to define function's return or parameter type, etc. This, however, does not change the concept in general.
The concept of anonymous entity in the language would be fully orthogonal, if the unnamed entity (defined in-place) could be used everywhere where the named entity is allowed. I do not claim that it is fully possible in C++, but I can imagine languages that have this property.
The proposed addition is not supposed to be fully orthogonal, but rather somehow distorted with practical purposes in mind. Two kinds of anonymous entities are proposed:
Anonymous function is an unnamed function, defined at the place where a pointer to function is expected or accepted. It has the same syntax as a regular function, but has no name. From the implementation point of view, it should be replaced by the definition of a free function (as if it was defined in an unnamed namespace) with compiler-generated unique name, and that name should be used where the anonymous function itself was used.
Examples:
atexit( void () { puts("Good bye!"); } );
This should be replaced by:
namespace  // unnamed
{
    void __some_unique_name()
    {
        puts("Good bye!");
    }
}

// and later:
atexit(__some_unique_name);
Another example:
std::transform(b1, e1, b2, int (int x) { return x + 1; });
More with STL:
std::for_each(b, e, void (int x) { std::cout << x << ' '; });
And more:
std::sort(b, e, bool (int a, int b) { return a < b; });
The anonymous function has a signature: the signature of the function that results from adding a unique name to the rest of the code.
An anonymous function can use all names of types that are available (or typedefed) where it is written. This allows anonymous functions to be used within generic functions, where some type names may exist as bound type variables that will be fixed when the containing template function is instantiated. Example:
template <typename T>
void fun(std::vector<T> const &v)
{
    std::for_each(v.begin(), v.end(),
                  void (T const &x) { std::cout << x << ' '; });
}
This should be replaced by:
namespace  // unnamed
{
    template <typename T>
    void __some_unique_name(T const &x)
    {
        cout << x << ' ';
    }
}

// and later:
template <typename T>
void fun(std::vector<T> const &v)
{
    std::for_each(v.begin(), v.end(), __some_unique_name<T>);
}
Note the <T> at the end. It is important that this T is the same T as appears in the anonymous function. If more types are used this way, then the anonymous function should be replaced by a free template function with an appropriate number of template parameters.
Performance issue: As was pointed out in the public discussion, rewriting the anonymous function so that it results in a pointer to function may impose a performance penalty, because calls through a pointer to function are not likely to be inlined. In order to solve this problem, the anonymous function could have an additional variant, like in:
std::sort(b, e, inline bool (int a, int b) { return a < b; });
(note the inline keyword)
Such an anonymous function could be rewritten not as a free function, but as an instance of a class with a relevant function call operator, if the context allows generic functors to be used instead of requiring only pointers to functions. This is the case with std::sort, and such a functor class could be used and would be a likely candidate for inlining. If this rewrite is not legal (for example, when only a pointer to function is accepted, as with std::qsort or any other C function), then a normal function accessed via a pointer to function should be used.
The alternative solution to this performance problem could be to always rewrite anonymous function to the instance of a class with relevant function call operator (possibly delegating to the static function that actually does the job) and that in addition has a cast operator to the pointer to this function, like here (this is a rewrite for the last example above):
namespace  // unnamed
{
    struct __unique_name
    {
        static bool __invoke(int a, int b) { return a < b; }

        // for use as a functor
        bool operator()(int a, int b) { return __invoke(a, b); }

        // for use via pointer to function
        typedef bool (*PF)(int, int);
        operator PF() { return &__unique_name::__invoke; }
    };
}

// and later:
std::sort(b, e, __unique_name());
Above, the std::sort uses the default-initialized instance of the functor class, calling its operator(). This is likely to be inlined. On the other hand, if the same anonymous function was used in the context where only a pointer to function is accepted, then it would be automatically cast to the pointer to the static __invoke function.
Known problem: anonymous function cannot be recursive. This is because there is no name available that could be used to call it again. Some special support is needed for this, or we just agree that if some function needs to be called recursively (by name!) then it should not be unnamed.
There is also a very interesting subject of allowing access to the local variables existing in the scope where the anonymous function is used. Consider:
void foo()
{
    string message = "Good bye!";
    atexit( void () { cout << message; } );
}
Above, the unnamed function defined in-place as a parameter to atexit is supposed to have access to the variable message that was declared in the enclosing scope. There are languages that support so-called local functions with various solutions to the problem of dangling references (the dangling reference can be created when the local function is used in the place or at the time when the given variable no longer exists) - these range from relying on the garbage collector and keeping objects alive as long as they can be possibly used, to binding the lifetime of functions with that of the scope.
The above example is dangerous, because at the time the unnamed function is called (sometime at the end of the program), the message variable no longer exists and the unnamed function would then refer to a non-existent object. That would need to be classified as undefined behaviour, and this possibility alone might be reason enough to abandon this idea altogether.
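For comparison, a garbage-collected language with first-class closures takes the first approach mentioned above automatically: the closure keeps the captured variable alive, so the deferred call cannot dangle. A quick Python sketch of the atexit example:

```python
# In Python the closure captures `message` and keeps it alive even after
# foo() returns, so the deferred call never sees a dead object.
results = []

def foo():
    message = "Good bye!"
    return lambda: results.append(message)

deferred = foo()   # foo's scope is gone now...
deferred()         # ...but the captured string still exists
print(results)     # ['Good bye!']
```

This safety comes at the cost of heap-allocating the captured environment, which is exactly the trade-off the article discusses for C++.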
Consider, however, the following variant of one of the earlier examples:
void foo()
{
    string separator = " ";
    std::for_each(b, e, void (int x) { std::cout << x << separator; });
}
Above, the unnamed local function used by for_each accesses the separator variable, which is guaranteed to exist for as long as the unnamed function itself is in use. The above example is therefore perfectly safe and does not lead to undefined behaviour - no dangling references are created here.
The real difference between these two examples is in the relation between the time when the function is used and the lifetime of the scope where it was defined. In the first case the unnamed local function is used when the scope where it was declared is already left (and the referenced variable is already destroyed). In the second case the function is used only within the scope where it was defined (so that all referenced variables still exist).
The big question here is whether these two examples should be distinguished at the language level and whether the dangerous variant should be detected at compile-time. In general, that does not seem to be possible. The possible solutions range from those involving a garbage collector (the "Java approach") to those involving a more elaborate type system for function pointers (the "Ada approach"). Relying on a garbage collector will work only for reference-oriented types, because the relevant objects can then be kept alive as long as necessary - it will not, however, work for fundamental types and those objects which have automatic storage (which are created on the stack). In order to keep the existing language culture, the only solutions in C++ would be to either:
Similarly to the anonymous function, an anonymous class is an unnamed class definition written in a place where the name of an already defined class is required or accepted. It should be replaced by the definition of a class (as if it were defined in an unnamed namespace) with a compiler-generated unique name, and that name should be used where the unnamed class itself was used.
Examples:
int main()
{
    struct { int a, b; } s;
}
It should be replaced by:
namespace // unnamed
{
    struct __unique_name { int a, b; };
}

int main()
{
    __unique_name s;
}
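As an aside, some languages already allow a class to be created without a class-definition statement. In Python, for instance, type() builds a class object at runtime, which is similar in spirit to the compiler-generated unique name above:

```python
# Rough Python analogue: type() builds a class without a class
# statement; the string "__unique_name" mirrors the generated name.
Anon = type("__unique_name", (), {"a": 0, "b": 0})
s = Anon()
s.a, s.b = 1, 2
print(type(s).__name__, s.a + s.b)   # __unique_name 3
```

The analogy is loose (Python has no compile-time classes), but it shows the concept is workable.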
More:
struct { void foo() { puts("foo"); } }().foo();
(note () after anonymous class - this creates a temporary object of the unnamed type)
It should be replaced by:
namespace // unnamed
{
    struct __unique_name
    {
        void foo() { puts("foo"); }
    };
}

// and later:
__unique_name().foo();
Something with STL:
std::sort(b, e, struct { bool operator()(int a, int b) { return a < b; } }() );
(again note () after anonymous class)
Some fun with templates and overloading:
foo( struct {
         template <typename T> void bar(T t) { /* ... */ }
         void bar(int i) { /* ... */ }
     }() );
(again note () after anonymous class)
And why not:
class A
{
    // ...
    struct { int a, b; } pair;
};
or even:
struct base
{
    virtual string what() const = 0;
};

// and later:
try
{
    throw struct : base
    {
        string what() const { return "Oops!"; }
    }();
}
catch (base const &e)
{
    std::cerr << e.what() << std::endl;
}
(again note () after anonymous class)
Known problem: it is not possible to use the name of an unnamed class in its own definition (this is needed for constructors and destructors). Some special support is needed for this, or we just agree that classes with constructors and destructors are not good candidates for anonymous classes.
Unknown problems: it is very likely that the anonymous class does not make much sense in every context where the name of a class could appear.
Note: some of the above examples of anonymous class are already legal in C++. This means that the concept is not really new - it just needs to be consistently extended.
The advantages of the above informal proposal are:
The above points should also be the test questions for any formal proposal in this area.
Hi,
I believe the link to the blog post you are referring to was not published
" I've attached the sample to this blog post."
It's there, but it's an attachment, not a link. When I look at the post, it shows up as a folder icon in the lower left-hand corner of the post.
Brian
Is this sample for TFS 2008 or TFS 2010 or both?
Sorry, I should have said. It's TFS 2010.
Brian
Stephen,
Looking at the code in the project this appears to only work with TFS 2010. You could potentially take the code and re-engineer it for TFS 2008.
Worked fine for me with TFS 2008
Nice work!! Very helpful. Thank you!
I've developed a VS extension that generates a dependency graph of TFS groups and memberships. TFS Membership Visualizer is available in the VS Extension Manager or here:
visualstudiogallery.msdn.microsoft.com/…/582dd43e-e8be-48fc-9763-bf13bac66cc2
As we indeed needed this functionality, I gave it a try, but unfortunately I'm getting an XmlException "hexadecimal value 0x1F, is an invalid character. Line 1, position 329393."
I can't believe you have to write code to do something as simple as get a list of users.
Thank you very much.
One observation: the int variable "batchNum" was not set to increment, which could cause an error like the one below.
Unhandled Exception: System.ArgumentException: An item with the same key has already been added.
Just adding batchNum++; before the closing brace of the while loop fixes it.
Thanks again
Brian, the program works for defaultcollection, but blows up with "KeyNotFoundException" when the collection to be queried is not defaultcollection. Could you have your developer update it? This would be very useful for user recertification purposes.
Brian, is it valid for TFS 2015 as well?
@Aslam, I haven’t tried it in a long time. I suspect it will work because we generally work hard to keep our APIs compatible over time. I’d suggest you try it and see.
Brian
I am trying to get a list of users and which projects they are in so we can migrate them to another server.
I tried the command-line tool, but it seems to no longer be installed. I then tried the TFS admin tool and Sidekicks, but even though they may list the users, I cannot get an export of the list that I can put in a spreadsheet or document. I tried this program but cannot compile it.
I am missing the TeamFoundation namespace. It seems this was phased out and is no longer available with my VS 2015 install.
So how can we get lists of users, projects, whatever now?
@Steve B – if I’m understanding what you are after here, the simplest thing is probably to use tfssecurity.exe. Perhaps that is the command-line tool you were referencing? It should still be installed by both TFS (under the Tools directory) and by Visual Studio (for VS 2017, I think it’ll be under a Team Explorer directory).
Here’s a sample command line that will dump out every valid user for the entire server:
TFSSecurity.exe /imx “[TEAM FOUNDATION]\Team Foundation Valid Users” /server:
If you want to go collection by collection, you can do the same thing with something like:
TFSSecurity.exe /imx “[collection name]\Project Collection Valid Users” /collection:
…and for a team project:
TFSSecurity.exe /imx “[project name]\Project Valid Users” /collection:
Hope that helps.
Ooh, very very bad OOP practice. You definitely don't want other classes messing around with your member variables. This is what OOP is all about - data hiding and the like. And those get*() and set*() functions really grow on ya (not to mention that they keep everything looking clean).
There's really no good reason to make variables in a C++ class public1. If you can't understand why, then you simply don't understand the theory behind OOP.
It's the idea of separating the interface from the implementation. By allowing outside code to access the class's variables, you're tying the use of that class to that specific data representation.
For example, imagine that you write a Stack class (rather pointless, unless you WANT to duplicate the functionality of the STL, but anyway...), and you implement it with an array. You document that the correct way to access the top of the stack is to read the value of stack.array[stack_top]. But later on you discover you need the stack to be able to grow and shrink at will. The best way to do this, of course, is to use a linked list. But you can't use a linked list, because all the code that uses your class directly reads your class's private members, so you can't change the representation of your data!
Not to mention that using accessor functions (the get_*() and set_*()) allows you to perform sanity checks, for things like overflow, underflow, valid input, etc.
1Constants are a different story, especially static ones.
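The Stack argument above is easy to sketch concretely. Here is a hypothetical version in Python, where callers only ever use push/pop/top, so the private representation can later be swapped (array, linked list, deque) without breaking anyone, and the accessors get to perform the sanity checks mentioned above:

```python
# Hypothetical Stack: the interface is push/pop/top; the underlying
# list is an implementation detail that callers never touch directly.
class Stack:
    def __init__(self):
        self._items = []              # representation detail

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:           # sanity check for underflow
            raise IndexError("pop from empty stack")
        return self._items.pop()

    @property
    def top(self):                    # read-only accessor
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
print(s.top)    # 2
print(s.pop())  # 2
```

The class and method names here are illustrative, not from the original posts.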
While I was rambling, it seems that others have responded first... But at least I gave a different example of why it's useful -- there are a lot of reasons...
You make data members private to preserve consistency of state. This matters. It matters a lot. You also do it to allow the implementation of a subclass to differ drastically from that of the base class: This is called "abstraction", and it's a big part of the reason why we bother doing OOP at all. So let's take those two in order:
Say for example you've got a class which represents a list of strings. You've got an interface for that class: A count of the number of strings you've got, a number which says how much memory you've got allocated1, and you've got a pointer to the storage: We'll assume for the sake of simplicity that it's an array of pointers to your favorite string class.
Okay. Take for example that number which tracks the size of your array. Assume that some genius comes by and changes it without reallocating (this could be you two weeks later, or even at 3:00 AM the following morning). Okay, you crash. But not always. Bugs like that are a bitch to track down, because you won't crash consistently. Sometimes it'll be a problem, sometimes not; you may get away with using the next few bytes off the end of your array, but when you call realloc() they won't get dragged along with the rest. It's a mess. Don't go there.
The same goes for the data member tracking the number of strings you've got: If it says "five", and you change it to "six", your code will eventually try to call member functions on an object that doesn't exist: p5->size(), for example. You're in trouble again.
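One way to make the "count must match the storage" invariant unbreakable is to derive the count from the storage instead of tracking it separately. A small sketch (names are hypothetical):

```python
# The count is computed from the storage, so no one can set it to "six"
# while only five strings exist - the invariant cannot be violated.
class StringList:
    def __init__(self):
        self._strings = []

    @property
    def count(self):                 # derived, never out of sync
        return len(self._strings)

    def add(self, s):
        self._strings.append(s)

lst = StringList()
lst.add("five")
print(lst.count)   # 1
```

In C++ the same effect is achieved by keeping the members private and exposing only a size() accessor.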
An inline function in C++ incurs no overhead, unless it's virtual (but if you need it to be virtual, you need a function anyway, so you're stuck). Use them.
Of course, there are some public data members which can safely be changed by the "user" -- but which is which? Here's a way to make it easy: If the "user" shouldn't touch it, hide it.
Okay. Now let's assume that you've got code here and there which uses your string list class. You've got other lists of strings which you might like to interact with also, but they can't be stored in that list thing: Say you've got a GUI widget such as a list box. Well, it's got a list of strings, hasn't it? How many sets of functions do you want to write to do the same thing to different classes? My answer is, "one". All lists of strings should have the same interface if at all possible. It's less code to write, less hassle to deal with.
So. Let's say your string list class has an abstract base class (no data members at all, just a pure functional interface, without even bodies for most of the virtual functions (In C++ a bodiless virtual function is called a "pure virtual function") ). If that's the case, you can make one subclass which contains the above-described list of pointers to string objects, and another subclass which provides an interface to the guts of a list box widget. You can interact with both in exactly the same way, using the same code: Just declare a pointer to the abstract base class. Java provides a similar and somewhat simpler feature called "interfaces"2.
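The abstraction argument can be sketched in a few lines. The following uses Python's abc module as a stand-in for a C++ pure-virtual interface: one interface, one concrete subclass, and client code written only against the interface (a second subclass could wrap a GUI list box, exactly as described above):

```python
# One pure interface, one concrete representation, one set of client
# code. A second subclass with a totally different representation
# would work with total_length unchanged.
from abc import ABC, abstractmethod

class StringList(ABC):
    @abstractmethod
    def size(self): ...

    @abstractmethod
    def get(self, i): ...

class MemoryList(StringList):
    def __init__(self, items):
        self._items = list(items)

    def size(self):
        return len(self._items)

    def get(self, i):
        return self._items[i]

def total_length(strings):
    # written once, against the interface only
    return sum(len(strings.get(i)) for i in range(strings.size()))

print(total_length(MemoryList(["ab", "cde"])))   # 5
```

All names here are illustrative; the point is the shape: client code never sees the representation.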
The above list/widget example is taken from Borland's VCL library, shipped with their Delphi and C++ Builder products. It's a neat idea. It's the first example that sprang to mind, but there are dozens of others.
In a nutshell: If you don't know why you should use abstraction, you don't know object oriented programming and you don't know C++. This is generally true of all C++ features: If you can't see why on earth anybody would bother, trust me, you're missing something important. This is the same as making the transition to C from an unstructured "goto language" like Basic: Functional decomposition seems kinda silly at first. It really looks like gratuitous rigamarole until you begin to grok. Then grokking happens, and you wonder how you could ever have done it any other way. So it is with OOP.
I recommend learning: It's enormously fun stuff.
Let me start by saying that all of my comments apply only to application level code. There is never an excuse any more to not follow standard encapsulation practice when writing library code.
This is one of those things where a little more effort up front pays off big later. Unfortunately,
there are fairly strong incentives to not follow good practice, and you've tripped on one of the biggest.
At one level, code is often easier to read when you don't abstract away the representation. This is because you remove a level of indirection and bring the code one step closer to the hardware. Not having to wade through a sea of member functions to understand a module you are debugging is sometimes nice.
Plus, it's faster to write. Hiding the structure of an object forces a much different style of programming that--there is no way around it--takes longer than the equivalent functionality without the abstraction.
But, you pay for it later. Anything you write that is truly useful is going to provide functionality that you eventually want to either automate or embed in other applications.
If you've followed good OOPy1 procedure and hidden away the representation of your application, your user interface, and your state then it will be relatively easy to adapt your code to its new environment. You won't have to worry that today a FrobNazz is a string, but later needs to be a map<String, NetworkObject*>. You won't have to wonder which security critical function is going to try to read through a null pointer after the change.
On the other hand, if you've gone the easy way, you get to essentially rewrite the whole thing every time you need to bring it up on a new architecture, localize it to a new language (human language, not programming language), or embed it into another new system.
1- Referred to as gettin' oopy with your app
It should be noted that some progamming languages have managed to solve this ugly little issue. I'll use Ruby for the example. The solution entails:
class Foo
def initialize
@bar = nil # Create an instance variable
end
def bar
@bar # This is the accessor
end
def bar=(x)
@bar = x # This is the writer
end
end
foo = Foo.new
foo.bar # returns nil
foo.bar = 'blah' # calls "Foo#bar=", returns "blah"
foo.bar # returns "blah"
# This shortens to, for simplification when you need ONLY setting/getting...
class Foo
def initialize
@bar = nil
end
attr_accessor :bar # creates "Foo#bar" and "Foo#bar=" methods automatically
end
Undoubtedly always a bad idea -- EVERYBODY on this node says so.
Now all that remains is for the snow to begin to explain STL. It has the struct std::pair<T1,T2>, with public members first, second - used by std::map and std::multimap (for example, through the operator[] method).
The often overlooked benefit in data hiding or encapsulation is that objects are a whole lot easier to debug than are big gloppy strands of spaghetti code referring to the member variables of hundreds of classes.
For example, if you know that the only way a variable can change is by way of an accessor function, then you can set a breakpoint at that function. Each time the breakpoint is taken, it's a simple matter of examining the call stack and finding out why the variable was being changed.
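This "single choke point" idea is easy to demonstrate. A small Python sketch (hypothetical names) where every write funnels through one setter, so one breakpoint or log line there catches every change:

```python
# Every assignment to `balance` goes through one setter; put a
# breakpoint or a log line there and you see every change.
class Account:
    def __init__(self):
        self._balance = 0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        print("balance changing", self._balance, "->", value)
        self._balance = value

a = Account()
a.balance = 10
print(a.balance)   # 10
```

With a public member instead, any line anywhere in the program could be the culprit.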
One of the problems I run into a lot these days is people who can't think in an object-oriented fashion. If a developer cannot figure out a reason to hide the data, then it's probably not a good candidate for a class, period.
Don't make a class out of a group of related variables just because you can! It's just a structure or a map or a collection. It's not an object, ok?
Objects handle all of their own member manipulation. This includes reading them from a database, writing them to a database, displaying them in a dialog box, performing complex calculations involving the data, etc.
Contrary to popular belief, it isn't always a bad idea to have public member variables -- there's no value in forcing your implementation details to go through get/set pairs or any such nonsense.
For instance, consider a linked list class. A linked list is made up of nodes. A typical linked list class will have a private node type. Something along the lines of:
// a list of Elems
template <typename Elem>
class List
{
struct Node
{
Node* next;
Node* previous;
Elem value;
// then some implementation for the node, to
// handle insertions and deletions.
};
// and then the rest of the List's implementation.
};
Requiring the use of accessors for the various parts of the node is of no value whatsoever. The only people who can access the Node class are the people writing the List class -- and if you can't trust them to use the class properly, you can't trust anyone, and might as well not bother. You don't expose your Nodes to any users of the class, so it doesn't matter to them how it's implemented -- after all, the point of using OO is to keep these details hidden from the user.
Implementation details such as these nodes can use public member variables without fear -- the reasons for keeping them private are no longer valid.
It is occasionally the case where it is sound to have "exposed" public members. An example is the std::pair class template. The pair is just a poor man's tuple. It implies no policy, it's purely a means of pairing two types for ease of manipulation. This use is certainly rare -- I can't think of any reason to do so other than tuples -- but should not be ignored. Languages with built-in tuple support (found in most or all functional languages, including the C-like Vault) make the components of the tuple public -- it's the right decision.
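The same "pure data pairing, no policy" role that std::pair plays is filled in Python by namedtuple, where public fields are exactly the point:

```python
# A tuple-like record with public fields: no invariants to protect,
# so there is nothing to hide.
from collections import namedtuple

Pair = namedtuple("Pair", ["first", "second"])
p = Pair(1, "one")
print(p.first, p.second)   # 1 one
```

This supports the author's caveat: exposed members are defensible precisely when the type implies no policy.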
The code below is used to connect nodes in a list in a way that each node's next attribute will point to the following node in the list, but the function never terminates. What's wrong with my code?
def connect_level(alist):
    def aux(h, t):
        if len(t) == 0:
            h.next = None
            return
        else:
            h, t = alist[0], alist[1:]
            h.next = t[0]
            aux(h, list(t[1:]))

    if len(alist) == 0:
        return
    else:
        h, t = alist[0], alist[1:]
        aux(h, t)  # this caused an error

connect_level([TreeNode(1), TreeNode(2), TreeNode(3)])
Can you add some comments explaining what's supposed to be happening? I don't use python, but I can see that when aux calls itself h never changes. How does this relate to the problem of connecting nodes in a tree?
My solution is to solve the problem level by level, and what this function does is, given a list of nodes on the same level, connect each node to its right neighbor. aux is a recursive function that accepts two parameters: head (a single node) and tail (the rest of the list). The termination condition for the recursion is when the tail is empty. The second parameter's size is reduced by one on each recursive call.
Ah, ok. That method violates the problem spec by using linear space (for the recursion) rather than constant space, but that won't stop it from connecting the nodes.
I actually meant add comments in the code, but I guess just saying "comments" was vague. Oops.
I still don't understand what the aux is for, how how it's meant to work. If I were going to connect nodes from a list I would do something like this (sorry if it's not valid python):
def connect_level(alist):
    for i in range(0, len(alist) - 1):
        alist[i].next = alist[i + 1]
Note that setting alist[n-1].next = None is unnecessary because all of the next pointers start out as None.
Yes, that's a much simpler implementation. :-) I guess I just want to practice how to write recursive function in Python.
It never terminates because in the aux function, the else block sets both h and t from alist.
Every time it gets to that block it recurses with h=alist[0], t=alist[2:].
Any time aux is called with alist.length>2, it will infinite loop.
For the condition len(t)==0 to be met, it would have to recurse with h and t set from the previous h and t rather than from alist.
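A minimal corrected version along those lines (recursing on the current h and t instead of re-reading alist, so the list actually shrinks on every call) might look like this, using a minimal stand-in node class:

```python
class Node:                      # minimal stand-in for TreeNode
    def __init__(self, val):
        self.val = val
        self.next = None

def connect_level(alist):
    def aux(h, t):
        if len(t) == 0:
            h.next = None        # last node in the level
            return
        h.next = t[0]
        aux(t[0], t[1:])         # pass the shrinking tail along

    if len(alist) == 0:
        return
    aux(alist[0], alist[1:])

nodes = [Node(1), Node(2), Node(3)]
connect_level(nodes)
print([n.next.val if n.next else None for n in nodes])   # [2, 3, None]
```

Each call now receives a tail one element shorter, so the len(t) == 0 base case is guaranteed to be reached.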
def integerBreak(self, n):
    # Python 2: n/3 here is integer (floor) division
    return ((n % 3 + 3 + (n % 3 == 2)) * 3**(n/3 - 1)) if (n > 3) else (n - 1)
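For readers decoding the one-liner above: it is a closed form of the greedy rule "split n into as many 3s as possible, but never leave a remainder of 1." A hypothetical, more verbose version of the same idea:

```python
# Split n into as many 3s as possible; stopping while n > 4 guarantees
# the final factor is 2, 3, or 4, never a wasted 1.
def integer_break(n):
    if n <= 3:
        return n - 1          # must split into at least two parts
    product = 1
    while n > 4:
        product *= 3
        n -= 3
    return product * n

print(integer_break(10))   # 36  (3 * 3 * 4)
```

For example, n = 8 gives 3 * 3 * 2 = 18, matching the closed form's (2 + 3 + 1) * 3**(8//3 - 1) = 18.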
Contents:
UNIX Manual Page Gateway
Mail Gateway
Relational Databases
Search/Index Gateway
Imagine a situation where you have an enormous amount of data stored in a format that is foreign to a typical web browser. And you need to find a way to present this information on the Web, as well as allowing potential users to search through the information. How would you accomplish such a task?
Many information providers on the Web find themselves in situations like this. Such a problem can be solved by writing a CGI program that acts as a gateway between the data and the Web. A simple gateway program was presented in Chapter 7, Advanced Form Applications. The pie graph program can read the ice cream data file and produce a graph illustrating the information contained within it. In this chapter, we will discuss gateways to UNIX programs, relational databases, and search engines.
Manual pages on a UNIX operating system provide documentation on the various software and utilities installed on the system. In this section, I will write a gateway that reads the requested manual page, converts it to HTML, and displays it (see Figure 9.1). We will let the standard utility for formatting manual pages, nroff, do most of the work. But this example is useful for showing what a little HTML can do to spruce up a document. The key technique you need is to examine the input expected by a program and the output that it generates, so that you can communicate with it.
Here is the form that is presented to the user:
<HTML>
<HEAD><TITLE>UNIX Manual Page Gateway</TITLE></HEAD>
<BODY>
<H1>UNIX Manual Page Gateway</H1>
<HR>
<FORM ACTION="/cgi-bin/manpage.pl" METHOD="POST">
<EM>What manual page would you like to see?</EM>
<BR>
<INPUT TYPE="text" NAME="manpage" SIZE=40>
<P>
<EM>What section is that manual page located in?</EM>
<BR>
<SELECT NAME="section" SIZE=1>
<OPTION SELECTED>1
<OPTION>2
<OPTION>3
<OPTION>4
<OPTION>5
<OPTION>6
<OPTION>7
<OPTION>8
<OPTION>Don't Know
</SELECT>
<P>
<INPUT TYPE="submit" VALUE="Submit the form">
<INPUT TYPE="reset" VALUE="Clear all fields">
</FORM>
<HR>
</BODY></HTML>
This form will be rendered as shown in Figure 9.2.
On nearly all UNIX systems, manual pages are divided into eight or more sections (or subdirectories), located under one main directory-usually /usr/local/man or /usr/man. This form asks the user to provide the section number for the desired manual page.
The CGI program follows. The main program is devoted entirely to finding the right section, and the particular manual page. A subroutine invokes nroff on the page to handle the internal nroff codes that all manual pages are formatted in, then converts the nroff output to HTML.
#!/usr/local/bin/perl

$webmaster = "Shishir Gundavaram (shishir\@bu\.edu)";
$script    = $ENV{'SCRIPT_NAME'};
$man_path  = "/usr/local/man";
$nroff     = "/usr/bin/nroff -man";
The program assumes that the manual pages are stored in the /usr/local/man directory. The nroff utility formats the manual page according to the directives found within the document. A typical unformatted manual page looks like this:
.TH EMACS 1 "1994 April 19"
.UC 4
.SH NAME
emacs \- GNU project Emacs
.SH SYNOPSIS
.B emacs
[
.I command-line switches
] [
.I files ...
]
.br
.SH DESCRIPTION
.I GNU Emacs
is a version of
.I Emacs,
written by the author of the original (PDP-10)
.I Emacs,
Richard Stallman.
.br
.
.
.
Once it is formatted by nroff, it looks like this:
EMACS(1)                 USER COMMANDS                  EMACS(1)

NAME
     emacs - GNU project Emacs

SYNOPSIS
     emacs [ command-line switches ] [ files ... ]

DESCRIPTION
     GNU Emacs is a version of Emacs, written by the author of
     the original (PDP-10) Emacs, Richard Stallman.
     .
     .
     .

Sun Release 4.1       Last change: 1994 April 19               1
Now, let's continue with the program to see how this information can be further formatted for display on a web browser.
$last_line = "Last change:";
The $last_line variable contains the text that is found on the last line of each page in a manual. This variable is used to remove that line when formatting for the Web.
&parse_form_data (*FORM);

($manpage = $FORM{'manpage'}) =~ s/^\s*(.*)\b\s*$/$1/;
$section = $FORM{'section'};
The data in the form is parsed and stored. The parse_form_data subroutine is the one used initially in the last chapter. Leading and trailing spaces are removed from the information in the manpage field. The reason for doing this is so that the specified page can be found.
if ( (!$manpage) || ($manpage !~ /^[\w\+\-]+$/) ) {
    &return_error (500, "UNIX Manual Page Gateway Error",
                        "Invalid manual page specification.");
This block is very important! If a manual page was not specified, or if the information contains characters other than (A-Z, a-z, 0-9, _, +, -), an error message is returned. As discussed in Chapter 7, Advanced Form Applications, it is always important to check for shell metacharacters for security reasons.
} else {
    if ($section !~ /^\d+$/) {
        $section = &find_section ();
    } else {
        $section = &check_section ();
    }
If the section field consists of a number, the check_section subroutine is called to check the specified section for the particular manual page. If non-numerical information was passed, such as "Don't Know," the find_section subroutine iterates through all of the sections to determine the appropriate one. In the regular expression, "\d" stands for digit, "+" allows for one or more of them, and the "^" and "$" ensure that nothing but digits are in the string. To simplify this part of the search, we do not allow the "nonstandard" subsections some systems offer, such as 2v or 3m.
Both of these search subroutines return values upon termination. These return values are used by the code below to make sure that there are no errors.
    if ( ($section >= 1) && ($section <= 8) ) {
        &display_manpage ();
    } else {
        &return_error (500, "UNIX Manual Page Gateway Error",
                            "Could not find the requested document.");
    }
}
exit (0);
The find_section and check_section subroutines called above return a value of zero (0) if the specified manual page does not exist. This return value is stored in the section variable. If the information contained in section is in the range of 1 through 8, the display_manpage subroutine is called to display the manual page. Otherwise, an error is returned.
The find_section subroutine searches for a particular manual page in all the sections (from 1 through 8).
sub find_section
{
    local ($temp_section, $loop, $temp_dir, $temp_file);

    $temp_section = 0;

    for ($loop=1; $loop <= 8; $loop++) {
        $temp_dir  = join ("", $man_path, "/man", $loop);
        $temp_file = join ("", $temp_dir, "/", $manpage, ".", $loop);
find_section searches in the subdirectories called "man1," "man2," "man3," etc. And each manual page in the subdirectory is suffixed with the section number, such as "zmore.1," and "emacs.1." Thus, the first pass through the loop might join "/usr/local/man" with "man1" and "zmore.1" to make "/usr/local/man/man1/zmore.1", which is stored in the $temp_file variable.
        if (-e $temp_file) {
            $temp_section = $loop;
        }
    }
The -e switch returns TRUE if the file exists. If the manual page is found, the temp_section variable contains the section number.
    return ($temp_section);
}
The subroutine returns the value stored in $temp_section. If the specified manual page is not found, it returns zero.
The check_section subroutine checks the specified section for the particular manual page. If it exists, the section number passed to the subroutine is returned. Otherwise, the subroutine returns zero to indicate failure. Remember that you may have to modify this program to reflect the directories and filenames of manual pages on your system.
sub check_section
{
    local ($temp_section, $temp_file);

    $temp_section = 0;
    $temp_file = join ("", $man_path, "/man", $section, "/",
                           $manpage, ".", $section);

    if (-e $temp_file) {
        $temp_section = $section;
    }

    return ($temp_section);
}
The heart of this gateway is the display_manpage subroutine. It does not try to interpret the nroff codes in the manual page. Manual page style is complex enough that our best bet is to invoke nroff, which has always been used to format the pages. But there are big differences between the output generated by nroff and what we want to see on a web browser. The nroff utility produces output suitable for an old-fashioned line printer, which produced bold and underlined text by backspacing and reprinting. nroff also puts a header at the top of each page and a footer at the bottom, which we have to remove. Finally, we can ignore a lot of the blank space generated by nroff, both at the beginning of each line and in between lines.
The display_manpage subroutine starts by running the page through nroff. Then, the subroutine performs a few substitutions to make the page look good on a web browser.
sub display_manpage
{
    local ($file, $blank, $heading);

    $file = join ("", $man_path, "/man", $section, "/",
                      $manpage, ".", $section);

    print "Content-type: text/html", "\n\n";
    print "<HTML>", "\n";
    print "<HEAD><TITLE>UNIX Manual Page Gateway</TITLE></HEAD>", "\n";
    print "<BODY>", "\n";
    print "<H1>UNIX Manual Page Gateway</H1>", "\n";
    print "<HR><PRE>";
The usual MIME header and HTML text are displayed.
open (MANUAL, "$nroff $file |");
A pipe to the nroff program is opened for output. Whenever you open a pipe, it is critical to check that there are no shell metacharacters on the command line. Otherwise, a malicious user can execute commands on your machine! This is why we performed the check at the beginning of this program.
$blank = 0;
The blank variable keeps track of the number of consecutive empty lines in the document. If there is more than one consecutive blank line, it is ignored.
while (<MANUAL>) { next if ( (/^$manpage\(\w+\)/i) || (/\b$last_line/o) );
The while loop iterates through each line in the manual page. The next construct ignores the first and last lines of each page. For example, the first and last lines of each page of the emacs manual page look like this:
EMACS(1)          USER COMMANDS          EMACS(1)
.
.
.
Sun Release 4.1     Last change: 1994 April 19          1
This is unnecessary information, and therefore we skip over it. The if statement checks for a string that does not contain any spaces. The previous while statement stores the current line in Perl's default variable, $_. A regular expression without a corresponding variable name matches against the value stored in $_.
if (/^([A-Z0-9_ ]+)$/) {
    $heading = $1;
    print "<H2>", $heading, "</H2>", "\n";
All manual pages consist of distinct headings such as "NAME," "SYNOPSIS," "DESCRIPTION," and "SEE ALSO," which are displayed as all capital letters. This conditional checks for such headings, stores them in the variable heading, and displays them as HTML level 2 headers. The heading is stored to be used later on.
} elsif (/^\s*$/) {
    $blank++;
    if ($blank < 2) {
        print;
    }
If the line consists entirely of whitespace, the subroutine increments the $blank variable. If the value of that variable is two or more, the line is skipped. In other words, only the first of a run of consecutive blank lines is printed.
} else {
    $blank = 0;

    s/&/&amp;/g  if (/&/);
    s/</&lt;/g   if (/</);
    s/>/&gt;/g   if (/>/);
The blank variable is reset to zero, since this block is executed only if the line contains non-whitespace characters. The substitutions replace the "&", "<", and ">" characters with their HTML entity equivalents ("&amp;", "&lt;", and "&gt;"), since these characters have a special meaning to the browser.
if (/((_\010\S)+)/) {
    s//<B>$1<\/B>/g;
    s/_\010//g;
}
All manual pages have text strings that are underlined for emphasis. The nroff utility creates an underlined effect by using the "_" and the "^H" (Control-H or \010) characters. Here is how the word "options" would be underlined:
_^Ho_^Hp_^Ht_^Hi_^Ho_^Hn_^Hs
The regular expression in the if statement searches for an underlined word and stores it in $1, as illustrated below.
This first substitution statement adds the <B> .. </B> tags to the string:
<B>_^Ho_^Hp_^Ht_^Hi_^Ho_^Hn_^Hs</B>
Finally, the "_^H" characters are removed to create:
<B>options</B>
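The same transformation is easy to try outside Perl; here is a hypothetical Python equivalent (the emphasize function name and the standalone form are mine, not the book's):

```python
import re

def emphasize(line):
    """Wrap nroff underline sequences (underscore + backspace + char) in <B> tags."""
    # First surround each maximal run of "_<backspace>X" pairs with <B>...</B>,
    # then strip the underscore/backspace pairs themselves.
    line = re.sub(r'((?:_\x08\S)+)', r'<B>\1</B>', line)
    return line.replace('_\x08', '')

print(emphasize('_\x08o_\x08p_\x08t_\x08i_\x08o_\x08n_\x08s'))  # <B>options</B>
```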
Let's modify the file in one more way before we start to display the information:
if ($heading =~ /ALSO/) {
    if (/([\w\+\-]+)\((\w+)\)/) {
        s//<A HREF="$script\?manpage=$1&section=$2">$1($2)<\/A>/g;
    }
}
Most manual pages contain a "SEE ALSO" heading under which related software applications are listed. Here is an example:
SEE ALSO
     X(1), xlsfonts(1), xterm(1), xrdb(1)
The regular expression stores the command name in $1 and the manpage section number in $2. Using this regular expression, we add a hypertext link back to this program for each of the listed applications. The query string contains the manual page title as well as the section number.
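Sketched in Python for clarity (the script path here is a made-up stand-in for the $script variable), the same rewrite looks like this:

```python
import re

SCRIPT = '/cgi-bin/man.cgi'   # hypothetical gateway URL

def link_see_also(line):
    """Turn each name(section) reference into a link back to the gateway."""
    return re.sub(
        r'([\w\+\-]+)\((\w+)\)',
        lambda m: '<A HREF="%s?manpage=%s&section=%s">%s(%s)</A>' % (
            SCRIPT, m.group(1), m.group(2), m.group(1), m.group(2)),
        line)

print(link_see_also('X(1), xterm(1)'))
```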
The program continues as follows:
        print;
    }
}

print "</PRE><HR>", "\n";
print "</BODY></HTML>", "\n";

close (MANUAL);
}
Finally, the modified line is displayed. After all the lines in the file (or pipe) are read, it is closed. Figure 9.3 shows the output produced by this application.
This particular gateway program concerned itself mostly with the output of the program it invoked (nroff). You will see in this chapter that you often have to expend equal effort (or even more effort) fashioning input in the way the existing program expects it. Those are the general tasks of gateways.
as a pin number. For example, if an LED was attached to “GPIO17” you would specify the pin number as 17 rather than 11:

leds = LEDBoard(5, 6, 13, 19, 26, pwm=True)
leds.value = (0.2, 0.4, 0.6, 0.8, 1.0)
LEDBarGraph
A collection of LEDs can be treated like a bar graph using LEDBarGraph. Note that values are essentially rounded to account for the fact that LEDs can only be on or off when pwm=False (the default). However, using LEDBarGraph with pwm=True allows more precise values using LED brightness.
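The original code samples did not survive extraction, so as an illustrative sketch only (this is my own function, not gpiozero's implementation), the mapping from a fractional value to individual LED states is roughly:

```python
def bar_graph_states(value, n_leds, pwm=False):
    """Approximate how a bar-graph value in 0..1 maps onto n_leds LEDs.

    With pwm=False each LED is rounded to fully on or off; with pwm=True
    the LED at the boundary is given a fractional brightness.
    """
    lit = value * n_leds
    states = []
    for i in range(n_leds):
        level = min(max(lit - i, 0.0), 1.0)   # portion of LED i that is lit
        states.append(level if pwm else round(level))
    return states

print(bar_graph_states(0.5, 4))            # [1, 1, 0, 0]
print(bar_graph_states(0.5, 5, pwm=True))  # [1.0, 1.0, 0.5, 0.0, 0.0]
```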
Travis build LED indicator
Use LEDs to indicate the status of a Travis build. A green light means the tests are passing, a red light means the build is broken:
from travispy import TravisPy
from gpiozero import LED
from gpiozero.tools import negated
from time import sleep
from signal import pause

def build_passed(repo=(
activity = LED(47)  # /sys/class/leds/led0
Final version is available here: Python source
Needs python and pygame installed.
They are available from here: Python PyGame
Didn't try it. Can has executable?!
It was confusing how the menu didn't use mouse controls, but the game required the mouse button.
The hit detection was strange because the ball would seem buried into the platform it was landing on, which prevented the next jump from working.
I liked how the ball would move from one side to the other through the paths created by the green things.
The mouse not working on the opening screen confused me and had to find directions to learn that the mouse made the ball jump.
No windows executable, skipping..
I also spent a while trying to figure out what key on the keyboard would make the ball jump. Consistency, people! Consistency!
The collision detection seemed to work pretty weirdly, and the game was really short. Some levels were evil when they started out and you had to pretty much instantly be ready to jump.
Having pygame troubles, might try after reinstall.
I second consistency issues - if it's a mouse game, make a mouse menu!
It's SO hard to judge theme in this compo. This game, like almost every other, is only minimal in the sense that most LD48 games are. You only had 48 hours! So you get a middle score there.
There are some real issues with the movement/collision. I fell through the first level floor a few times, and one time that caused me to win (I presume because I got to the right side before I got to the bottom of the screen), and as someone else said, sometimes you can't jump.
Need .exe, can't install pygame
Need an exe to rate.
restart at beginning of all levels made it so that I only attempted getting past level 3 a few times before giving up. As others have mentioned, the controls and collision detection were a little dodgy which lead to tough gameplay. Level 3 was rough in that you have very little reaction time till death. :)
Exe didn't work.
Traceback (most recent call last):
File "run_game.pyw", line 7, in <module>
import game
ImportError: No module named game
Decently fun. Hitting the big blobs things has weird results. Would have been cool to have more interesting things in later levels like conveyors, catapults, or moving platforms.
Argh. It's broken! So broken... Would probably have been quite fun if the physics were solid. It's just unfair when you're trying to time a jump and your ball suddenly drops through the floor or simply doesn't react to your button press. I had a nice first impression for a few seconds at the start, before I got acquainted with the collisions ;)
Traceback (most recent call last):
File "run_game.pyw", line 7, in <module>
ImportError: No module named game
Cool idea. The physics were a bit buggy but what bothered me the most was that when dropping down you really didn't have a clue whether it would be game over or whether there was a level below that you needed to drop on in order to go further. Also the menu should've been usable with the mouse if the game requires you to click, or else the game should've been playable with the keyboard too (which in this case would've been trivial to implement).
I liked the one clicking mouse interface.
Interesting, but there's something not quite right about the controls, as the ball doesn't always bounce on cue, which makes it rather annoying.
I fell through the floor on the first level O_O. I couldn't get through level 1 until figuring out you could jump in mid-air by mashing the mouse button (a bug?)
Another solution is to clear your Sent messages folder, or move messages from it to another folder. Outlook Express has a 2 GB limit on Sent.dbx; if the file reaches that size, it can start sending emails all over again.
#3 was the correct answer for me! Thanks a million.
Paul,
could you solve your issue with Outlook? And if so, what did the trick? Please keep us updated. Thank you!
Use Thunderbird instead.
Disable your antivirus, or its mail scanning option.
Find sent mails in the format of .dbx files, and delete them
Enable viewing hidden files from the folder options.
Uncheck "break apart messages" in the advanced account properties.
1) Maybe you have malware on your PC.
In your OE, go to Tools > Accounts > Mail tab > double-click the account name > Advanced tab. Uncheck the box marked "Break apart messages....."
Import OE emails into Microsoft Outlook or another email client. Open "My Computer", go to your Documents and Settings folder, be sure that you can view hidden files, select your user profile, then Local Settings, Application Data, Identities. You will see a folder with a series of alphanumeric characters; once you delete it, all email will be lost if you have not transferred it to another email client. Delete this folder. Close the window. Open Outlook Express and recreate your email account, then import your messages back in and you are done.
One of the great things about working with EOS is the ability to script with JSON-RPC. No longer does a network admin need to do screen scraping; you can get clean, machine-friendly data from the switch using CLI syntax you're familiar with. I'll outline a simple example using Python.
First add jsonrpclib to your Python environment:
sudo easy_install jsonrpclib
Now we can use that library to make scripting to EOS pretty easy:
from jsonrpclib import Server

switches = ["172.22.28.156", "172.22.28.157", "172.22.28.158"]
username = "admin"
password = "admin"
So far I've set up a list of switch IP addresses, and a username/password to use to log in to each of them. Now let's do something useful:
# Going through all the switch IP addresses listed above
for switch in switches:
    urlString = "https://{}:{}@{}/command-api".format(username, password, switch)  #1
    switchReq = Server( urlString )  #2

    # Display the current vlan list
    response = switchReq.runCmds( 1, ["show vlan"] )  #3
    print "Switch : " + switch + " VLANs: "
    print response[0]["vlans"].keys()  #4
Now I iterate through each of the switches in the list. On each iteration the script does the following:
1) Creates a string that defines the url to reach the API
2) Start creating a JSON-RPC request with the url
3) Finish building the JSON-RPC request and send the HTTP POST with the commands I want to run on the switch. The JSON response is stored in
response. The JSON-RPC library returns the “result” field automatically, so there is no need to parse through the boilerplate JSON-RPC reply.
4) Print out each of the VLANs configured on the switch. The response from the switch is a list, so first I grab the first (in this case only) item indexed by 0. This gives me a dictionary. Next I use the
vlans key to select an object from the dictionary. This returns another dictionary, which has the VLAN names as the keys (and details as the values). Since I want to print a list of all the VLANs, I use the
keys() method which returns a list of all the keys in the dictionary. Here is the JSON that is being parsed:
{ "jsonrpc": "2.0", "result": [ { "sourceDetail": "", "vlans": { "1": { "status": "active", "name": "default", "interfaces": { "Ethernet14": { "privatePromoted": false }, "Ethernet15": { "privatePromoted": false }, "Ethernet16": { "privatePromoted": false }, "Ethernet17": { "privatePromoted": false }, "Ethernet13": { "privatePromoted": false } }, "dynamic": false }, "51": { "status": "active", "name": "VLAN0051", "interfaces": { "Vxlan1": { "privatePromoted": false } }, "dynamic": false }, "61": { "status": "active", "name": "VLAN0061", "interfaces": { "Vxlan1": { "privatePromoted": false } }, "dynamic": false } } } ], "id": "CapiExplorer-123" }
Here’s the full script that also adds a few lines to configure a vlan:
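The embedded script itself did not survive extraction, but the request body that a VLAN-configuring call would POST follows the same eAPI runCmds shape. Here is a sketch; the VLAN id and name are my own placeholders, not from the original post:

```python
import json

def make_runcmds_payload(commands, version=1, req_id="demo-1"):
    """Build the JSON-RPC request body that eAPI's runCmds method expects."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": version, "cmds": commands, "format": "json"},
        "id": req_id,
    }

# A real script would POST this to https://user:pass@switch/command-api.
payload = make_runcmds_payload(["enable", "configure", "vlan 42", "name demo-vlan"])
print(json.dumps(payload, indent=2))
```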
Please help me in solving this question
Task 1. Create a class to store details of student, such as rollno, name, course joined, and fee paid so far. Assume courses are C# and ASP.NET with course fees being 2000 and 3000, respectively. (3 marks)
- Provide a constructor that takes rollno, name and course.
Provide the following methods:
Payment(amount)
feepaid += amount;
Print() : {to print rollno, name, course and feepaid}
Due amount if the student pays only 1000 as a first payment.
TotalFee -= feepaid
Declare an object S and call the above methods using
Student s = new Student(1, "John", "c#");
Task 2 - Complete the program below by adding a class Customer that uses overloaded constructors:
A. Customer(string firstName, string lastName)
B. public Customer(string firstName)
using System;

namespace CustomerApp
{
    public class Customer
    {
        // here you need to add class members (instance variables, constructors and methods)
    }
}

Here is the program where you test the Customer class:

using System;

namespace CustomerApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Customer customer1 = new Customer("Joe", "Black");
            Customer customer2 = new Customer("Jim");

            Console.WriteLine("{0} {1}", customer1.FirstName, customer1.LastName);
            Console.WriteLine("{0} {1}", customer2.FirstName, customer2.LastName);

            Console.ReadLine();
        }
    }
}
With GTK+ 4 in development, it is a good time to reflect about some best-practices to handle API breaks in a library, and providing a smooth transition for the developers who will want to port their code.
But this is not just about one library breaking its API. It’s about a set of related libraries all breaking their API at the same time. Like what will happen in the near future with (at least a subset of) the GNOME libraries in addition to GTK+.
Smooth transition, you say?
What am I implying by “smooth transition”, exactly? If you know the principles behind code refactoring, the goal should be obvious: doing small changes in the code, one step at a time, and – more importantly – being able to compile and test the code after each step. Not in one huge commit or a branch with a lot of un-testable commits.
So, how to achieve that?
Reducing API breaks to the minimum
When developing a non-trivial feature in a library, designing a good API is a hard problem. So often, once an API is released and marked as stable, we see some possible improvements several years later. So what is usually done is to add a new API (e.g. a new function), and deprecating an old one. For a new major version of the library, all the deprecated APIs are removed, to simplify the code. So far so good.
Note that a deprecated API still needs to work as advertised. In a lot of cases, we can just leave the code as-is. But in some other cases, the deprecated API needs to be re-implemented in terms of the new API, usually for a stateful API where the state is stored only wrt the new API.
And this is one case where library developers may be tempted to introduce the new API only in a new major version of the library, removing at the same time the old API to avoid the need to adapt the old API implementation. But please, if possible, don’t do that! Because an application would be forced to migrate to the new API at the same time as dealing with other API breaks, which we want to avoid.
So, ideally, a new major version of a library should only remove the deprecated API, not doing other API breaks. Or, at least, reducing to the minimum the list of the other, “real” API breaks.
Let’s look at another example: what if you want to change the signature of a function? For example adding or removing a parameter. This is an API break, right? So you might be tempted to defer that API break for the next major version. But there is another solution! Just add a new function, with a different name, and deprecate the first one. Coming up with a good name for the new function can be hard, but it should just be seen as the function “version 2”. So why not just add a “2” at the end of the function name? Like some Linux system calls: umount() -> umount2() or renameat() -> renameat2(), etc. I admit such names are a little ugly, but a developer can port a piece of code to the new function with one (or several) small, testable commit(s). The new major version of the library can rename the v2 function to the original name, since the function with the original name was deprecated and thus removed. It’s a small API break, but trivial to handle, it’s just renaming a function (a git grep or the compiler is your friend).
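The add-then-deprecate step can be sketched concretely. Python is used here for brevity, and frobnicate is a made-up function standing in for any library call:

```python
import warnings

def frobnicate2(data, flags, encoding):
    """The "version 2" call, carrying the parameter the original lacked."""
    return (data.encode(encoding), flags)

def frobnicate(data, flags):
    """Original API: kept working, but re-implemented on top of frobnicate2()."""
    warnings.warn("frobnicate() is deprecated; use frobnicate2()",
                  DeprecationWarning, stacklevel=2)
    return frobnicate2(data, flags, encoding="utf-8")

print(frobnicate2("hi", 0, encoding="utf-8"))  # (b'hi', 0)
```

In the next major version, frobnicate() is deleted and frobnicate2() can be renamed back to frobnicate(); for callers that is a trivial, mechanical rename.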
GTK+ timing and relation to other GNOME libraries
GTK+ 3.22 being the latest GTK+ 3 version came as a bit of a surprise. It was announced quite late during the GTK+/GNOME 3.20 -> 3.22 development cycle. I don't criticize the GTK+ project for doing that; the maintainers have good reasons behind that decision (experimenting with GSK, among other things). But – if we don't pay attention – this could have a subtle negative fallout on higher-level GNOME libraries.
Those higher-level libraries will need to be ported to GTK+ 4, which will require a fair amount of code changes, and might force to break in turn their API. So what will happen is that a new major version will also be released for those libraries, removing their own share of deprecated API, and doing other API breaks. Nothing abnormal so far.
If you are a maintainer of one of those higher-level libraries, you might have a list of things you want to improve in the API, some corners that you find a little ugly but you never took the time to add a better API. So you think, “now is a good time” since you’ll release a new major version. This is where it can become problematic.
Let’s say you released libfoo 3.22 in September. If you follow the new GTK+ numbering scheme, you’ll release libfoo 3.90 in March (if everything goes well). But remember, porting an application to libfoo 3.90/4.0 should be as smooth as possible. So instead of introducing the new API directly in libfoo 3.90 (and removing the old, ugly API at the same time), you should release one more version based on GTK+ 3: libfoo 3.24. To reduce the API delta between libfoo-3 and libfoo-4.
So the unusual thing about this development cycle is that, for some libraries, there will be two new versions in March (excluding the micro/patch versions). Or, alternatively, one new version released in the middle of the development cycle. That’s what will be done for GtkSourceView, at least (the first option), and I encourage other library developers to do the same if they are in the same situation (wanting to get rid of APIs which were not yet marked as deprecated in GNOME 3.22).
Porting, one library at a time
If each library maintainer has reduced to the minimum the real API breaks, this eases greatly the work to port an application (or higher-level library).
But consider the case where (1) multiple libraries all break their API at the same time, and (2) they are all based on the same main library (in our case GTK+), and (3) the new major versions of those other libraries all depend on the new major version of the main library (in our case, libfoo 3.90/4.0 can be used only with GTK+ 3.90/4.0, not with GTK+ 3.22). Then… it's again a mess to port an application – except with the following good practice that I will now describe!
The problem is easy but must be done in a well-defined order. So imagine that libfoo 3.24 is ready to be released (you can either release it directly, or create a branch and wait March to do the release, to follow the GNOME release schedule). What are the next steps?
- Do not port libfoo to GTK+ 3.89/3.90 directly, stay at GTK+ 3.22.
- Bump the major version of libfoo, making it parallel-installable with previous major versions.
- Remove the deprecated API and then release libfoo 3.89.1 (development version). With a git tag and a tarball.
- Do the (hopefully few) other API breaks and then release libfoo 3.89.2. If there are many API breaks, more than one release can be done for this step.
- Port to GTK+ 3.89/3.90 for the subsequent releases (which may force other API breaks in libfoo).
The same for libbar.
Then, to port an application:
- Make sure that the application doesn’t use any deprecated API (look at compilation warnings).
- Test against libfoo 3.89.1.
- Port to libfoo 3.89.2.
- Test against libbar 3.89.1.
- Port to libbar 3.89.2.
- […]
- Port to GTK+ 3.89/3.90/…/4.0.
This results in smaller and testable commits. You can compile the code, run the unit tests, run other small interactive/GUI tests, and run the final executable. All of that, in finer-grained steps. It is not hard to do, provided that each library maintainer has followed the above steps in the good order, with the git tags and tarballs so that application developers can compile the intermediate versions. Alongside a comprehensive (and comprehensible) porting guide, of course.
For a practical example, see how it is done in GtkSourceView: Transition to GtkSourceView 4. (It is slightly more complicated, because we will change the namespace of the code from
GtkSource to
Gsv, to stop stomping on the
Gtk namespace).
And you, what is your list of library development best-practices when it comes to API breaks?
PS: This blog post doesn’t really touch on the subject of how to design a good API in the first place, to avoid the need to break it. It will maybe be the subject of a future blog post. In the meantime, this LWN article (Designing better kernel ABIs) is interesting. But for a user-space library, there is more freedom: making a new major version parallel-installable (even every six months, if needed, like it is done in the Gtef library that can serve as an incubator for GtkSourceView). Writing small copylibs/git submodules before integrating the feature to a shared library. And a few other ways. With a list of book references that help designing an Object-Oriented code and API.
Very interesting post. Unfortunately we still have applications that have not been ported from the 2.x era to 3.x. Any advice on how to handle those? Port to 3 then to 4? Skip 3 and try to go to 4 directly?
It is recommended to first port to gtk3, and then to gtk4 (once released as stable). A gtk2 application probably uses APIs that have been deprecated during gtk3 (like GtkUIManager, GtkAction, stock icons, etc). Those APIs are still present in gtk3, but have been removed in gtk4. So it’s easier to port the application first to gtk3 but by using a lot of deprecated API. Then port to the new gtk3 APIs (GAction, GMenu, for example). Then, when the application doesn’t use any deprecated API from gtk3, try to port to gtk4.
That’s what the GTK+ porting guide recommends:
But for such an application that still uses gtk2 today, I would recommend to wait GTK+ 4.0, the stable version, not 3.90, 3.92 etc.
What happens when developers can’t keep up since there is a new major stable version of GTK+ every two years? Make users install dozens of GTK+ versions in parallel?
Yes.
How about you explain to people to just use semver?
From semver.org:
Except you are trying to achieve semver rules with just x.y instead of x.y.z, abusing z for minor increments here and there (for lack of proper rules on y?). An API break in semver is in fact really simple:
given 1.0.0 needs an API break, you release a 1.1.0 with the API to be broken marked as deprecated and the new API being added. Then a release later you release a 2.0.0 with the deprecated API removed and the new API the same as 1.1.0.
In your example: 3.22.0 which needs an API break becomes 3.23.0 with the API to be broken marked as deprecated and the new API being added. And then 4.0.0 with the deprecated API removed and the new API the same as 3.23.0.
In the meantime when you have (security) bugs you just increment the z of x.y.z. For example if you found a (security) bug in 3.22.0 and it got propagated to 3.23.0 and 4.0.0, and you want to fix all three those releases, then you’ll make a 3.22.1 where you JUST fixed the bug (you DID NOT change its API), you release a 3.23.0 where you JUST fixed the bug (you DID NOT change its API) and you release a 4.0.1 where you JUST fixed the bug (you DID NOT change its API).
The value of this is that our awesome packagers can make dependency rules for the packages that use our libraries for all three kinds of versions.
They can say, for example: 3.[>=22].[>0] to ensure that they get a backward compatible release of 3.23.0 that DOES NOT have the security bug. And both 3.22.1 and 3.23.1 can be selected from the package database. They can also say, for example, 4.[>=0].[>0] to get the release with API 4.0.0 that DOES NOT have the security bug.
Of course is it hard to use sensible standards, like semver.org. It’s much more easy to use not invented here syndromes.
Meanwhile the rest of the world just does semver.org.
Instead of the -alpha, -beta etc suffixes, GNOME has the difference between even and odd minor versions. Other than that, it’s true that semver.org is a good reference, and it can be applied to GNOME.
But real API breaks do happen (other than removing deprecated API), an example that I have given in the blog post is to rename foo2() -> foo(), just to get rid of the temporarily ugly name. Another example that happened in GtkSourceView is to make a GObject property construct-only; in theory another property could be created, and the first one deprecated/removed, but the API would look strange with only the new property name (because it’s hard to come up with a good name when the obvious one is already taken).
This blog post explains a little more things than semver.org wrt API breaks, especially when multiple related libraries are involved. GTK+ releases a new major version -> this has an impact on higher-level libraries, not just on applications.
I meant “you release a 3.23.1 where you JUST fixed the bug (you DID NOT change its API) and” instead of 3.23.0 for that security bug. Argh. You get the point :-)
import "golang.org/x/exp/ebnf"
Package ebnf is a library for EBNF grammars. The input is text ([]byte) satisfying the following grammar (represented itself in EBNF):
Production  = name "=" [ Expression ] "." .
Expression  = Alternative { "|" Alternative } .
Alternative = Term { Term } .
Term        = name | token [ "…" token ] | Group | Option | Repetition .
Group       = "(" Expression ")" .
Option      = "[" Expression "]" .
Repetition  = "{" Expression "}" .
A name is a Go identifier, a token is a Go string, and comments and white space follow the same rules as for the Go language. Production names starting with an uppercase Unicode letter denote non-terminal productions (i.e., productions which allow white-space and comments between tokens); all other production names denote lexical productions.
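As an illustration (a toy grammar of my own, not one shipped with the package), input in this form is accepted:

```
Greeting  = greeting { "," greeting } "." .
greeting  = "hello" | "hi" .
```

Here Greeting (capitalized) is a non-terminal production, while greeting is a lexical production that refers only to tokens.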
Verify checks that:
- all productions used are defined
- all productions defined are used when beginning at start
- lexical productions refer only to other lexical productions
Position information is interpreted relative to the file set fset.
type Alternative []Expression // x | y | z
An Alternative node represents a non-empty list of alternative expressions.
func (x Alternative) Pos() scanner.Position
A Bad node stands for pieces of source code that lead to a parse error.
type Expression interface {
    // Pos is the position of the first character of the syntactic construct
    Pos() scanner.Position
}
An Expression node represents a production expression.
type Grammar map[string]*Production
A Grammar is a set of EBNF productions. The map is indexed by production name.
Parse parses a set of EBNF productions from source src. It returns a set of productions. Errors are reported for incorrect syntax and if a production is declared more than once; the filename is used only for error positions.
type Group struct {
    Lparen scanner.Position
    Body   Expression // (body)
}
A Group node represents a grouped expression.
A Name node represents a production name.
type Option struct {
    Lbrack scanner.Position
    Body   Expression // [body]
}
An Option node represents an optional expression.
type Production struct { Name *Name Expr Expression }
A Production node represents an EBNF production.
func (x *Production) Pos() scanner.Position
A List node represents a range of characters.
type Repetition struct {
    Lbrace scanner.Position
    Body   Expression // {body}
}
A Repetition node represents a repeated expression.
func (x *Repetition) Pos() scanner.Position
type Sequence []Expression // x y z
A Sequence node represents a non-empty list of sequential expressions.
A Token node represents a literal.
import "golang.org/x/perf/storage/app"
Package app implements the performance data storage server. Combine an App with a database and filesystem to get an HTTP server.
app.go local.go query.go upload.go
ErrResponseWritten can be returned by App.Auth to abort the normal /upload handling.
type App struct {
    DB *db.DB
    FS fs.FS

    // Auth obtains the username for the request.
    // If necessary, it can write its own response (e.g. a
    // redirect) and return ErrResponseWritten.
    Auth func(http.ResponseWriter, *http.Request) (string, error)

    // ViewURLBase will be used to construct a URL to return as
    // "viewurl" in the response from /upload. If it is non-empty,
    // the upload ID will be appended to ViewURLBase.
    ViewURLBase string

    // BaseDir is the directory containing the "template" directory.
    // If empty, the current directory will be used.
    BaseDir string
}
App manages the storage server logic. Construct an App instance using a literal with DB and FS objects and call RegisterOnMux to connect it with an HTTP server.
RegisterOnMux registers the app's URLs on mux.
6.10. Numpy Pandas quiz¶
import pandas as pd
import numpy as np
For the duration of this quiz, assume that pandas has been imported as
pd and numpy as
np, as in the cell above.
names2000 = pd.read_csv('names/yob2000.txt',names=['name','sex','births'])
Next assume that
names2000 is the result of the above read command.
6.10.1. Selecting columns and rows¶
In the next cell, write down what type of Python object
names2000 is
after the cell above has been executed:
[1]:
In the next cell, write an expression selecting the
sex column of
names2000:
[2]:
In the next cell write an expression that retrieves the fourth through
the sixth rows of the
births column of
names2000 (keeping in mind
that the second row is indexed 1):
[3]:
6.10.2. Selecting multiple columns¶
What if we just want to know the names and the birth counts, but not the gender? Pandas makes it really easy to select a subset of the columns. Write an expression that returns the subtable of the names2000 dataframe that contains just the name and births columns:
[4]:
When you executed the expression that showed you the subtable, it just showed you a summary. Write an expression that just returns the first 18 rows of the subtable:
[5]:
6.10.3. Numpy
Assume the following code has been executed:
import numpy as np
x = np.array([4,3,1,0])
y = np.arange(5)
z = 2 * x
Write expressions in the next cell to retrieve 0 from x, 4 from y, and 6 from z:
[6]:
In the next cell, write an expression that generates a 3 by 4 array filled with zeros, and another that generates a 3 by 1 array filled with ones:
[7]:
In the next cell, write an expression that uses an assignment to a slice to make all the even values in a be 1. Attention: This can be done more easily in numpy than it can in normal Python. See if you can do it the easy way:
[ ]: a = np.arange(1,5)
[8]:
In the next cell write an expression that produces an array containing result of adding 3 to each of the first 5 integers (1 - 5). There’s a hard way to do this and an easy way. The easy way uses elementwise operations:
[9]: | https://gawron.sdsu.edu/python_for_ss/course_core/book_draft/data/numpy_pandas_quiz.html | CC-MAIN-2018-09 | en | refinedweb |
I have spent most of last night and this afternoon working out how to implement a website for my local LAN that would enable use of my DVD writer from a remote host over a web interface. I need to provide a small web application that can be used to burn ISO images onto CDs or DVDs. The application should also verify the CD or DVD once it has been burnt.
Security
To start with I needed to find out how to control the CD or DVD burner from the website. There is the small problem of security here. I could not simply give the Apache user access to /dev/sr0 (the CD device) because then it is conceivable that anyone, or any rogue application, might be able to use the Apache service to monkey with my device. I had to provide some kind of abstraction which could authenticate / authorise the request prior to performing it.
Python
Python is fast becoming my favourite scripting language for working in Linux. It has some very nice libraries that make things like network programming very easy. It also has great sys and os libraries that are useful for working with the native operating system and environment.
YAMI ( Yet Another Messaging Infrastructure )
YAMI makes the nuts and bolts of client server communication very easy. Read up on it here. It can be compiled with support for c/c++, java, tcl and python. I only bothered with support for python. I had to ensure that the yamipyc.so module was located in the default python search path for my machine so mod_python could find it.
Apache
It is a relatively simple procedure to add a python handler to a website. Look up mod_python. I will just say that you can configure mod_python in the Apache config to use a specific python file to handle all python requests. In my case I used the mod_python.publisher handler, which is a built-in handler that is geared for reading POST and GET vars as well as publishing responses. I could have done all this in PHP, but seeing as my plan was to use python for the application layer, I thought a connector to python was the easiest.
In the background the plan is to have a python server listening for connections on a specific port. The client will send it commands and it will respond appropriately. As the service is executed under a user with permissions to the CD device, and there is authentication and sanitisation going on in front of the device, we have extra security. I also plan to implement controls on the firewall to allow only one specific machine on my LAN to connect to it.
Flow
So here is how it should all work:
- Website posts form to python handler ( handler.py )
- Apache mod_python knows how to manage this.
- handler.py Authenticates the request
- handler.py establishes a client connection to server.py
- handler.py sends commands based on the post it has received to server.py
- server.py sanitises the commands and executes an os.system call to the device. OR it rejects the commands.
- server.py responds with status messages and results.
- handler.py receives the results or status messages and reports back to the website.
Here are the scripts: ( source code highlighting found here.)
handler.py
#!/usr/bin/env python
from mod_python import apache
from YAMI import *
import os
def eject(req):
    agent = Agent()
    agent.domainRegister('cdburner', '127.0.0.1', 12340, 2)
    agent.sendOneWay('cdburner', 'cd', 'eject', [''])
    del agent

def shutdown(req):
    agent = Agent()
    agent.domainRegister('cdburner', '127.0.0.1', 12340, 2)
    agent.sendOneWay('cdburner', 'cd', 'shutdown', [''])
    del agent
Server.py
#!/usr/bin/env python
from YAMI import *
import os
agent = Agent(12340)
agent.objectRegister('cd')
print 'server started'
while 1:
    im = agent.getIncoming('cd', 1)
    src = im.getSourceAddr()
    msgname = im.getMsgName()
    if msgname == 'eject':
        print 'Ejecting'
        os.system("eject")
    elif msgname == 'shutdown':
        print 'Shutting down'
        del im
        break
    del im
del agent
So, a request to the eject URL published by handler.py will call the eject function (this functionality is provided by mod_python.publisher) and the cd tray is ejected (so long as the server.py script is running).
A request to the shutdown URL will stop the server.py service altogether.
I will also be looking at logging and all sorts of other things.
Conclusion
I have looked at a very simple web layer to application layer messaging system provided by mod_python and the mod_python.publisher handler, and YAMI compiled with support for python. The thing to note here is that the web server can make calls to the application server ( which, incidentally can be on a different physical server ) and the application server responds to the client which then reports back to the website. All this without changing any security permissions of the underlying operating system.
Where's the third tier?
Well that's the database. Python has excellent support for databases. This application will be no different. I intend to use the python connector so that access to the database is managed by the application and not the web server. Unfortunately I only have one machine so all three tiers will be on the same physical hardware. I accept this blatant security risk because a) I am cheap and b) this is a LAN application only. It will have no access from the world wide web. I control that little nugget with a real firewall in front of my LAN. | http://david-latham.blogspot.com/2008/06/python-yami-3-tier.html | CC-MAIN-2018-09 | en | refinedweb |
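To sketch what that third tier might look like on the application side, here is a minimal example using Python's built-in sqlite3 module as a stand-in for whichever database connector the finished application ends up using. It is shown in modern Python 3, and the table and function names are made up purely for illustration.

```python
import sqlite3

# Stand-in third tier: an in-memory database owned by the application layer.
# The web tier would call these functions (e.g. via YAMI messages), never
# touching the database directly.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE burn_log (id INTEGER PRIMARY KEY, iso TEXT, status TEXT)')

def log_burn(iso_name, status):
    """Record the outcome of a burn request."""
    with conn:
        conn.execute('INSERT INTO burn_log (iso, status) VALUES (?, ?)',
                     (iso_name, status))

def burns():
    """Return all recorded burns as (iso, status) tuples."""
    return conn.execute('SELECT iso, status FROM burn_log ORDER BY id').fetchall()

log_burn('debian.iso', 'ok')
print(burns())  # [('debian.iso', 'ok')]
```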
A Tour of the C# Language
C# (pronounced "See Sharp") is a simple, modern, object-oriented, and type-safe programming language. C# has its roots in the C family of languages and will be immediately familiar to C, C++, Java, and JavaScript programmers. C# is an object-oriented language, and it further includes support for component-oriented programming; C# provides language constructs to support these concepts directly, making C# a very natural language in which to create and use software components.
Several C# features aid in the construction of robust and durable applications: garbage collection automatically reclaims memory occupied by unreachable objects, and exception handling provides a structured and extensible approach to error detection and recovery. The canonical "Hello, World" program (a class whose Main method calls Console.WriteLine("Hello, World")), stored in the file hello.cs, might be compiled using the command line:
csc hello.cs
which produces an executable assembly named hello.exe. The output produced by this application when it is run is:
Hello, World
Important
The csc command compiles for the full framework, and may not be available on all platforms.

The "Hello, World" program uses the Console class from the System namespace. This class is part of the standard class libraries, which, by default, are automatically referenced by the compiler.
There's a lot more to learn about C#. The following topics provide an overview of the elements of the C# language. These overviews will provide basic information about all elements of the language and give you the information necessary to dive deeper into elements of the C# language:
- Program Structure
- Learn the key organizational concepts in the C# language: programs, namespaces, types, members, and assemblies.
- Types and Variables
- Learn about value types, reference types, and variables in the C# language.
- Expressions
- Expressions are constructed from operands and operators. Expressions produce a value.
- Statements
- You use statements to express the actions of a program.
- Classes and objects
- Classes are the most fundamental of C#'s types. Objects are instances of a class. Classes are built using members, which are also covered in this topic.
- Structs
- Structs are data structures that, unlike classes, are value types.
- Arrays
- An array is a data structure that contains a number of variables that are accessed through computed indices.
- Interfaces
- An interface defines a contract that can be implemented by classes and structs.
- Enums
- An enum type is a distinct value type with a set of named constants.
- Delegates
- A delegate type represents references to methods with a particular parameter list and return type.
- Attributes
- Attributes enable programs to specify additional declarative information about types, members, and other entities. | https://docs.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/ | CC-MAIN-2018-09 | en | refinedweb |
Easy-Peasy Peripheral Interfacing with Pi, Python and Pmods!
The perfect combination for cooking up peripheral interfacing recipes in no time at all.
The Pmod — peripheral module — is an open specification standard by Digilent that is designed for interfacing peripherals with FPGA and microcontroller host platforms. Available in 6-pin and 12-pin versions, and with a characteristically compact form factor, they provide a neat solution to system prototyping, and many engineers own at least a small selection of Pmods.
Just as the diverse array of available Pmods makes it possible to quickly prototype a wealth of hardware configurations, the eminently approachable Python programming language and its bountiful software ecosystem make it possible to rapidly prototype applications.
Throw a Raspberry Pi into the mix also and you’ve got a powerful prototyping platform. However, there are a couple of things, on the hardware and software side, which would make integration and rapid prototyping just that bit easier. So let’s start with the former and enter the Pmod HAT!
DesignSpark Pmod HAT
The Pmod HAT (144-8419) has three 2x6-pin Pmod ports with support for I2C, SPI, UART and GPIO interfacing. It can be used with any model of Raspberry Pi that has a 40-pin GPIO connector, with power being supplied via either this or a barrel connector and external 5VDC power supply.
The ports are labelled as follows:
- JA: supports SPI and GPIO Pmods.
- JB: supports SPI and GPIO Pmods, plus 6-pin I2C Pmods on the bottom row.
- JC: supports UART and GPIO Pmods.
There are also two jumpers:
- JP1 & JP2: Enables pull-up resistors on JB2 I2C when shorted.
- JP3: Enables writing to the onboard EEPROM.
The EEPROM stores a device tree fragment which is used to identify the board and configure the O/S and drivers accordingly. For those not familiar with the device tree concept, it is a compiled database that is used to describe an ARM-based system and provides functionality similar to the BIOS of an Intel/AMD computer. This would only be modified with more advanced use cases.
So, now we have a convenient way of plumbing together the hardware that avoids having to use messy jumper wires, what about the software? Enter DesignSpark.Pmod!
DesignSpark.Pmod
DesignSpark.Pmod is a Python library that:
- Provides simple, consistent interfaces for supported Pmods
- Checks that Pmod and port capabilities match
- Checks for port usage conflicts
- Is provided with examples to get you started
Pin multiplexing is standard with many (most?) modern SoCs and it can bring great flexibility, but at the expense of having to do some initial setup to configure I/O pins for your particular use. Pmods, meanwhile, can be interfaced via one of a number of different methods, and when you combine these two things it means that you do need to take care not to, for example, connect SPI and GPIO Pmods to ports which share host I/O pins, since this would result in a setup conflict.
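Conceptually, the capability and conflict checking described above boils down to bookkeeping like the following sketch. This is illustrative only, not the actual DesignSpark.Pmod code; the port capability sets are taken from the HAT description earlier, and the sketch ignores the shared-pin detail.

```python
# Illustrative sketch of Pmod/port capability and conflict checking.
PORT_CAPS = {
    'JA': {'SPI', 'GPIO'},
    'JB': {'SPI', 'GPIO', 'I2C'},
    'JC': {'UART', 'GPIO'},
}

_in_use = {}

def create_pmod(interface, port):
    """Check that `interface` is supported on `port` and the port is free."""
    if interface not in PORT_CAPS.get(port, set()):
        raise ValueError(f"{port} does not support {interface}")
    if port in _in_use:
        raise ValueError(f"{port} already in use by a {_in_use[port]} Pmod")
    _in_use[port] = interface
    return (interface, port)

print(create_pmod('SPI', 'JB'))  # ('SPI', 'JB')
```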
What the library does is to check that a Pmod is supported by a particular port on the HAT and that this use would not be in conflict. The initial release also provides convenient interfaces for 6x Pmods and the library has been written in such a way that it can be extended to support more.
Installation
First we need to make sure that SPI is enabled and this can be done using raspi-config.
$ sudo raspi-config
Selecting:
- Option 5 - Interfacing
- P4 - SPI
- Enable → YES
Next, we need to install a few Raspbian build dependencies:
$ sudo apt-get update
$ sudo apt-get install python-pip python-dev libfreetype6-dev libjpeg-dev build-essential
Finally, we can use the Python package manager to install DesignSpark.Pmod and dependencies:
$ sudo -H pip install designspark.pmod
Note that the official docs can be found at:
This website should always be referred to for the latest documentation.
The associated PyPi project page and GitHub development repository are located at:
Interfacing with Pmods
At the time of writing six Pmods are supported by the library and next, we’ll take a quick look at these and basic methods for interfacing with them.
PmodAD1
PmodAD1 (134-6443) is a two channel 12-bit ADC that features Analog Devices’ AD7476A, with a sampling rate of up to 1 million samples per second, and a 6-pin SPI Pmod interface.
To read from this we simply need to import the library, create an object with the AD1 module on a suitable port, then use the readA1Volts() method to get an ADC reading. E.g.:
from DesignSpark.Pmod.HAT import createPmod
adc = createPmod('AD1','JBA')
volts = adc.readA1Volts()
print(volts)
In this example we are using port "JBA", which is the top row of the 2x6-pin JB connector.
Note that at the present time only ADC channel A1 is supported due to the way that SPI is configured when used with a Pi.
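For reference, readA1Volts() has to map the raw 12-bit sample onto a voltage. Assuming a 3.3 V reference (an assumption here; check the HAT documentation for the actual reference), the conversion is a simple linear scaling:

```python
def adc_counts_to_volts(raw, vref=3.3, bits=12):
    """Map a raw sample from an ideal `bits`-bit ADC onto volts.

    Note: the full-scale convention (dividing by 2**bits - 1 rather
    than 2**bits) varies between datasheets.
    """
    if not 0 <= raw < 2 ** bits:
        raise ValueError("raw sample out of range")
    return raw * vref / (2 ** bits - 1)

print(adc_counts_to_volts(4095))  # full-scale reading
```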
PmodHB3
The PmodHB3 (Digilent part no.410-069) is a 2A H-bridge circuit for DC motor drive up to 12V, with a 6-pin GPIO interface.
Once again we import the library and then to spin the motor in the forward direction simply:
motor = createPmod('HB3','JAA')
motor.forward(20)
The number passed to the forward method is the duty cycle. There are also methods to spin in the reverse direction, stop, change the PWM frequency and clean up.
PmodISNS20
Like the AD1, the Pmod ISNS20 (136-8069) is another 6-pin SPI Pmod, but this time a ±20A DC or AC input, high accuracy current sensor. After importing the library, to read from this we would:
isens = createPmod('ISNS20','JBA')
mA = isens.readMilliAmps()
print(mA)
Simple.
MIC3
By now you should be used to this and to read an integer value from the PmodMIC3 (Digilent part no.410-312) ADC we would import the library, following which:
mic = createPmod('MIC3','JBA')
int = mic.readIntegerValue()
print(int)
OLEDrgb
The PmodOLEDrgb (134-6481) is an organic RGB LED module with a 96×64 pixel display and that is capable of 16-bit colour resolution. It is the first 12-pin Pmod we will have encountered. It is also the first which will require a couple of additional libraries for use.
To draw “Hello, World!” in a bounding box we would simply:
from DesignSpark.Pmod.HAT import createPmod
from luma.core.render import canvas
from luma.oled.device import ssd1331

oled = createPmod('OLEDrgb','JA')
device = oled.getDevice()

with canvas(device) as draw:
    draw.rectangle(device.bounding_box, outline="white", fill="black")
    draw.text((16,20), "Hello, World!", fill="white")

while True:
    pass
The luma.core and luma.oled libraries provide a lot of really great functionality that can be used with this Pmod and checking out the associated documentation is highly recommended.
TC1
Finally, PmodTC1 (134-6476) is another 6-pin SPI Pmod, this time featuring a cold-junction thermocouple-to-digital converter designed for a classic K-Type thermocouple wire. The wire provided with the module has an impressive temperature range of -73°C to 482°C.
After importing the library to read from this we would:
therm = createPmod('TC1','JBA')
cel = therm.readCelcius()
print(cel)
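The library reports temperatures in Celsius; if your application needs other units, the conversion is trivial. For example, a tiny helper (not part of DesignSpark.Pmod):

```python
def celsius_to_fahrenheit(cel):
    """Convert a Celsius reading (e.g. from readCelcius()) to Fahrenheit."""
    return cel * 9.0 / 5.0 + 32.0

# Sanity check against the two fixed points of the scales.
print(celsius_to_fahrenheit(0))    # freezing point
print(celsius_to_fahrenheit(100))  # boiling point
```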
Complete basic examples for each of the above Pmods, along with a couple that are slightly more advanced — including an analog clock face example for the OLEDrgb module — are provided together with API documentation via Read the Docs.
Wrapping up
So there we have it, thanks to the DesignSpark Pmod HAT and supporting library, we can now interface Pmods and prototype applications using a Raspberry Pi and Python faster than ever before. It should also now be reasonably straightforward to add support for additional Pmods to the library and if you would like to contribute support for a new module get in touch.
December 2, 2017 13:43
Great article! Definitely investing the time to explore soon.... | https://www.rs-online.com/designspark/easy-peasy-peripheral-interfacing-with-pi-python-and-pmods | CC-MAIN-2018-09 | en | refinedweb |
My post introducing the .NET Micro Framework covered how to use the OutputPort class to interface to a single GPIO output pin as part of an example to blink a LED.
The matching class to interface to a GPIO input pin is, not too surprisingly, called the InputPort class. The InputPort class functions very similarly to the OutputPort class discussed last time; in fact they share a common base class.
The constructor for the InputPort class has a couple of additional parameters, in addition to the one which specifies which GPIO pin it should be connected to.
The first is a boolean parameter which enables a glitch filter. Mechanical switches can be “noisy”, meaning that a single press by the user could translate into multiple open and close events, which digital hardware can potentially detect. This problem is commonly referred to as Contact Bounce or Switch Debouncing. At this stage I have not currently managed to find out what technique the .NET Micro Framework utilises for glitch filtering, or even if this is device specific (I suspect it is).
The second parameter is of much more interest, since it deals with setting the type of internal pull up resistor present on the GPIO pin. It can have one of following values from the ResistorMode enumeration:
- ResistorMode.PullUp – A resistor internal to the CPU is connected between the GPIO pin and VCC, i.e. the pin is pulled up to the positive supply rail.
- ResistorMode.PullDown – A resistor internal to the CPU is connected between the GPIO pin and GND, i.e. the pin is pulled down to ground.
- ResistorMode.None – no internal resistor is enabled. In this mode if a pin is left unconnected, it could produce spurious readings due to noise induced into the pin.
The pull up and pull down resistor modes can be handy when interfacing to external hardware, in particular push buttons. By relying upon the internal pull up or pull down resistors, you can get by without requiring additional components, as shown in the following schematics.
It is important to note that if a push button is connected with a pull up resistor, its logic levels will be inverted. I.e. the GPIO pin will read a logic high (true) level when the push button is not pressed, and a logic low (false) level when the push button is pressed.
Code Sample
Here is a sample program which will write a message to the debug window each time a push button connected to the GPIO_Pin3 pin is pressed.
To reduce the amount of log messages written to the Visual Studio output window, we only sample the push button once every second. This means we do not need to enable the glitch filter because we are sampling it at a slow enough rate that it should not be a significant issue.
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using System.Threading;

namespace InputTestApplication
{
    public class Program
    {
        public static void Main()
        {
            // Monitor the "select" push button
            InputPort button = new InputPort(Cpu.Pin.GPIO_Pin3, false, Port.ResistorMode.PullDown);

            while (true)
            {
                // Test the state of the button
                if (button.Read())
                    Debug.Print("The button is pressed");

                // Wait 1 second before sampling the button again
                Thread.Sleep(1000);
            }
        }
    }
}
NOTE: The GPIO pin used for this sample program has been selected for use with the .NET Micro Framework Sample Emulator. If you attempt to run this application on your own .NET Micro Framework module, you may need to adjust the GPIO signal utilised to suit your hardware.
Push Buttons within the Emulator
The “Sample Emulator” released with the .NET Micro Framework SDK has 5 push buttons “wired up” into a D-PAD configuration.
An interesting aspect to the emulated hardware is that the push buttons can be made to act as if they were wired up with pull up or pull down resistors (as outlined above) depending upon the state of the ResistorMode parameter passed into the constructor of the InputPort instance which accesses them. Typical hardware wouldn’t have this flexibility, with the incorrect ResistorMode choice potentially rendering a push button unreadable.
The mapping of emulator buttons to GPIO pins for the Sample Emulator is as follows:
- Select – GPIO_Pin3
- Up – GPIO_Pin2
- Down – GPIO_Pin4
- Left – GPIO_Pin0
- Right – GPIO_Pin1
The code sample provided in this blog posting constantly polls the state of the push button. This is not power efficient. It is better to request that the hardware notifies you whenever the GPIO pin changes state. Next time I will discuss how you can use the InterruptPort class to achieve this.
Hi,
I like your article. I am working on something similar to what you have mentioned above in your article.
I am basically using a TMS320DM355 TI chip and have connected it to a display. Now I want a GPIO button press to change the brightness of the display. Can you outline how to go about this?
Thanks, any help will be appreciated. | http://www.christec.co.nz/blog/archives/56 | CC-MAIN-2018-09 | en | refinedweb |
Selfie Plugin
Dependency:
compile "org.grails.plugins:selfie:0.6.6"
Summary
Selfie is a Grails Image / File Upload Plugin. Use Selfie to attach files to your domain models, upload to a CDN, validate content, produce thumbnails.
Installation
repositories {
    mavenRepo ''
}
plugins {
    compile ':selfie:0.3.0'
}
Description
Features
- Domain Attachment
- CDN Storage Providers (via Karman)
- Image Resizing (imgscalr)
- Content Type Validation
- GORM Bindings / Hibernate User Types Support
Configuration

Selfie utilizes karman for dealing with asset storage. Karman is a standardized interface for sending files up to CDNs as well as local file stores. It is also capable of serving local files. In order to upload files, we must first designate a storage provider for these files. This can be done in the `attachmentOptions` static map in each GORM domain in which you have an Attachment, or it can be defined in your `Config.groovy`:

grails {
    plugin {
        selfie {
            storage {
                bucket = 'uploads'
                providerOptions {
                    provider = 'local' // Switch to s3 if you wish to use s3 and install the karman-aws plugin
                    basePath = 'storage'
                    baseUrl = ''
                    //accessKey = "KEY" //Used for S3 Provider
                    //secretKey = "KEY" //Used for S3 Provider
                }
            }
        }
    }
}

The `providerOptions` section will pass straight through to karman's `StorageProvider.create()` factory. The `provider` entry specifies the storage provider to use, while the other options are specific to each provider. In the above example we are using the karman local storage provider. This is all well and good, but we also need to be able to serve these files from a URL. Depending on your environment this can get a bit tricky. One option is to use nginx to serve the directory and point the `baseUrl` to the appropriate endpoint. Another option is to use the built-in endpoint provided by the karman plugin:
grails {
    plugin {
        karman {
            serveLocalStorage = true
            serveLocalMapping = 'storage' // means /storage is base path
            storagePath = 'storage'
        }
    }
}

This will provide access to files within the `storage` folder via the `storage` URL mapping.
Usage

The plugin uses an embedded GORM domain class to provide an elegant DSL for uploading and attaching files to your domains. So make sure you define your `static embedded` property when using the Attachment class. Example DSL:

import com.bertramlabs.plugins.selfie.Attachment
import com.bertramlabs.plugins.selfie.AttachmentUserType

class Book {
    String name
    Attachment photo

    static attachmentOptions = [
        photo: [
            styles: [
                thumb: [width: 50, height: 50, mode: 'fit'],
                medium: [width: 250, height: 250, mode: 'scale']
            ]
        ]
    ]

    static embedded = ['photo'] //required

    static mapping = {
    }

    static constraints = {
        photo contentType: ['png','jpg'], fileSize: 1024*1024 // 1mb
    }
}

Uploading files could not be simpler. Simply use a multipart form and upload a file:
When you bind your params object to your GORM model, the file will automatically be uploaded upon save and processed:
<g:uploadForm <g:textField<br/> <input type="file" name="photo" /><br/> <g:submitButton<br/> </g:uploadForm>
class PhotoController {
    def upload() {
        def photo = new Photo(params)
        if(!photo.save()) {
            println "Error Saving! ${photo.errors.allErrors}"
        }
        redirect view: "index"
    }
}
Tech Off Thread (4 posts)
HTTP Download
Hi.
Use
using System.Net;
...
WebClient wc = new WebClient();
wc.DownloadFile(httpHostName, localFileName);
Under the hood this is using the same mechanism to download the files as Internet Explorer (WinInet/WinHttp).
And in case WebClient is not enough (rarely the case), you can always resort to HttpWebRequest.
also you may need to check on buffer sizes ..... i bet the browser does some tuning that your app does not do.
for example if the http stream is coming back with 4 meg chunks then the client buffer size should be some multiple of that value so that it does not have to alloc more storage while downloading.
stuff like that can make a huge difference in perf. | https://channel9.msdn.com/Forums/TechOff/HTTP-Download | CC-MAIN-2018-09 | en | refinedweb |
could you please help me out?
I'm using Joomla 1.5.9, mWA V0.9 and I couldn't get ScribeFire to work. The automatic Account Assistant worked great. The API path is correct. But after the assistant has finished I get the message: "Server answered incorrect". If I try to connect to the server in Python:
Code:
import xmlrpclib
client=xmlrpclib.ServerProxy("")
client.metaWeblog.getPost("1","admin","password")
I receive an 802 Access Denied error.
Do I have to change user settings in Joomla to enable XML-RPC access?
Window size adjustment on Smartphone (Xperia Z2)
Hello there,
Since yesterday I have my first smartphone (I know, hard to believe). It's an Xperia Z2 running Android.
With the help of the mighty internet I already could set up the toolchain to deploy Apps from desktop PC to Smartphone.
I have trouble adjusting the proportions of the widgets. I really mean proportions, not absolute size, since I know that would depend on resolution.
The proportions are completely different compared to when running on the desktop PC.
I guess it's something fundamental I'm not considering. Any suggestions?
best regards,
Moe
PS: I had no idea which keywords would be useful to search for...
Hi,
Can you show an image of what happens ?
- Flaming Moe
How can i load a photo? Or is it only possible via a link?
best regards,
Moe
- Flaming Moe
Ok, I'm home and uploaded the photos to Dropbox.
As you can see in the picture, the red background of the UI only exists under the slider and numbers. Also the black rectangle at the top always has the same size and position, no matter which values I use in setGeometry();
The "App" itself
The ui
The MainFile:
#include <QTimer>
#include <QWidget>
#include <QApplication>
#include "acc.h"
#include <QVBoxLayout>
#include <QScreen>
#include <QGuiApplication>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QScreen* screen = QApplication::screens().at(0);

    QWidget w;
    QWidget* testWidget = new QWidget;
    testWidget->setGeometry(0, 0, 50, 20);
    testWidget->setStyleSheet("background-color:black");
    //w.setBaseSize(100, 100);

    QTimer* timer = new QTimer();
    ACC* acc = new ACC();
    acc->screen = screen;

    QVBoxLayout* layout;
    layout = new QVBoxLayout(&w);
    layout->addWidget(testWidget);
    layout->addWidget(acc);

    QObject::connect(timer, SIGNAL(timeout()), acc, SLOT(triggerCap()));
    timer->start(10);

    w.show();//acc->show();
    a.exec();
    return 1;
}
[edit: added missing coding tags: 3 ` before and after SGaist]
You put testWidget in a layout, that's why it won't have the geometry you set on it.
How are you setting the red background ?
The background is set by a stylesheet "background-color:red" in the creator.
Also when I set this by code, only the elements are underlaid with red.
Maybe it's some kind of property I need to set? I parsed them, but I'm not sure.
In between i made by code a widget with yellow background which contains 3 numbers.
This works out. So i think it has something to do with the Widget Designer, some corresponding properties...
Did you set the red background on each separated widget or on the containing widget ?
On the containing widget.
I solved it with a workaround. That is, I don't know if it's really a workaround; the main thing is, it works.
In the QtDesigner I put a normal blank widget on the form first, and on top of that widget the slider and numbers. Now it's solidly painted.
First i tried something with "setBackgroundRole(QPalette::ColorRole role)"
I think I may have understood your problem. You didn't put your separated widgets in a layout ?
Add an Auto-Incrementing Build-Number to Your Build Process
When building software it's often useful to give each iteration of your build process a unique number. Many IDEs and RAD tools do this for you automatically. If yours doesn't and you're using a make file to build your code you can add an auto-incrementing build number to your project with a few simple changes to your make file.
The mechanism presented here does not need to modify your source code at all; it uses linker symbols to add the build number to your program. Note however that you will probably want to modify your source code to display the build number, but that's not strictly necessary.
Let's start with the following simple make file for building a program:
# Makefile

OBJECTS=bnum.o

a.out: $(OBJECTS)
	$(CC) $(LDFLAGS) -o $@ $(OBJECTS)
To add the build number to the make file we set the variable BUILD_NUMBER_FILE to the name of a file that will contain our build number value. Then we add BUILD_NUMBER_FILE to the dependencies for a.out, add BUILD_NUMBER_LDFLAGS to the flags used when linking the program, and finally include the file buildnumber.mak at the end of the make file. The converted make file looks like:
# Makefile

# Name of text file containing build number.
BUILD_NUMBER_FILE=build-number.txt

OBJECTS=bnum.o

a.out: $(OBJECTS) $(BUILD_NUMBER_FILE)
	$(CC) $(LDFLAGS) $(BUILD_NUMBER_LDFLAGS) -o $@ $(OBJECTS)

# Include build number rules.
include buildnumber.mak
The included file buildnumber.mak looks like:
# Create an auto-incrementing build number.

BUILD_NUMBER_LDFLAGS  = -Xlinker --defsym -Xlinker __BUILD_DATE=$$(date +'%Y%m%d')
BUILD_NUMBER_LDFLAGS += -Xlinker --defsym -Xlinker __BUILD_NUMBER=$$(cat $(BUILD_NUMBER_FILE))

# Build number file. Increment if any object file changes.
$(BUILD_NUMBER_FILE): $(OBJECTS)
	@if ! test -f $(BUILD_NUMBER_FILE); then echo 0 > $(BUILD_NUMBER_FILE); fi
	@echo $$(($$(cat $(BUILD_NUMBER_FILE)) + 1)) > $(BUILD_NUMBER_FILE)
The linker flags cause the linker to create two symbols: __BUILD_NUMBER and __BUILD_DATE which will be equal to the build number and the build-date respectively. The build-date is set using the standard date command. The build number is simply the value contained in the build number file, which is extracted using the standard cat command.
The make rule for the build number file depends on all the project object files and if any of them changes the build number is incremented by executing the following commands:
if ! test -f build-number.txt; then echo 0 > build-number.txt; fi
echo $(($(cat build-number.txt) + 1)) > build-number.txt
The test program bnum.c merely writes out the build number and build-date:
#include <stdio.h>

extern char __BUILD_DATE;
extern char __BUILD_NUMBER;

int main(void)
{
    printf("Build date  : %lu\n", (unsigned long) &__BUILD_DATE);
    printf("Build number: %lu\n", (unsigned long) &__BUILD_NUMBER);
    return 0;
}
A sample of iterative builds is shown below:
$: 24
$: 25
One caveat to an auto-incrementing build number is that just because you have two versions of a program with different build numbers it does not mean they are functionally different. If you routinely run make clean; make all just for fun, you'll get a new build number but nothing will have changed.
Mitch Frazier is an Associate Editor for Linux Journal.
I'm working on a large project with multiple modules. We would like to have an incrementing build number for each module but the individual modules don't do any linking in the make file so I'm having trouble setting this up. Do you have any advice to be able to use this without linking or to link from the main make file and increment the individual modules build number when they are built?
Perhaps CPP
You could do a similar thing using the -D option of the C compiler, e.g.
and then have an include file something like this that gets included in every file:
That would give each object file a build number and date. You could also add the file name and put all the version numbers into a separate data section so that all of the build info is in one place.
Consider the following structure:
Change the include file to:
Now you can modify your link script so that all the ".build_info" sections get put together in one place. Add a symbol at the start of the section so that you can obtain the address of the section. Also add a zero at the end of the section so you have a sentinel. With this you can now get a pointer to an array of all the build_info_t structures and print them out if you wanted to know when things were built:
Note, I haven't actually tested the above so I may have some syntax wrong or may have overlooked something important. For one thing check to make sure that optimization doesn't remove the entire structure since it's static and unused.
What does the -D option do?
What does the -D option do? I'm using an older gcc (4.1.2) and can't find any documentation with the -D option..
-D Option
The -D option does the same thing as #define. It's documented on the gcc man page. Every C compiler I've ever seen has it, and it's one of the most commonly used options when compiling C code.
Valid for a shared library
Hello,
Imagine the same solution for a shared library. I tested it and it does not work. It seems that it is unable to get the address of the symbol...
when doing nm mylib.so, I have my symbol __BUILD_LIBVER but no way to get it.
Any idea to solve this issue?
Best Regards,
Pascal
Shared Library
When you say "It seems that it is unable to get the address of the symbol", what do you mean? Does the linking fail or the linking succeeds but the value is wrong?
Have you tried adding a function to your library that returns the value? Does that work?
Thanks for replying... I have headache about that! :)
First I want to say your solution is my preferred one, compared to a script updating a header file with #define BUILD_NUM XXXX, because changing the header file changes the dependencies and then everything needs to be compiled again for no reason... In a complex makefile, I find that difficult to do. Anyway, it's another discussion...
First I have to say I tested your solution on a standard binary. It works fine.
But I'm in charge of a "big" piece of software using lots of shared libraries. And I would like to use your solution in every artifact I build (binaries, shared libraries).
So I did a basic test project aiming at verifying it's ok...
and here are the results:
Best Regards
Pascal
Shared Library
You say "random value" in:
Is it random in that it comes out differently each time you run it or is it always the same but wrong? (in which case it's not random)
I suspect the value you're getting is the load address of the shared library plus the value of the symbol. If you take your value "7610387" and convert it to hex you get "742013" which is "742000" plus your version number.
End of story
You are right.
I created a symbol with a fixed value as a reference and I can retrieve my version from shared libraries. Maybe not the most elegant way but it works!
Thank you so much.
Can you pls elaborate on what you mean by "created a symbol with a fixed value as a reference". I need the value *without* the so load address.
thanks
Fixed Symbol
Use the same method to create a symbol whose value is always the same (zero probably works best), for example:
Now in your code when you want the value of __BUILD_NUMBER, use the value of __LOAD_OFFSET to "remove" the load offset of the shared library, e.g.
I personally don't like this
I've done a fair amount of work in trying to verify what source object code came from, and guaranteeing that a build of that same source at another point in time will produce identical output. This is largely of interest to a distribution that wants to make sure their source packages recompile as expected.
This sort of "feature" is quite annoying in doing that. Lots of packages do similar things to this. There is even a GNU cpp compiler macro that you can use (I forget what it is).
The problem is that identical source input starts to produce binary output that differs every time a second ticks off the clock.
just my 2 cents.
Avoid a circular reference
This is a nice tip.
However, make will complain about a circular reference.
Instead of:
OBJECTS=bnum.o
You may want to do this:
SOURCES=bnum.c
OBJECTS=$(SOURCES:.c=.o)
Then - (note dependency change)
$(BUILD_NUMBER_FILE): $(SOURCES)
@if ! test -f $(BUILD_NUMBER_FILE); then echo 0 > $(BUILD_NUMBER_FILE); fi
@echo $$(($$(cat $(BUILD_NUMBER_FILE)) + 1)) > $(BUILD_NUMBER_FILE)
This will avoid a circular reference. However, you only get a build number increment for source file changes.
Doesn't seem to complain here
I don't get any complaints from make. Did you actually test this and get a complaint from make?
Seems like BUILD_NUMBER_FILE should depend on the same thing that linking depends on, so that every time you re-link you get a new version number. Not relevant for this example, but in more complex cases, depending on the sources would miss changes in header files.
FcStrStr(3)
Updated: 16 April 2007
NAME
FcStrStr - locate UTF-8 substring
SYNOPSIS
#include <fontconfig.h>
FcChar8 * FcStrStr (const char *s1, const char *s2);
DESCRIPTION
Returns the location of s2 in s1. Returns NULL if s2 is not present in s1. This test will operate properly with UTF8 encoded strings, although it does not check for well formed strings.
VERSION
Fontconfig version 2.4.2
bower install angular-lazy-for
lazyFor
lazyFor is an Angular 2+ directive that can be used in place of ngFor. The main difference is that lazyFor will only render items when they are visible in the parent element. So as a user scrolls, items that are no longer visible will be removed from the DOM and new items will be rendered to the DOM.
Sample Usage
Plunker Demo
Install with
npm install --save angular-lazy-for
app.module.ts
import {NgModule} from '@angular/core';
import {LazyForModule} from 'angular-lazy-for';

@NgModule({
  declarations: [/*...*/],
  imports: [
    //...
    LazyForModule
  ],
  providers: [/*...*/],
  bootstrap: [/*...*/]
})
export class AppModule { }
Template Input
<ul style="height: 30px; overflow-y: auto">
  <li *lazyFor="let item of items">
    {{item}}
  </li>
</ul>
DOM Output
<ul>
  <li style="height: 20px"></li>
  <li>3</li>
  <li>4</li>
  <li>5</li>
  <li style="height: 10px"></li>
</ul>
When to use lazyFor
- When you know the size of the iterable and you only want to create DOM elements for visible items
- Fix performance issues with page load time
- Fix change detection performance issues
When not to use lazyFor
- Not meant to replace ngFor in all cases. Only use lazyFor if you have performance issues
- Not an infinite scroll. Don't use it if you don't know the total size of the list
- Doesn't currently support loading items asynchronously. Although support for this may be added in the future
- This directive does some DOM manipulation so it won't work if your Angular app runs in a web worker or if you use Angular Universal
Performance
lazyFor can improve performance by preventing unnecessary content from being rendered to the DOM. This also leads to fewer bindings, which reduces the load on change detection. Using ngFor is usually very fast, but here is a case where it has a noticeable performance impact:
Plunker Performance Demo
Optional Parameters
withHeight
This directive will try to figure out the height of each element and use that number to calculate the amount of spacing above and below the items. If you are having issues with the default behaviour you can specify an explicit height in pixels.
<div *lazyFor="let item of items; withHeight: 40">{{item}}</div>
inContainer
lazyFor needs to know which element is the scrollable container the items will be inside of. By default it will use the parent element but if this is not the right element you can explicitly specify the container.
<div style="overflow: auto" #myContainer>
  <div>
    <div *lazyFor="let item of items; inContainer: myContainer">{{item}}</div>
  </div>
</div>
withTagName
This directive works by creating an empty element above and below the repeated items with a set height. By default these buffer elements will use the same type of tag that lazyFor is on. However, you can specify a custom tag name with this parameter if needed.
Template
<ul>
  <li *lazyFor="let item of items; withTagName: 'div'">{{item}}</li>
</ul>
DOM Output
<ul>
  <div height="..."></div>
  <li></li>
  <li></li>
  <li></li>
  <div height="..."></div>
</ul>
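The spacer arithmetic behind these buffer elements can be sketched in a few lines of plain TypeScript. This is an illustration of the technique, not the directive's actual internals; all names here are made up:

```typescript
// Given the scroll position, compute which items to render and how tall
// the buffer elements above and below them must be.
interface Viewport { scrollTop: number; height: number; }

function visibleRange(itemCount: number, itemHeight: number, view: Viewport) {
  const first = Math.max(0, Math.floor(view.scrollTop / itemHeight));
  const last = Math.min(
    itemCount - 1,
    Math.ceil((view.scrollTop + view.height) / itemHeight) - 1
  );
  return {
    first,                                               // first rendered index
    last,                                                // last rendered index
    topBufferPx: first * itemHeight,                     // height of the top spacer
    bottomBufferPx: (itemCount - 1 - last) * itemHeight, // height of the bottom spacer
  };
}

// Example: 6 items of 10px each in a 30px container scrolled down 20px
// renders items 2..4 with a 20px spacer above and a 10px spacer below.
console.log(visibleRange(6, 10, { scrollTop: 20, height: 30 }));
```

Only the few visible items carry Angular bindings; everything else is represented by the two fixed-height spacers, which is where the change-detection savings come from.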
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
On Thu, 5 Jun 2003, Jason Tishler wrote:
> Elfyn,
>
> On Thu, Jun 05, 2003 at 05:45:40AM +0100, Elfyn McBratney wrote:
> > PS- Peter, `mv {cyg,lib}xml2mod.dll' does the trick.
>
> If maintaining the "cyg" prefix is considered important, then there is
> another solution...

It's not really important to me, but if the DLL is built as cygfoobar
then the init function, IMO, should be initcygfoobar. Also, I can remove
that rename'age in my build scripts now. :-)

> If a Python shared extension module is called "xyzfoo.$SO" (where $SO is
> so, .dll, etc.), then it must export an initialization function called
> "initxyzfoo". Therefore, cygxml2mod.dll must export "initcygxml2mod"
> not "initlibxml2mod":
>
> [...]
>
> void
> #ifdef __CYGWIN__
> initcygxml2mod(void)
> #else
> initlibxml2mod(void)
> #endif /* __CYGWIN__ */
> {
>   ...
> }
>
> And similarly for libxslt.

Thanks Jason!

Elfyn
--
Elfyn McBratney (mailto:spambot@is.ubertales.co.uk)
Systems Administrator
ABCtales.com / Ubertales.co.uk
Custom Actions
Our modules are built on an open configuration architecture, that makes them very configurable. We have developed it in this way because we know how important is for you to have the ability to customize it with new functionalities which suits you needs.
In the example presented below, we are going to show you how to create your custom action.
- Open Visual Studio and create a new project.
- In the Solution Explorer panel, expand References, select Browse and add a new reference to DnnSharp.Common.dll in your /bin folder of the site.
- Add:

using DnnSharp.Common;
using DnnSharp.Common.Actions;
- Implement the IActionImpl interface, which exposes two methods named Execute and Init with the following signatures:

public IActionResult Execute(ActionContext context);
public void Init(StringsDictionary actionTypeSettings, SettingsDictionary actionSettings);
The names are self-explanatory: the Execute method is where you will put all the code needed to implement the desired behavior of the action, while the Init method will be used to initialize the variables needed by that code.
After implementing the IActionImpl interface, it is a good idea to decorate your properties with their respective attributes. Since we are talking about custom actions, you will use the ActionParameter attribute, followed by the properties it needs. The properties that will be used are as follows:
ApplyTokens: This enables your parameters to be automatically tokenized.
IsRequired: This will specify that a parameter is required for your action to function properly. Side note: our code will throw errors if a parameter is empty at runtime; however, if the parameter becomes empty after the tokenization process, the action will continue executing.
RequiredMessage: This is the exception message that will be thrown if a field that is set as required is empty.
IsOutputToken: This attribute should be set to true for the action's output parameters. The config parameter type should be Text and the C# property should be of type string. What this does is clean up square brackets from the value and make sure that no tokenization is applied to it.
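Putting the interface and attributes together, a skeleton action might look like the sketch below. The interface members and attribute properties are the ones named above; the class name, the Message parameter, and the return value are hypothetical illustrations:

```csharp
using DnnSharp.Common;
using DnnSharp.Common.Actions;

public class MyCustomAction : IActionImpl
{
    // Hypothetical input parameter, decorated with the attribute
    // properties described above.
    [ActionParameter(ApplyTokens = true, IsRequired = true,
        RequiredMessage = "Please provide a message")]
    public string Message { get; set; }

    public void Init(StringsDictionary actionTypeSettings, SettingsDictionary actionSettings)
    {
        // Initialize any variables the Execute method will need.
    }

    public IActionResult Execute(ActionContext context)
    {
        // The desired behavior of the action goes here.
        return null;
    }
}
```

The exact IActionResult to return depends on the module hosting the action; check the Config examples in the attached archive for what your action type expects.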
Finally, click Build, and in the /bin/Debug folder of the MYCUSTOMACTION project you will find a dll file named MyCustomAction.dll. Copy this dll to the /bin of your site.
Now that you have created your custom action, you also need to write a Config file for it. This file defines how to bring the parameters from the page all the way to your code.
It is much easier to understand this part if you have the example config open and follow it as it is explained.
The Config file is written in JSON format; more specifically, it can contain configs for multiple actions, so it is a JSON array.
- Inside the square brackets you have to define everything that the action needs to function. Start by opening a pair of curly braces, inside of which you define the Id of the action (which must be unique, and is not necessarily the class name), the Title, and the HelpText. (These last two properties are JSON objects of LocalizedContent type; they support multiple languages as keys followed by the desired text as a value. You can also write a default option.)
- The TypeStr property specifies the class to be used for the action and has the following format: NameSpace.ClassName, DllName. This tells us where to look for your action.
- The Settings property defines various specifics about your action, including the Group property.
- The Group property specifies which group of actions your custom action will appear under, for example the Messages group.
- The Parameters property specifies which parameters from the frontend you want to pass on to the back-end. There are various types of parameters with their respective settings. There is one basic setting that all parameters have: ShowCondition.
- ShowCondition defines a JavaScript condition that shows/hides the parameter. You can access other parameter values of the element using the itemParameters variable. Example: itemParameters['<parameter name>']
Additionally, the Grid setting Columns is essentially an array of other types of parameters.
- The Columns setting of the Grid parameter, as specified above, is an array. Moreover, it is an array that contains parameter objects; basically, each column field type is specified in the same way as a parameter. There are a few special settings that can be set on all parameter types when they are part of the Columns setting on the Grid parameter type.
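Tying the pieces together, a minimal config entry might look like the sketch below. The top-level keys (Id, Title, HelpText, TypeStr, Settings, Group, Parameters, ShowCondition) are the ones described above; the fields inside the parameter object, such as Name and Type, are illustrative assumptions:

```json
[
  {
    "Id": "MyCustomAction",
    "Title": { "default": "My Custom Action", "en-US": "My Custom Action" },
    "HelpText": { "default": "Demonstrates a custom action." },
    "TypeStr": "MyCompany.Actions.MyCustomAction, MyCustomAction",
    "Settings": { "Group": "Messages" },
    "Parameters": [
      {
        "Name": "Message",
        "Type": "Text",
        "ShowCondition": "itemParameters['Mode'] == 'custom'"
      }
    ]
  }
]
```

Compare this with the configuration file shipped in the attached archive, which is the authoritative reference for your version of the module.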
- The attached zip archive named MYCUSTOMACTION.ZIP contains much more information and a configuration file for your action. You can use it as a start for creating your own action.
Also, copy the Config folder contained in the archive to the /DesktopModules/DnnSharp/Common folder in your site folder.
Refresh your admin page and you should see the new Group containing your example action.
In this example you should select a value for the dropdown when you configure the action on the site, as some of our modules (such as DnnApiEndpoint) require a value for the dropdowns to be selected.
Chapter 12. Support for Object-Oriented Programming
1 Chapter 12 Support for Object-Oriented Programming
2 Chapter 12 Topics
- Introduction
- Object-Oriented Programming
- Design Issues for Object-Oriented Languages
- Support for Object-Oriented Programming in Smalltalk
- Support for Object-Oriented Programming in C++
- Support for Object-Oriented Programming in Java
- Support for Object-Oriented Programming in C#
- The Object Model of JavaScript
- Implementation of Object-Oriented Constructs
Copyright 2012 Addison-Wesley. All rights reserved.
3 Introduction
- Many object-oriented programming (OOP) languages
  - Some support procedural and data-oriented programming (e.g., Ada and C++)
  - Some support functional programming (e.g., CLOS)
  - Newer languages do not support other paradigms but use their imperative structures (e.g., Java and C#)
  - Some are pure OOP languages (e.g., Smalltalk)
4 Object-Oriented Programming
- Abstract data types
- Inheritance
  - Inheritance is the central theme in OOP and languages that support it
- Polymorphism
5 Inheritance
- Productivity increases can come from reuse
  - ADTs are difficult to reuse
  - All ADTs are independent and at the same level
- Inheritance allows new classes to be defined in terms of existing ones, i.e., by allowing them to inherit common parts
- Inheritance addresses both of the above concerns: reuse ADTs after minor changes and define classes in a hierarchy
6 Object-Oriented Concepts
- ADTs are called classes
- Class instances are called objects
- A class that inherits is a derived class or a subclass
- The class from which another class inherits is a parent class or superclass
- Subprograms that define operations on objects are called methods
12 Dynamic Binding Concepts
- An abstract method is one that does not include a definition (it only defines a protocol)
- An abstract class is one that includes at least one virtual method
- An abstract class cannot be instantiated
13 Design Issues for OOP Languages
- The Exclusivity of Objects
- Are Subclasses Subtypes?
- Single and Multiple Inheritance
- Object Allocation and Deallocation
- Dynamic and Static Binding
- Nested Classes
- Initialization of Objects
16 Type Checking and Polymorphism
- Polymorphism may require dynamic type checking of parameters and the return value
  - Dynamic type checking is costly and delays error detection
- If overriding methods are restricted to having the same parameter types and return type, the checking can be static
17 Single and Multiple Inheritance
- Multiple inheritance allows a new class to inherit from two or more classes
- Disadvantages of multiple inheritance:
  - Language and implementation complexity (in part due to name collisions)
  - Potential inefficiency: dynamic binding costs more with multiple inheritance (but not much)
- Advantage:
  - Sometimes it is extremely convenient and valuable
18 Allocation and De-Allocation of Objects
- From where are objects allocated?
- If they behave like (copy values)
- Is deallocation explicit or implicit?
19 Dynamic and Static Binding
- Should all binding of messages to methods be dynamic?
  - If none are, you lose the advantages of dynamic binding
  - If all are, it is inefficient
- Allow the user to specify
21 Support for OOP in Smalltalk
- De-allocation is implicit
- Example: 3 factorial + 4 factorial between: 10 and: 100
  (3*2) + (4*3*2) < 100
  true
22 Support for OOP in Smalltalk (continued) Type Checking and Polymorphism Copyright 2012 Addison-Wesley. All rights reserved. 1-22
23 Support for OOP in Smalltalk (continued) Inheritance A Smalltalk subclass inherits all of the instance variables, instance methods, and class methods of its superclass All subclasses are subtypes (nothing can be hidden) All inheritance is implementation inheritance No multiple inheritance Copyright 2012 Addison-Wesley. All rights reserved. 1-23 Greatest impact: advancement of OOP Copyright 2012 Addison-Wesley. All rights reserved. 1-24
25 Support for OOP in C++
- General Characteristics:
  - Evolved from SIMULA 67
  - Most widely used OOP language
  - Mixed typing system
  - Constructors and destructors
  - Elaborate access controls to class entities
26 Support for OOP in C++ (continued)
- Inheritance
  - A class need not be the subclass of any class
  - Access controls for members are:
    - Private (visible only in the class and friends) (disallows subclasses from being subtypes)
    - Public (visible in subclasses and clients)
    - Protected (visible in the class and in subclasses, but not clients)
27 Support for OOP in C++ (continued)
- In addition, the subclassing process can be declared with access controls (private or public), which define potential changes in access by subclasses
  - Private derivation: inherited public and protected members are private in the subclasses
  - Public derivation: public and protected members are also public and protected in subclasses
28 Inheritance Example in C++

class base_class {
  private:
    int a;
    float x;
  protected:
    int b;
    float y;
  public:
    int c;
    float z;
};

class subclass_1 : public base_class { };
// In this one, b and y are protected and
// c and z are public

class subclass_2 : private base_class { };
// In this one, b, y, c, and z are private,
// and no derived class has access to any
// member of base_class
29 Reexportation in C++
- A member that is not accessible in a subclass (because of private derivation) can be declared to be visible there using the scope resolution operator (::), e.g.,

class subclass_3 : private base_class {
  base_class::c;
}
30 Reexportation (continued)
- One motivation for using private derivation:
  - A class provides members that must be visible, so they are defined to be public members; a derived class adds some new members, but does not want its clients to see the members of the parent class, even though they had to be public in the parent class definition
31 Support for OOP in C++ (continued)
- Multiple inheritance is supported
  - If there are two inherited members with the same name, they can both be referenced using the scope resolution operator
32 Support for OOP in C++ (continued)
- Dynamic Binding
  - A method can be defined to be virtual, which means that it can be called through polymorphic variables and dynamically bound to messages
  - A pure virtual function has no definition at all
  - A class that has at least one pure virtual function is an abstract class
33 Support for OOP in C++ (continued)
- Evaluation
34 Support for OOP in Java
- Because of its close relationship to C++, focus is on the differences from that language
- General Characteristics
  - All data are objects except the primitive types
  - All primitive types have wrapper classes that store one data value
  - All objects are heap-dynamic, are referenced through reference variables, and most are allocated with new
  - A finalize method is implicitly called when the garbage collector is about to reclaim the storage occupied by the object
35 Support for OOP in Java (continued)
- Inheritance
  - Single inheritance supported only, but there is an abstract class category that provides some of the benefits of multiple inheritance (interface)
  - An interface can include only method declarations and named constants, e.g.,

public interface Comparable {
  public int compareTo(Object b);
}

  - Methods can be final (cannot be overridden)
36 Support for OOP in Java (continued)
- Dynamic Binding
  - In Java, all messages are dynamically bound to methods, unless the method is final (i.e., it cannot be overridden, therefore dynamic binding serves no purpose)
  - Static binding is also used if the method is static or private, both of which disallow overriding
37 Support for OOP in Java (continued)
- Several varieties of nested classes
- All can be hidden from all classes in their package, except for the nesting class
- Nested classes can be anonymous
- A local nested class is defined in a method of its nesting class
  - No access specifier is used

mybutton.addActionListener(new ActionListener() {
  public void actionPerformed(ActionEvent e) {
    // do stuff here...
  }
});
38 Support for OOP in Java (continued)
- Evaluation
  - Design decisions to support OOP are similar to C++
  - No support for procedural programming
  - No parentless classes
  - Dynamic binding is used as the normal way to bind method calls to method definitions
  - Uses interfaces to provide a simple form of support for multiple inheritance
39 Support for OOP in C#
- General characteristics
  - Support for OOP similar to Java
  - Includes both classes and structs
  - Classes are similar to Java's classes
  - structs are less powerful stack-dynamic constructs
40 Support for OOP in C# (continued)
- Inheritance
  - Uses the syntax of C++ for defining classes
  - A method inherited from the parent class can be replaced in the derived class by marking its definition with new
  - The parent class version can still be called explicitly with the prefix base: base.draw()
41 Support for OOP in C#
- Dynamic binding
  - To allow dynamic binding of method calls to methods:
    - The base class method is marked virtual
    - The corresponding methods in derived classes are marked override
  - Abstract methods are marked abstract and must be implemented in all subclasses
- All C# classes are ultimately derived from a single root class, Object
42 Support for OOP in C#
- Evaluation
  - C# is the most recently designed C-based OO language
  - The differences between C#'s and Java's support for OOP are relatively minor
43 Support for OOP in Ruby
- General Characteristics
  - Everything is an object
  - All computation is through message passing
  - Class definitions are executable, allowing secondary definitions to add members to existing definitions
  - Method definitions are also executable
  - All variables are type-less references to objects
- Access control is different for data and methods
  - It is private for all data and cannot be changed
  - Methods can be either public, private, or protected
  - Method access is checked at runtime
- Getters and setters can be defined by shortcuts
44 Support for OOP in Ruby (continued)
- Inheritance
  - Access control to inherited methods can be different than in the parent class
  - Subclasses are not necessarily subtypes
- Dynamic Binding
  - All variables are typeless and polymorphic
- Evaluation
  - Does not support abstract classes
  - Does not fully support multiple inheritance
  - Access controls are weaker than those of other languages that support OOP
45 The Object Model of JavaScript
- General Characteristics of JavaScript
  - Little in common with Java
  - Similar to Java only in that it uses a similar syntax
  - Dynamic typing
  - No classes or inheritance or polymorphism
  - Variables can reference objects or can directly access primitive values
46 The Object Model of JavaScript
- JavaScript objects
  - An object has a collection of properties, which are either data properties or method properties
  - Appear as hashes, both internally and externally
    - A list of property/value pairs
  - Properties can be added or deleted dynamically
  - A bare object can be created with new and a call to the constructor for Object:
    var my_object = new Object();
  - References to properties are with dot notation
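The object behavior on this slide — bare objects, dynamic property creation and deletion, dot notation — can be seen in a few lines of JavaScript (the property names are illustrative):

```javascript
var my_object = new Object();      // a bare object
my_object.name = "spot";           // data property added dynamically
my_object.speak = function () {    // method property added dynamically
    return this.name + " says woof";
};
console.log(my_object.speak());    // prints "spot says woof"
delete my_object.name;             // properties can be deleted too
console.log("name" in my_object);  // prints false
```

Internally the object really is just the hash of property/value pairs the slide describes, which is why members can come and go at run time.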
47 JavaScript Evaluation
- Effective at what it is designed to be: a scripting language
- Inadequate for large scale development
  - No encapsulation capability of classes, so large programs cannot be effectively organized
  - No inheritance, so reuse will be very difficult
48 Implementing OO Constructs
- Two interesting and challenging parts:
  - Storage structures for instance variables
  - Dynamic binding of messages to methods
49 Instance Data Storage
- Class instance records (CIRs) store the state of an object
  - Static (built at compile time)
- If a class has a parent, the subclass instance variables are added to the parent CIR
- Because the CIR is static, access to all instance variables is done as it is in records
  - Efficient
50 Dynamic Binding of Method Calls
- Methods in a class that are statically bound need not be involved in the CIR; methods that will be dynamically bound must have entries in the CIR
- Calls to dynamically bound methods can be connected to the corresponding code through a pointer in the CIR
- The storage structure is sometimes called a virtual method table (vtable)
- Method calls can be represented as offsets from the beginning of the vtable
51 CIR with single inheritance (1)

public class A {
  public int a, b;
  public void draw() { }
  public void area() { }
}

public class B extends A {
  public int c, d;
  public void draw() { }
  public void sift() { }
}
52 CIR with single inheritance (2)
[figure: the CIR and vtable of class B]
53 Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-53
54 Reflection (continued) Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-54
55 Reflection in Java Limited support, from java.lang.Class Java runtime instantiates an instance of Class for each object in the program The getClass method of Object returns the Class object of an object float[] totals = new float[100]; Class fltlist = totals.getClass(); Class stg = hello.getClass(); If there is no object, use the .class field Class stg = String.class; Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-55
56 Reflection in Java (continued) Class has four useful methods: getMethod searches for a specific public method of a class getMethods returns an array of all public methods of a class getDeclaredMethod searches for a specific method declared in a class getDeclaredMethods returns an array of all methods declared in a class Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-56
57 Reflection in Java (continued) The Method class defines the invoke method, which is used to execute the method found by getMethod Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-57
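Tying slides 55-57 together, a minimal runnable sketch: obtain a Class object with getClass (or the .class field), look up a method with getMethod, and execute it with Method.invoke. Integer.sum is used here only as a convenient public static method to reflect over; it is not from the slides:

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    // Look up Integer.sum(int, int) at run time and call it through reflection.
    public static int addViaReflection(int a, int b) throws Exception {
        Method sum = Integer.class.getMethod("sum", int.class, int.class);
        return (Integer) sum.invoke(null, a, b); // null receiver: sum is static
    }

    public static void main(String[] args) throws Exception {
        String s = "hello";
        System.out.println(s.getClass().getName()); // java.lang.String
        System.out.println(addViaReflection(2, 3)); // 5
    }
}
```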
58 Downsides of Reflection Performance costs Exposes private fields and methods Voids the advantages of early type checking Some reflection code may not run under a security manager, making code nonportable Copyright 2017 Pearson Education, Ltd. All rights reserved. 1-58
59 Summary C++ is a hybrid language: it supports both procedural and object-oriented programming Java is not a hybrid language like C++; it supports only OOP C# is based on C++ and Java Ruby is a relatively recent pure OOP language; provides some new ideas in support for OOP Implementing OOP involves some new data structures Reflection is part of Java and C#, as well as most dynamically typed languages Copyright 2012 Addison-Wesley. All rights reserved. 1-59
60 Class A { static int Sv = 0; int Nv = 0; public static void Sm() {} public void Nm() {} } A.Sm(); A v1 = new A(); v1.Nm(); A v2 = new A(); v1.Nv = 1; (Diagram: v1 and v2 each hold their own copy of Nv; Sv, Sm(), and Nm() are stored once with the class.) Copyright 2012 Addison-Wesley. All rights reserved. 1-60
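Slide 60's point can be checked directly: each instance record holds its own copy of the instance variable, while the static variable is stored once with the class. The slide's class is renamed A2 here only to keep the sketch self-contained:

```java
class A2 {
    static int sv = 0; // one copy, stored with the class
    int nv = 0;        // one copy per instance (in each instance record)
}

public class StorageDemo {
    public static void main(String[] args) {
        A2 v1 = new A2();
        A2 v2 = new A2();
        v1.nv = 1;                 // changes only v1's instance record
        A2.sv = 7;                 // visible through every instance
        System.out.println(v2.nv); // 0
        System.out.println(A2.sv); // 7
    }
}
```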
Course Title: Software Development Unit: Customer Service Content Standard(s) and Depth of 1. Analyze customer software needs and system requirements to design an information technology-based project plan. | http://docplayer.net/29206105-Chapter-12-support-for-object-oriented-programming-isbn.html | CC-MAIN-2019-04 | en | refinedweb |
C# 4.0 dynamic Keyword
This article is about the "dynamic" keyword added in C# 4.0. It is similar to the "var" keyword in C# 3.0: it is a static type that acts as a placeholder for an object or field whose type is not known until runtime. The type will be assigned only at runtime.
About dynamic Keyword
1. "dynamic" is a new keyword added in C# 4.0.
2. It is similar to the "var" keyword in C# 3.0.
3. It is a static type.
4. It acts as a placeholder for an object or field whose type is not known until runtime.
5. The type is assigned only at runtime.
dynamic variable declaration
dynamic regNo = 10;
dynamic sname = "Sanjay";
Note: The datatype will be assigned to the variable at runtime, based on the value stored in the variable.
The GetType() method is used to find out a dynamic value's datatype.
dynamic value Conversion
Converting dynamic values to other types is easy.
Example:
dynamic regNo = 10; // dynamic type
int intrNo = regNo; // Converting to integer type
dynamic sname = "Sanjay"; // dynamic type
string strN = sname; // Converting to String Type
Binding the object at runtime
1. Any object reference can be assigned to an object of dynamic type; later the
dynamic object is used to invoke the method from the class.
Example:
//Define a class called calculator
using System;
class Calculator
{
public int Add(int a, int b)
{
return a + b;
}
static void Main(string[] args)
{
// statically bound
Calculator objC = new Calculator();
Console.WriteLine("Sum = " + objC.Add(10, 50));
// dynamically bound object
//Assign Calculator Class object reference to dynamic object dobjC
dynamic dobjC = new Calculator();
//Invoke the method using dynamic object.
Console.WriteLine("Sum " + dobjC.Add(10, 30));
}
} | http://www.dotnetspider.com/resources/43299-C-dynamic-Keyword.aspx | CC-MAIN-2019-04 | en | refinedweb |